Sample records for three-dimensional (3D) visualization

  1. Experimental Evidence for Improved Neuroimaging Interpretation Using Three-Dimensional Graphic Models

    ERIC Educational Resources Information Center

    Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto

    2012-01-01

    Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more…

  2. [Three-dimensional morphological modeling and visualization of wheat root system].

    PubMed

    Tan, Feng; Tang, Liang; Hu, Jun-Cheng; Jiang, Hai-Yan; Cao, Wei-Xing; Zhu, Yan

    2011-01-01

    Crop three-dimensional (3D) morphological modeling and visualization is an important part of digital plant study. This paper aimed to develop a 3D morphological model of the wheat root system based on the parameters of wheat root morphological features, and to realize the visualization of wheat root growth. Following the framework of visualization technology for wheat root growth, a 3D visualization model of the wheat root axis, comprising a root axis growth model, a branch geometric model, and a root axis curve model, was developed first. Then, by integrating root topology, the corresponding pixel was determined, and the whole wheat root system was three-dimensionally reconstructed using the morphological feature parameters in the root morphological model. Finally, on the OpenGL platform, and by integrating texture mapping, lighting rendering, and collision detection, the 3D visualization of wheat root growth was realized. The model's 3D output of the wheat root system was vivid, and could visualize the root systems of different wheat cultivars under different water regimes and nitrogen application rates. This study lays a technical foundation for the further development of an integral visualization system for the wheat plant.

  3. Integration of Computed Tomography and Three-Dimensional Echocardiography for Hybrid Three-Dimensional Printing in Congenital Heart Disease.

    PubMed

    Gosnell, Jordan; Pietila, Todd; Samuel, Bennett P; Kurup, Harikrishnan K N; Haw, Marcus P; Vettukattil, Joseph J

    2016-12-01

    Three-dimensional (3D) printing is an emerging technology aiding diagnostics, education, and interventional and surgical planning in congenital heart disease (CHD). Three-dimensional prints have been derived from computed tomography, cardiac magnetic resonance, and 3D echocardiography. However, individually these imaging modalities may not provide adequate visualization of complex CHD. Integrating the strengths of two or more imaging modalities has the potential to enhance visualization of cardiac pathomorphology. We describe the feasibility of hybrid 3D printing from two imaging modalities in a patient with congenitally corrected transposition of the great arteries (L-TGA). Hybrid 3D printing may be useful as an additional tool for cardiologists and cardiothoracic surgeons in planning interventions in children and adults with CHD.

  4. Three-dimensional versus two-dimensional ultrasound for assessing levonorgestrel intrauterine device location: A pilot study.

    PubMed

    Andrade, Carla Maria Araujo; Araujo Júnior, Edward; Torloni, Maria Regina; Moron, Antonio Fernandes; Guazzelli, Cristina Aparecida Falbo

    2016-02-01

    To compare the rates of success of two-dimensional (2D) and three-dimensional (3D) sonographic (US) examinations in locating and adequately visualizing levonorgestrel intrauterine devices (IUDs) and to explore factors associated with the unsuccessful viewing on 2D US. Transvaginal 2D and 3D US examinations were performed on all patients 1 month after insertion of levonorgestrel IUDs. The devices were considered adequately visualized on 2D US if both the vertical (shadow, upper and lower extremities) and the horizontal (two echogenic lines) shafts were identified. 3D volumes were also captured to assess the location of levonorgestrel IUDs on 3D US. Thirty women were included. The rates of adequate device visualization were 40% on 2D US (95% confidence interval [CI], 24.6; 57.7) and 100% on 3D US (95% CI, 88.6; 100.0). The device was not adequately visualized in all six women who had a retroflexed uterus, but it was adequately visualized in 12 of the 24 women (50%) who had a nonretroflexed uterus (95% CI, -68.6; -6.8). We found that 3D US is better than 2D US for locating and adequately visualizing levonorgestrel IUDs. Other well-designed studies with adequate power should be conducted to confirm this finding. © 2015 Wiley Periodicals, Inc.
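
The 95% confidence intervals reported above appear consistent with the Wilson score interval for a binomial proportion. As a quick check (assuming that method), the 2D figure can be reproduced:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 12 of 30 devices adequately visualized on 2D US
lo, hi = wilson_ci(12, 30)
print(f"40% (95% CI {lo:.1%}-{hi:.1%})")  # 24.6% .. 57.7%
```

This matches the quoted 2D interval (24.6; 57.7), which suggests the authors used the Wilson (score) method rather than the normal approximation.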

  5. Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data

    PubMed Central

    Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.

    2005-01-01

    The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787

  6. Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis.

    PubMed

    Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui

    2014-07-11

    Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.

  7. How Students Solve Problems in Spatial Geometry while Using a Software Application for Visualizing 3D Geometric Objects

    ERIC Educational Resources Information Center

    Widder, Mirela; Gorsky, Paul

    2013-01-01

    In schools, learning spatial geometry is usually dependent upon a student's ability to visualize three dimensional geometric configurations from two dimensional drawings. Such a process, however, often creates visual obstacles which are unique to spatial geometry. Useful software programs which realistically depict three dimensional geometric…

  8. Experimental evidence for improved neuroimaging interpretation using three-dimensional graphic models.

    PubMed

    Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto

    2012-01-01

    Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more precisely than classical cross-sectional images based on a two dimensional (2D) approach. Eighty participants were assigned to each experimental condition: 2D cross-sectional visualization vs. 3D volumetric visualization. Both groups were matched for age, gender, visual-spatial ability, and previous knowledge of neuroanatomy. Accuracy in identifying brain structures, execution time, and level of confidence in the response were taken as outcome measures. Moreover, interactive effects between the experimental conditions (2D vs. 3D) and factors such as level of competence (novice vs. expert), image modality (morphological and functional), and difficulty of the structures were analyzed. The percentage of correct answers (hit rate) and level of confidence in responses were significantly higher in the 3D visualization condition than in the 2D. In addition, the response time was significantly lower for the 3D visualization condition in comparison with the 2D. The interaction between the experimental condition (2D vs. 3D) and difficulty was significant, and the 3D condition facilitated the location of difficult images more than the 2D condition. 3D volumetric visualization helps to identify brain structures such as the hippocampus and amygdala, more accurately and rapidly than conventional 2D visualization. This paper discusses the implications of these results with regards to the learning process involved in neuroimaging interpretation. Copyright © 2012 American Association of Anatomists.

  9. The role of three-dimensional visualization in robotics-assisted cardiac surgery

    NASA Astrophysics Data System (ADS)

    Currie, Maria; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W. A.; Patel, Rajni; Peters, Terry; Kiaii, Bob

    2012-02-01

    Objectives: The purpose of this study was to determine the effect of three-dimensional (3D) versus two-dimensional (2D) visualization on the amount of force applied to mitral valve tissue during robotics-assisted mitral valve annuloplasty, and the time to perform the procedure in an ex vivo animal model. In addition, we examined whether these effects are consistent between novices and experts in robotics-assisted cardiac surgery. Methods: A cardiac surgery test-bed was constructed to measure forces applied by the da Vinci surgical system (Intuitive Surgical, Sunnyvale, CA) during mitral valve annuloplasty. Both experts and novices completed robotics-assisted mitral valve annuloplasty with 2D and 3D visualization. Results: The mean time for both experts and novices to suture the mitral valve annulus and to tie sutures using 3D visualization was significantly less than that required using 2D visualization (p < 0.01). However, there was no significant difference in the maximum force applied by novices to the mitral valve during suturing (p = 0.3) and suture tying (p = 0.6) using either 2D or 3D visualization. Conclusion: This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery. Keywords: robotics-assisted surgery, visualization, cardiac surgery

  10. Development of a system for acquiring, reconstructing, and visualizing three-dimensional ultrasonic angiograms

    NASA Astrophysics Data System (ADS)

    Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.

    1995-04-01

    We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.
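
The core reconstruction step the authors describe (placing position-tracked 2D image pixels into a 3D voxel grid) can be sketched as follows. This is an illustrative nearest-neighbour version under assumed conventions (4x4 image-to-world pose matrices, isotropic voxels), not the authors' software:

```python
import numpy as np

def reconstruct_volume(frames, poses, vol_shape, voxel_mm):
    """Bin tracked 2D ultrasound pixels into a 3D voxel grid.

    frames : list of 2D arrays (e.g. power Doppler intensities)
    poses  : list of 4x4 image-to-world transforms from the position sensor
    """
    vol = np.zeros(vol_shape)
    count = np.zeros(vol_shape)
    for img, T in zip(frames, poses):
        h, w = img.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Homogeneous pixel coordinates; each image lies in its local z = 0 plane
        pts = np.stack([u.ravel(), v.ravel(),
                        np.zeros(u.size), np.ones(u.size)])
        world = (T @ pts)[:3]                          # world coordinates, mm
        idx = np.round(world / voxel_mm).astype(int)   # nearest voxel
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)[:, None]), axis=0)
        np.add.at(vol, tuple(idx[:, ok]), img.ravel()[ok])
        np.add.at(count, tuple(idx[:, ok]), 1)
    # Average overlapping contributions; untouched voxels stay zero
    return np.where(count > 0, vol / np.maximum(count, 1), 0.0)

# Hypothetical check: a single all-ones frame with an identity pose
vol = reconstruct_volume([np.ones((4, 4))], [np.eye(4)], (8, 8, 8), 1.0)
```

A production system would interpolate across voxels and fill gaps between slices; the binning above only conveys the idea.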

  11. 3D visualization of unsteady 2D airplane wake vortices

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Zheng, Z. C.

    1994-01-01

    Air flowing around the wing tips of an airplane forms horizontal tornado-like vortices that can be dangerous to following aircraft. The dynamics of such vortices, including ground and atmospheric effects, can be predicted by numerical simulation, allowing the safety and capacity of airports to be improved. In this paper, we introduce three-dimensional techniques for visualizing time-dependent, two-dimensional wake vortex computations, and the hazard strength of such vortices near the ground. We describe a vortex core tracing algorithm and a local tiling method to visualize the vortex evolution. The tiling method converts time-dependent, two-dimensional vortex cores into three-dimensional vortex tubes. Finally, a novel approach calculates the induced rolling moment on the following airplane at each grid point within a region near the vortex tubes and thus allows three-dimensional visualization of the hazard strength of the vortices. We also suggest ways of combining multiple visualization methods to present more information simultaneously.
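
The tiling idea (converting time-dependent, two-dimensional vortex cores into three-dimensional tubes) can be illustrated by mapping simulation time onto the third axis and sweeping a polygonal ring along the core path. This is a hypothetical geometric sketch, not the paper's algorithm:

```python
import numpy as np

def tile_vortex_tube(core_xy, radius=0.5, n_sides=12, dt=1.0):
    """Stack time-dependent 2D vortex-core positions into a 3D tube.

    core_xy : (T, 2) array of core positions, one per timestep.
    Returns a (T, n_sides, 3) array of tube surface vertices, with
    simulation time mapped onto the third (axial) coordinate.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_sides, endpoint=False)
    ring = np.stack([np.cos(theta), np.sin(theta)], axis=1) * radius
    verts = np.zeros((len(core_xy), n_sides, 3))
    for t, (cx, cy) in enumerate(core_xy):
        verts[t, :, 0] = cx + ring[:, 0]
        verts[t, :, 1] = cy + ring[:, 1]
        verts[t, :, 2] = t * dt   # time becomes the tube's axial coordinate
    return verts

# Two timesteps of a core drifting in x and y
verts = tile_vortex_tube(np.array([[0.0, 0.0], [1.0, 0.5]]), dt=0.1)
```

Adjacent rings would then be stitched into quadrilaterals to render the tube surface.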

  12. Web-based three-dimensional geo-referenced visualization

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Gong, Jianhua; Wang, Freeman

    1999-12-01

    This paper addresses several approaches to implementing web-based, three-dimensional (3-D), geo-referenced visualization. The discussion focuses on the relationship between multi-dimensional data sets and applications, as well as the thick/thin client and heavy/light server structure. Two models of data sets are addressed in this paper. One is the use of traditional 3-D data format such as 3-D Studio Max, Open Inventor 2.0, Vis5D and OBJ. The other is modelled by a web-based language such as VRML. Also, traditional languages such as C and C++, as well as web-based programming tools such as Java, Java3D and ActiveX, can be used for developing applications. The strengths and weaknesses of each approach are elaborated. Four practical solutions for using VRML and Java, Java and Java3D, VRML and ActiveX and Java wrapper classes (Java and C/C++), to develop applications are presented for web-based, real-time interactive and explorative visualization.

  13. A study to evaluate the reliability of using two-dimensional photographs, three-dimensional images, and stereoscopic projected three-dimensional images for patient assessment.

    PubMed

    Zhu, S; Yang, Y; Khambay, B

    2017-03-01

    Clinicians are accustomed to viewing conventional two-dimensional (2D) photographs and assume that viewing three-dimensional (3D) images is similar. Facial images captured in 3D are not viewed in true 3D; this may alter clinical judgement. The aim of this study was to evaluate the reliability of using conventional photographs, 3D images, and stereoscopic projected 3D images to rate the severity of the deformity in pre-surgical class III patients. Forty adult patients were recruited. Eight raters assessed facial height, symmetry, and profile using the three different viewing media and a 100-mm visual analogue scale (VAS), and appraised the most informative viewing medium. Inter-rater consistency was above good for all three media. Intra-rater reliability was not significantly different for rating facial height using 2D (P=0.704), symmetry using 3D (P=0.056), and profile using projected 3D (P=0.749). Using projected 3D for rating profile and symmetry resulted in significantly lower median VAS scores than either 3D or 2D images (all P<0.05). For 75% of the raters, stereoscopic 3D projection was the preferred method for rating. The reliability of assessing specific characteristics was dependent on the viewing medium. Clinicians should be aware that the visual information provided when viewing 3D images is not the same as when viewing 2D photographs, especially for facial depth, and this may change the clinical impression. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  14. Three-Dimensional Interpretation of Sculptural Heritage with Digital and Tangible 3D Printed Replicas

    ERIC Educational Resources Information Center

    Saorin, José Luis; Carbonell-Carrera, Carlos; Cantero, Jorge de la Torre; Meier, Cecile; Aleman, Drago Diaz

    2017-01-01

    Spatial interpretation features in educational curricula as a skill to be acquired. The visualization and interpretation of three-dimensional objects on tactile devices, together with the possibility of digital manufacturing with 3D printers, offer an opportunity to include replicas of sculptures in teaching and thus facilitate the 3D interpretation of…

  15. Comparing the Microsoft Kinect to a traditional mouse for adjusting the viewed tissue densities of three-dimensional anatomical structures

    NASA Astrophysics Data System (ADS)

    Juhnke, Bethany; Berron, Monica; Philip, Adriana; Williams, Jordan; Holub, Joseph; Winer, Eliot

    2013-03-01

    Advancements in medical image visualization in recent years have enabled three-dimensional (3D) medical images to be volume-rendered from magnetic resonance imaging (MRI) and computed tomography (CT) scans. Medical data is crucial for patient diagnosis and medical education, and analyzing these three-dimensional models rather than two-dimensional (2D) slices would enable more efficient analysis by surgeons and physicians, especially non-radiologists. An interaction device that is intuitive, robust, and easily learned is necessary to integrate 3D modeling software into the medical community. The keyboard and mouse configuration does not readily manipulate 3D models because these traditional interface devices function within two degrees of freedom, not the six degrees of freedom presented in three dimensions. Using a familiar, commercial-off-the-shelf (COTS) device for interaction would minimize training time and enable maximum usability with 3D medical images. Multiple techniques are available to manipulate 3D medical images and provide doctors more innovative ways of visualizing patient data. One such example is windowing. Windowing is used to adjust the viewed tissue density of digital medical data. A software platform available at the Virtual Reality Applications Center (VRAC), named Isis, was used to visualize and interact with the 3D representations of medical data. In this paper, we present the methodology and results of a user study that examined the usability of windowing 3D medical imaging using a Kinect™ device compared to a traditional mouse.
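
Windowing, as described above, maps a chosen range of tissue densities (Hounsfield units) onto the display range. A minimal NumPy sketch of the standard center/width mapping; the window values below are typical soft-tissue settings, not taken from the paper:

```python
import numpy as np

def apply_window(hu, center, width):
    """Map Hounsfield-unit values to the 0-255 display range using a window."""
    lo = center - width / 2.0
    scaled = (hu - lo) / width            # 0..1 inside the window
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# A typical soft-tissue window (center 40 HU, width 400 HU); values illustrative
pixels = np.array([-1000, -160, 40, 240, 1000])   # air .. dense bone
disp = apply_window(pixels, center=40, width=400)
```

Densities below the window clamp to black and those above to white, which is why adjusting center and width interactively reveals different tissue types.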

  16. Spatial Reasoning with External Visualizations: What Matters Is What You See, Not whether You Interact

    ERIC Educational Resources Information Center

    Keehner, Madeleine; Hegarty, Mary; Cohen, Cheryl; Khooshabeh, Peter; Montello, Daniel R.

    2008-01-01

    Three experiments examined the effects of interactive visualizations and spatial abilities on a task requiring participants to infer and draw cross sections of a three-dimensional (3D) object. The experiments manipulated whether participants could interactively control a virtual 3D visualization of the object while performing the task, and…

  17. Effects of emotional valence and three-dimensionality of visual stimuli on brain activation: an fMRI study.

    PubMed

    Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro

    2013-01-01

    Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Given the ecological potential of techniques such as virtual reality (VR), it is important to inspect whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. Therefore, the focus was on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant, and neutral) and visualization types (2D, 3D), although main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under an ROI-analysis approach. The results show increased brain activation for 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions related to emotional processing, in addition to visual processing regions. This study has the potential to clarify the brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.

  18. Three-Dimensional Display Technologies for Anatomical Education: A Literature Review

    ERIC Educational Resources Information Center

    Hackett, Matthew; Proctor, Michael

    2016-01-01

    Anatomy is a foundational component of biological sciences and medical education and is important for a variety of clinical tasks. To augment current curriculum and improve students' spatial knowledge of anatomy, many educators, anatomists, and researchers use three-dimensional (3D) visualization technologies. This article reviews 3D display…

  19. A Web-based Visualization System for Three Dimensional Geological Model using Open GIS

    NASA Astrophysics Data System (ADS)

    Nemoto, T.; Masumoto, S.; Nonogaki, S.

    2017-12-01

    A three-dimensional geological model is important information in various fields such as environmental assessment, urban planning, resource development, waste management, and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer maps horizontal cross sections of the 3D geological model and a topographic map; GRASS provides the core components for management, analysis, and image processing of the geological model. Online access to GRASS functions is enabled through PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model; these images are delivered via the WMS (Web Map Service) and WPS OGC standards. Horizontal cross sections are overlaid on the topographic map, and a vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram from various angles by mouse operation. This uses WebGL, a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software. The geological boundary surfaces can be downloaded to incorporate the geologic structure into CAD designs and models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
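
Cross-section images in a system like this are fetched through standard WMS GetMap parameters. A sketch of building such a request; the endpoint and layer name are hypothetical, but the parameter names follow the WMS 1.1.1 standard:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, size=(512, 512), srs="EPSG:4326"):
    """Build a WMS 1.1.1 GetMap request URL for one map image."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "", "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),     # minx,miny,maxx,maxy
        "WIDTH": size[0], "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# Endpoint and layer name are invented for illustration
url = wms_getmap_url("https://example.org/cgi-bin/mapserv",
                     "geology_section_z100", (139.5, 35.5, 140.0, 36.0))
```

A client would issue one such request per horizontal slice and overlay the returned PNG on the topographic base map.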

  20. Dynamic three-dimensional display of common congenital cardiac defects from reconstruction of two-dimensional echocardiographic images.

    PubMed

    Hsieh, K S; Lin, C C; Liu, W S; Chen, F L

    1996-01-01

    Two-dimensional echocardiography has long been a standard diagnostic modality for congenital heart disease. Attempts at three-dimensional reconstruction using two-dimensional echocardiographic images to visualize the stereotypic structure of cardiac lesions have been successful only recently. So far, very few studies have displayed the three-dimensional anatomy of the heart through two-dimensional image acquisition because of the complex procedures involved. This study introduces a recently developed image acquisition and processing system for dynamic three-dimensional visualization of various congenital cardiac lesions. From December 1994 to April 1995, 35 cases were selected in the Echo Laboratory here from about 3000 Echo examinations completed. Each image was acquired on-line with a specially designed high-resolution image grabber using EKG and respiratory gating. Off-line image processing, using a window-architectured interactive software package, includes conversion of 2D echocardiographic pixels to 3D "voxels" with transformation from a rotatory to an orthogonal axial system, interpolation, extraction of the region of interest, segmentation, shading, and, finally, 3D rendering. The three-dimensional anatomy of various congenital cardiac defects was shown, including four cases with ventricular septal defects, two cases with atrial septal defects, and two cases with aortic stenosis. Dynamic reconstruction of a "beating heart" was recorded on video tape with a video interface. The potential application of 3D display reconstructed from 2D echocardiographic images for the diagnosis of various congenital heart defects has been shown. The 3D display was able to improve the diagnostic ability of echocardiography, and clear-cut display of the various congenital cardiac defects and valvular stenosis could be demonstrated. Refinement of current techniques will expand the future application of 3D display of conventional 2D images.

  1. Real-time catheter localization and visualization using three-dimensional echocardiography

    NASA Astrophysics Data System (ADS)

    Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil

    2017-03-01

    Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method, called Delay and Standard Deviation (DASD) beamforming, to 3D in order to enhance specular reflections. The beamformed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue image and the DASD beamformed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.

  2. Software applications for three-dimensional visualization of forest landscapes -- A case study demonstrating the use of Visual Nature Studio (VNS) in visualizing fire spread in forest landscapes

    Treesearch

    Brian J. Williams; Bo Song; Chou Chiao-Ying; Thomas M. Williams; John Hom

    2010-01-01

    Three-dimensional (3D) visualization is a useful tool that depicts virtual forest landscapes on a computer. Previous studies in visualization have required high-end computer hardware and specialized technical skills. A virtual forest landscape can be used to show the different effects of disturbances and management scenarios on a computer, which allows observation of forest...

  3. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography.

    PubMed

    Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Ake; Winter, Reidar

    2009-08-25

    Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. To date, only sparse data are available comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE), using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively), without any significant bias (-0.5 +/- 3.7% and -0.2 +/- 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP, and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Visual estimation of LVEF using both 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards smaller variability using TP in comparison to 2D; this was, however, not statistically significant.
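
The agreement statistics reported in studies like this (Pearson r plus a mean-difference bias and its spread, as in a Bland-Altman analysis) are straightforward to compute for any paired EF readings. A sketch with invented values, not the study's data:

```python
import numpy as np

def agreement(reader_ef, reference_ef):
    """Pearson r plus Bland-Altman-style bias (mean difference) and its SD."""
    reader_ef = np.asarray(reader_ef, dtype=float)
    reference_ef = np.asarray(reference_ef, dtype=float)
    r = np.corrcoef(reader_ef, reference_ef)[0, 1]
    diff = reader_ef - reference_ef
    return r, diff.mean(), diff.std(ddof=1)

# Invented eyeballed-vs-RT3DE EF pairs (percent), purely illustrative
eyeball = [55, 60, 35, 45, 50, 65, 30, 40]
rt3de = [54, 62, 36, 44, 52, 64, 31, 41]
r, bias, sd = agreement(eyeball, rt3de)
```

A high r with a bias near zero, as reported above, indicates both strong correlation and no systematic over- or underestimation.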

  4. Three-dimensional visualization of the craniofacial patient: volume segmentation, data integration and animation.

    PubMed

    Enciso, R; Memon, A; Mah, J

    2003-01-01

    The research goal at the Craniofacial Virtual Reality Laboratory of the School of Dentistry in conjunction with the Integrated Media Systems Center, School of Engineering, University of Southern California, is to develop computer methods to accurately visualize patients in three dimensions using advanced imaging and data acquisition devices such as cone-beam computerized tomography (CT) and mandibular motion capture. Data from these devices were integrated for three-dimensional (3D) patient-specific visualization, modeling and animation. Generic methods are in development that can be used with common CT image format (DICOM), mesh format (STL) and motion data (3D position over time). This paper presents preliminary descriptive studies on: 1) segmentation of the lower and upper jaws with two types of CT data--(a) traditional whole head CT data and (b) the new dental Newtom CT; 2) manual integration of accurate 3D tooth crowns with the segmented lower jaw 3D model; 3) realistic patient-specific 3D animation of the lower jaw.

  5. Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.

    PubMed

    Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn

    2016-12-21

    The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool that provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. The tool is based on Java/Java3D/JOGL and provides a standalone application compatible with all relevant operating systems; however, it requires Java and local installation of the software. Here we present the prototype of an alternative web-based visualization approach using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios, including networks mapped to 3D cell components, just by providing a URL to a collaboration partner. This publication describes the integration of the different technologies (Three.js, D3.js, and PHP) as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.

  6. Evaluating mental workload of two-dimensional and three-dimensional visualization for anatomical structure localization.

    PubMed

    Foo, Jung-Leng; Martinez-Escobar, Marisol; Juhnke, Bethany; Cassidy, Keely; Hisley, Kenneth; Lobe, Thom; Winer, Eliot

    2013-01-01

    Visualization of medical data in three-dimensional (3D) or two-dimensional (2D) views is a complex area of research. In many fields 3D views are used to understand the shape of an object, and 2D views are used to understand spatial relationships. It is unclear how 2D/3D views play a role in the medical field. Using 3D views can potentially decrease the learning curve experienced with traditional 2D views by providing a whole representation of the patient's anatomy. However, there are challenges with 3D views compared with 2D. This current study expands on a previous study to evaluate the mental workload associated with both 2D and 3D views. Twenty-five first-year medical students were asked to localize three anatomical structures--gallbladder, celiac trunk, and superior mesenteric artery--in either 2D or 3D environments. Accuracy and time were taken as the objective measures for mental workload. The NASA Task Load Index (NASA-TLX) was used as a subjective measure for mental workload. Results showed that participants viewing in 3D had higher localization accuracy and a lower subjective measure of mental workload, specifically, the mental demand component of the NASA-TLX. Results from this study may prove useful for designing curricula in anatomy education and improving training procedures for surgeons.

  7. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally, the only way to look inside the human body without opening it up was by means of two-dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three-dimensional leads to ambiguities in interpretation and problems of occlusion. Three-dimensional (3D) imaging modalities such as CT, MRI, and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals have 'gone digital', meaning that images are no longer printed on film, the images are still viewed on 2D screens. In this way, valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT, and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics and (bio)medical research.

  8. The Impact of Interactivity on Comprehending 2D and 3D Visualizations of Movement Data.

    PubMed

    Amini, Fereshteh; Rufiange, Sebastien; Hossain, Zahid; Ventura, Quentin; Irani, Pourang; McGuffin, Michael J

    2015-01-01

    GPS, RFID, and other technologies have made it increasingly common to track the positions of people and objects over time as they move through two-dimensional spaces. Visualizing such spatio-temporal movement data is challenging because each person or object involves three variables (two spatial variables as a function of the time variable), and simply plotting the data on a 2D geographic map can result in overplotting and occlusion that hides details. This also makes it difficult to understand correlations between space and time. Software such as GeoTime can display such data with a three-dimensional visualization, where the 3rd dimension is used for time. This allows for the disambiguation of spatially overlapping trajectories, and in theory, should make the data clearer. However, previous experimental comparisons of 2D and 3D visualizations have so far found little advantage in 3D visualizations, possibly due to the increased complexity of navigating and understanding a 3D view. We present a new controlled experimental comparison of 2D and 3D visualizations, involving commonly performed tasks that have not been tested before, and find advantages in 3D visualizations for more complex tasks. In particular, we tease out the effects of various basic interactions and find that the 2D view relies significantly on "scrubbing" the timeline, whereas the 3D view relies mainly on 3D camera navigation. Our work helps to improve understanding of 2D and 3D visualizations of spatio-temporal data, particularly with respect to interactivity.
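
    The space-time cube idea behind GeoTime-style views reduces to lifting (x, y) tracks into (x, y, t) points; a minimal numeric sketch with toy tracks (not the paper's data) shows how the time axis disambiguates trajectories that overlap on the flat map:

```python
import numpy as np

def space_time_cube(xy, t):
    # Lift a 2D trajectory (x, y sampled at times t) into 3D points
    # (x, y, t): the extra axis separates tracks that overlap on a map.
    xy = np.asarray(xy, dtype=float)
    t = np.asarray(t, dtype=float)
    return np.column_stack([xy, t])

t = np.array([0.0, 1.0, 2.0])
track_a = space_time_cube([[0, 0], [1, 1], [2, 2]], t)
track_b = space_time_cube([[2, 2], [1, 1], [0, 0]], t + 0.5)

def shares_point(a, b):
    # True if any row of a equals some row of b.
    return any((b == row).all(axis=1).any() for row in a)

# On the 2D map both tracks pass through (1, 1); in the cube they do not
# coincide, because they visit that spot at different times.
overlap_2d = shares_point(track_a[:, :2], track_b[:, :2])
overlap_3d = shares_point(track_a, track_b)
```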

  9. 3DScapeCS: application of three dimensional, parallel, dynamic network visualization in Cytoscape

    PubMed Central

    2013-01-01

    Background The exponential growth of gigantic biological data from various sources, such as protein-protein interaction (PPI) data, genome sequence scaffolding, mass spectrometry (MS) molecular networking, and metabolic flux, demands an efficient way to achieve better visualization and interpretation beyond conventional two-dimensional visualization tools. Results We developed a 3D Cytoscape Client/Server (3DScapeCS) plugin, which adopts Cytoscape for interpreting different types of data and UbiGraph for three-dimensional visualization. The extra dimension is useful for accommodating, visualizing, and distinguishing large-scale networks with multiple crossed connections, as shown in five case studies. Conclusions Evaluation on several experimental datasets using 3DScapeCS and its special features, including multilevel graph layout, time-course data animation, and parallel visualization, has proven its usefulness in visualizing complex data and helping to draw insightful conclusions. PMID:24225050

  10. Visualization of 3-D tensor fields

    NASA Technical Reports Server (NTRS)

    Hesselink, L.

    1996-01-01

    Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
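
    The degeneracy condition this report builds on (two or more coinciding eigenvalues) can be checked numerically at each sample point; a minimal sketch for a single symmetric tensor (the tolerance value is an assumption):

```python
import numpy as np

def is_degenerate(tensor, tol=1e-9):
    # A degenerate point of a symmetric second-order tensor field is one
    # where at least two eigenvalues coincide; there the eigenvector
    # directions are not uniquely defined.
    w = np.linalg.eigvalsh(np.asarray(tensor, dtype=float))  # ascending
    return bool(np.any(np.diff(w) < tol))

isotropic = np.eye(3)                   # all eigenvalues equal: degenerate
anisotropic = np.diag([1.0, 2.0, 3.0])  # distinct eigenvalues: regular
```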

  11. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography

    PubMed Central

    Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Åke; Winter, Reidar

    2009-01-01

    Background Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, visual assessment ("eyeballing") remains superior in feasibility. To date, only sparse data are available comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE), using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Methods Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and the TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. Results There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 ± 3.7% and -0.2 ± 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP, and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Conclusion Visual estimation of LVEF both by 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There was an apparent trend toward smaller variability using TP in comparison to 2D; however, this was not statistically significant. PMID:19706183
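
    The quantitative biplane Simpson's measurement mentioned in this record reduces to a short method-of-discs formula; a sketch with toy disc diameters (not patient data):

```python
import math

def simpson_biplane_volume(d4ch, d2ch, length):
    # Biplane method of discs: the ventricle is sliced into n elliptical
    # discs with paired diameters a_i (4-chamber view) and b_i (2-chamber
    # view): V = (pi / 4) * sum(a_i * b_i) * (L / n).
    n = len(d4ch)
    return math.pi / 4.0 * sum(a * b for a, b in zip(d4ch, d2ch)) * (length / n)

def ejection_fraction(edv, esv):
    # EF (%) = (EDV - ESV) / EDV * 100
    return 100.0 * (edv - esv) / edv

# Toy diameters in cm (uniform discs, i.e. a cylindrical "ventricle"):
edv = simpson_biplane_volume([4.0] * 20, [4.0] * 20, length=8.0)
esv = simpson_biplane_volume([3.0] * 20, [3.0] * 20, length=7.0)
ef = ejection_fraction(edv, esv)
```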

  12. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    ERIC Educational Resources Information Center

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  13. Affective three-dimensional brain-computer interface created using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-12-01

    To avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we applied a prism array-based display when presenting three-dimensional (3-D) objects. Emotional pictures were used as visual stimuli to increase the signal-to-noise ratios of steady-state visually evoked potentials (SSVEPs), because involuntarily motivated selective attention driven by affective mechanisms can enhance SSVEP amplitudes, thus producing increased interaction efficiency. Ten male and nine female participants voluntarily participated in our experiments. Participants were asked to control objects under three viewing conditions: two-dimensional (2-D), stereoscopic 3-D, and prism. The participants performed each condition in a counterbalanced order. One-way repeated-measures analysis of variance showed significant increases in the positive predictive values in the prism condition compared to the 2-D and 3-D conditions. Participants' subjective ratings of realness and engagement were also significantly greater in the prism condition than in the 2-D and 3-D conditions, while the ratings for visual fatigue were significantly lower in the prism condition than in the 3-D condition. The proposed methods are expected to enhance the sense of reality in 3-D space without causing critical visual fatigue. In addition, people who are especially susceptible to stereoscopic 3-D may be able to use the affective brain-computer interface.

  14. 3D reconstruction techniques made easy: know-how and pictures.

    PubMed

    Luccichenti, Giacomo; Cademartiri, Filippo; Pezzella, Francesca Romana; Runza, Giuseppe; Belgrano, Manuel; Midiri, Massimo; Sabatini, Umberto; Bastianello, Stefano; Krestin, Gabriel P

    2005-10-01

    Three-dimensional reconstructions represent a visually based tool for illustrating the basis of three-dimensional post-processing, such as interpolation, ray casting, segmentation, percentage classification, gradient calculation, shading, and illumination. Knowledge of the optimal scanning and reconstruction parameters facilitates the use of three-dimensional reconstruction techniques in clinical practice. The aim of this article is to explain the principles of multidimensional image processing in a pictorial way and to outline the advantages and limitations of the different options for 3D visualisation.
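
    Two of the post-processing building blocks this article lists, ray casting and gradient calculation, can be sketched in a few lines; a maximum-intensity projection stands in here for full ray casting (a simplification, not the article's method):

```python
import numpy as np

def mip_render(volume, axis=0):
    # Maximum-intensity projection: the simplest ray-casting compositing
    # rule, keeping the brightest voxel along each parallel ray.
    return np.asarray(volume).max(axis=axis)

def shading_gradients(volume):
    # Central-difference gradients approximate surface normals, the
    # quantity used for shading and illumination in volume rendering.
    return np.gradient(np.asarray(volume, dtype=float))

vol = np.zeros((4, 4, 4))
vol[2, 1, 3] = 1.0           # one bright voxel
mip = mip_render(vol, axis=0)
```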

  15. Systems and Methods for Data Visualization Using Three-Dimensional Displays

    NASA Technical Reports Server (NTRS)

    Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)

    2017-01-01

    Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment, a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
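
    The mapping step in this claim (data dimensions onto visualization attributes, plus a visibility dimension) can be sketched independently of any renderer; the dimension names and attributes below are illustrative, not from the patent:

```python
def map_points(points, mapping, visible=lambda p: True):
    # Turn each multidimensional data point into a 3D-object description:
    # 'mapping' assigns a data dimension to each visualization attribute,
    # and 'visible' plays the role of the visibility dimension.
    objects = []
    for p in points:
        obj = {attr: p[dim] for attr, dim in mapping.items()}
        obj["visible"] = bool(visible(p))
        objects.append(obj)
    return objects

points = [{"mass": 1.0, "temp": 300, "flag": 1},
          {"mass": 2.5, "temp": 120, "flag": 0}]
objs = map_points(points,
                  {"x": "mass", "color": "temp"},
                  visible=lambda p: p["flag"] == 1)
```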

  16. Three-dimensional Visualization of Ultrasound Backscatter Statistics by Window-modulated Compounding Nakagami Imaging.

    PubMed

    Zhou, Zhuhuang; Wu, Shuicai; Lin, Man-Yen; Fang, Jui; Liu, Hao-Li; Tsui, Po-Hsiang

    2018-05-01

    In this study, the window-modulated compounding (WMC) technique was integrated into three-dimensional (3D) ultrasound Nakagami imaging to improve the spatial visualization of backscatter statistics. A 3D WMC Nakagami image was produced by summing and averaging a number of 3D Nakagami images (number of frames denoted as N) formed using sliding cubes with side lengths ranging from 1 to N times the transducer pulse length. To evaluate the performance of the proposed 3D WMC Nakagami imaging method, agar phantoms with scatterer concentrations ranging from 2 to 64 scatterers/mm³ were made, and six stages of fatty liver (zero, one, two, four, six, and eight weeks) were induced in rats by methionine-choline-deficient diets (three rats for each stage, total n = 18). A mechanical scanning system with a 5-MHz focused single-element transducer was used for ultrasound radiofrequency data acquisition. The experimental results showed that 3D WMC Nakagami imaging was able to characterize different scatterer concentrations. Backscatter statistics were visualized with various numbers of frames; N = 5 frames reduced the estimation error of 3D WMC Nakagami imaging in visualizing the backscatter statistics. Compared with conventional 3D Nakagami imaging, 3D WMC Nakagami imaging improved image smoothness without significant degradation of image resolution, and it can thus be used to describe different stages of fatty liver in rats.
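
    The Nakagami parameter underlying these images is commonly estimated from envelope moments, and the WMC step is an average over images formed with different window sizes; a hedged sketch (moment estimator assumed, synthetic Rayleigh speckle rather than the study's phantom data):

```python
import numpy as np

def nakagami_m(envelope):
    # Moment-based estimate of the Nakagami shape parameter:
    # m = E[R^2]^2 / Var(R^2). Fully developed speckle (Rayleigh
    # envelope) gives m = 1.
    r2 = np.asarray(envelope, dtype=float) ** 2
    return r2.mean() ** 2 / r2.var()

def wmc_compound(nakagami_images):
    # Window-modulated compounding: average the N Nakagami images
    # formed with different sliding-window sizes.
    return np.mean(np.stack(nakagami_images), axis=0)

rng = np.random.default_rng(0)
m_hat = nakagami_m(rng.rayleigh(scale=1.0, size=200_000))
compounded = wmc_compound([np.ones((2, 2)), 3.0 * np.ones((2, 2))])
```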

  17. Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera

    NASA Astrophysics Data System (ADS)

    Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.

    2004-01-01

    We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.

  18. Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images

    NASA Astrophysics Data System (ADS)

    Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2006-03-01

    We have developed a novel system that provides total support for assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty in perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about resorption shape. However, there has been little work on assisting assessment of the disease by 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system, which measures the three-dimensional shape and spread of resorption. It has the following functions: (1) it measures the depth of resorption by virtually simulating probing in the 3-D CT images, an image-processing approach that does not suffer obstruction by teeth on the inter-proximal sides and allows much smaller measurement intervals than the conventional examination; (2) it visualizes the distribution of the depth with movies and graphs; (3) it produces a quantitative index and an intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) it calculates the volume of resorption as another severity index, both in the inter-radicular region and the region outside it. Experimental results on two cases of 3-D dental CT images, together with a comparison against the clinical examination results and experts' measurements of the corresponding patients, confirmed that the proposed system gives satisfying results, including 0.1 to 0.6 mm of resorption measurement (probing) error and a fairly intuitive presentation of measurement and calculation results.

  19. Three-Dimensional Soil Landscape Modeling: A Potential Earth Science Teaching Tool

    ERIC Educational Resources Information Center

    Schmid, Brian M.; Manu, Andrew; Norton, Amy E.

    2009-01-01

    Three-dimensional visualization is helpful in understanding soils, and three dimensional (3-D) tools are gaining popularity in teaching earth sciences. Those tools are still somewhat underused in soil science, yet soil properties such as texture, color, and organic carbon content vary both vertically and horizontally across the landscape. These…

  20. Visualization of Potential Energy Function Using an Isoenergy Approach and 3D Prototyping

    ERIC Educational Resources Information Center

    Teplukhin, Alexander; Babikov, Dmitri

    2015-01-01

    In our three-dimensional world, one can plot, see, and comprehend a function of two variables at most, V(x,y). One cannot plot a function of three or more variables. For this reason, visualization of the potential energy function in its full dimensionality is impossible even for the smallest polyatomic molecules, such as triatomics. This creates…

  1. Map-Reading Skill Development with 3D Technologies

    ERIC Educational Resources Information Center

    Carbonell Carrera, Carlos; Avarvarei, Bogdan Vlad; Chelariu, Elena Liliana; Draghia, Lucia; Avarvarei, Simona Catrinel

    2017-01-01

    Landforms often are represented on maps using abstract cartographic techniques that the reader must interpret for successful three-dimensional terrain visualization. New technologies in 3D landscape representation, both digital and tangible, offer the opportunity to visualize terrain in new ways. The results of a university student workshop, in…

  2. Evaluating the effect of three-dimensional visualization on force application and performance time during robotics-assisted mitral valve repair.

    PubMed

    Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B

    2013-01-01

    The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision (P < 0.001). However, there was no significant difference in the maximum force applied by the novices to the mitral valve during suturing (P = 0.7) and suture tying (P = 0.6) using either 2D or 3D visualization. The mean time required and forces applied by both the experts and the novices were significantly less using the conventional surgical technique than when using the robotic system with either 2D or 3D vision (P < 0.001). Despite high-quality binocular images, both the experts and the novices applied significantly more force to the cardiac tissue during 3D robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.

  3. Software for visualization, analysis, and manipulation of laser scan images

    NASA Astrophysics Data System (ADS)

    Burnsides, Dennis B.

    1997-03-01

    The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two-dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis, and manipulation of laser scan images. Specific examples presented are from ongoing efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.

  4. Investigation Of Integrating Three-Dimensional (3-D) Geometry Into The Visual Anatomical Injury Descriptor (Visual AID) Using WebGL

    DTIC Science & Technology

    2011-08-01

    generated using the Zygote Human Anatomy 3-D model (3). Use of a reference anatomy independent of personal identification, such as Zygote, allows Visual...Zygote Human Anatomy 3D Model, 2010. http://www.zygote.com/ (accessed July 26, 2011). 4. Khronos Group Web site. Khronos to Create New Open Standard for...understanding of the information at hand. In order to fulfill the medical illustration track, I completed a concentration in science, focusing on human

  5. Usefulness of real-time three-dimensional ultrasonography in percutaneous nephrostomy: an animal study.

    PubMed

    Hongzhang, Hong; Xiaojuan, Qin; Shengwei, Zhang; Feixiang, Xiang; Yujie, Xu; Haibing, Xiao; Gallina, Kazobinka; Wen, Ju; Fuqing, Zeng; Xiaoping, Zhang; Mingyue, Ding; Huageng, Liang; Xuming, Zhang

    2018-05-17

    To evaluate the effect of real-time three-dimensional (3D) ultrasonography (US) in guiding percutaneous nephrostomy (PCN). A hydronephrosis model was devised in which the ureters of 16 beagles were obstructed. The beagles were divided equally into groups 1 and 2. In group 1, PCN was performed under real-time 3D US guidance, while in group 2 PCN was guided by two-dimensional (2D) US. Visualization of the needle tract, puncture duration, and number of puncture attempts were recorded for the two groups. In group 1, the score for visualization of the needle tract, puncture duration, and number of puncture attempts were 3, 7.3 ± 3.1 s, and one attempt, respectively. In group 2, the respective results were 1.4 ± 0.5, 21.4 ± 5.8 s, and 2.1 ± 0.6 attempts. Visualization of the needle tract in group 1 was superior to that in group 2, and both puncture duration and the number of puncture attempts were lower in group 1 than in group 2. Real-time 3D US-guided PCN is superior to 2D US-guided PCN in terms of visualization of the needle tract and the targeted pelvicalyceal system, leading to quicker puncture. Real-time 3D US-guided puncture of the kidney holds great promise for clinical implementation in PCN. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.

  6. Computational techniques to enable visualizing shapes of objects of extra spatial dimensions

    NASA Astrophysics Data System (ADS)

    Black, Don Vaughn, II

    Envisioning extra dimensions beyond the three of common experience is a daunting challenge for three-dimensional observers. Intuition relies on experience gained in a three-dimensional environment. Gaining experience with virtual four-dimensional objects and virtual 3-manifolds in four-space on a personal computer may provide the basis for an intuitive grasp of four dimensions. To enable such a capability, it is first necessary to devise and implement a computationally tractable method to visualize, explore, and manipulate objects of dimension beyond three on the personal computer. This dissertation describes a technology to convert a representation of higher-dimensional models into a format that may be displayed in real time on the graphics cards available in many off-the-shelf personal computers. As a result, an opportunity has been created to experience the shape of four-dimensional objects on the desktop computer. The ultimate goal has been to provide the user a tangible and memorable experience with mathematical models of four-dimensional objects, such that the user can see the model from any selected vantage point. Using a 4D GUI, an arbitrary convex hull or 3D silhouette of the 4D model can be rotated, panned, scrolled, and zoomed until a suitable dimensionally reduced view, or "aspect," is obtained. The 4D GUI then allows the user to manipulate a 3-flat hyperplane cutting tool to slice the model at an arbitrary orientation and position, extracting or "plucking" an embedded 3D slice or aspect from the embedding four-space. This plucked 3D aspect can be viewed from all angles via a conventional 3D viewer using three multiple-POV viewports, and optionally exported to a third-party CAD viewer for further manipulation. Plucking and manipulating the aspect provides a tangible experience for the end user, in the same manner as any 3D computer-aided design viewing and manipulation tool does for the engineer, or a 3D video game provides for the nascent student.
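
    The hyperplane-slicing step described in this record has a compact numeric core; a sketch that plucks the cubical cross-section out of a tesseract (the unit 4-cube, used here as a simple stand-in for the dissertation's general models):

```python
import itertools
import numpy as np

def pluck_aspect(vertices, edges, w0):
    # Intersect each 4D edge with the hyperplane w = w0 and drop the w
    # coordinate, "plucking" a 3D aspect out of four-space.
    points = []
    for i, j in edges:
        a = np.asarray(vertices[i], dtype=float)
        b = np.asarray(vertices[j], dtype=float)
        wa, wb = a[3], b[3]
        if wa != wb and (wa - w0) * (wb - w0) <= 0:
            t = (w0 - wa) / (wb - wa)
            points.append(tuple((a + t * (b - a))[:3]))
    return points

# Tesseract: 16 vertices in {0,1}^4; edges join vertices differing in one bit.
verts = list(itertools.product((0, 1), repeat=4))
edges = [(i, j) for i in range(16) for j in range(i + 1, 16)
         if sum(u != v for u, v in zip(verts[i], verts[j])) == 1]
cube = pluck_aspect(verts, edges, w0=0.5)   # slice midway along w
```

Slicing midway along w cuts exactly the eight edges that run in the w direction, so the plucked aspect is an ordinary unit cube.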

  7. Voxel Datacubes for 3D Visualization in Blender

    NASA Astrophysics Data System (ADS)

    Gárate, Matías

    2017-05-01

    The growth of computational astrophysics and the complexity of multi-dimensional data sets demonstrate the need for new, versatile visualization tools for both the analysis and the presentation of data. In this work, we show how to use the open-source software Blender as a three-dimensional (3D) visualization tool to study and visualize numerical simulation results, focusing on astrophysical hydrodynamic experiments. With a datacube as input, the software can generate a volume rendering of the 3D data, show the evolution of a simulation in time, and perform a fly-around camera animation to highlight points of interest. We explain the process of importing simulation outputs into Blender using the voxel data format and of setting up a visualization scene in the software interface. This method allows scientists to perform a complementary visual analysis of their data and display their results in an appealing way, both for outreach and science presentations.
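
    A datacube export for this workflow can be as simple as a normalized binary dump; the sketch below assumes the 8-bit RAW voxel layout that Blender's legacy voxel-data texture could read (grid dimensions entered manually in the texture settings), so the format details should be verified against the Blender documentation for your version:

```python
import numpy as np

def write_raw_voxels(path, datacube):
    # Scale a datacube to 0-255 and dump it as 8-bit RAW voxel data,
    # one byte per voxel in C order; returns the voxel count written.
    cube = np.asarray(datacube, dtype=float)
    lo, hi = cube.min(), cube.max()
    scaled = np.zeros_like(cube) if hi == lo else (cube - lo) / (hi - lo)
    (scaled * 255).astype(np.uint8).tofile(path)
    return cube.size

nbytes = write_raw_voxels("datacube.raw",
                          np.random.default_rng(1).random((8, 8, 8)))
```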

  8. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

    Most modern astrophysical data sets are multi-dimensional; a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing the access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike other earlier propositions to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.
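
    Because X3D is plain XML (standardized as ISO/IEC 19775), a minimal data-to-diagram exporter needs only standard-library tools; a hedged sketch (the profile and node choices below are simple assumptions to be validated against the X3D specification):

```python
import xml.etree.ElementTree as ET

def points_to_x3d(points):
    # Build a minimal X3D scene holding a PointSet whose Coordinate node
    # carries the (x, y, z) triplets as a flat attribute string.
    x3d = ET.Element("X3D", version="3.3", profile="Interchange")
    scene = ET.SubElement(x3d, "Scene")
    shape = ET.SubElement(scene, "Shape")
    pointset = ET.SubElement(shape, "PointSet")
    coords = " ".join("%g %g %g" % (x, y, z) for x, y, z in points)
    ET.SubElement(pointset, "Coordinate", point=coords)
    return ET.tostring(x3d, encoding="unicode")

doc = points_to_x3d([(0, 0, 0), (1, 2, 3)])
```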

  9. Three-dimensional display technologies

    PubMed Central

    Geng, Jason

    2014-01-01

    The physical world around us is three-dimensional (3D), yet traditional display devices can show only two-dimensional (2D) flat images that lack depth (i.e., the third dimension) information. This fundamental restriction greatly limits our ability to perceive and to understand the complexity of real-world objects. Nearly 50% of the capability of the human brain is devoted to processing visual information [Human Anatomy & Physiology (Pearson, 2012)]. Flat images and 2D displays do not harness the brain’s power effectively. With rapid advances in the electronics, optics, laser, and photonics fields, true 3D display technologies are making their way into the marketplace. 3D movies, 3D TV, 3D mobile devices, and 3D games have increasingly demanded true 3D display with no eyeglasses (autostereoscopic). Therefore, it would be very beneficial to readers of this journal to have a systematic review of state-of-the-art 3D display technologies. PMID:25530827

  10. Breast sentinel lymph node navigation with three-dimensional computed tomography-lymphography: a 12-year study.

    PubMed

    Yamamoto, Shigeru; Suga, Kazuyoshi; Maeda, Kazunari; Maeda, Noriko; Yoshimura, Kiyoshi; Oka, Masaaki

    2016-05-01

    To evaluate the utility of three-dimensional (3D) computed tomography (CT)-lymphography (LG) breast sentinel lymph node navigation in our institute. Between 2002 and 2013, we preoperatively identified sentinel lymph nodes (SLNs) in 576 clinically node-negative breast cancer patients with T1 and T2 breast cancer using 3D CT-LG method. SLN biopsy (SLNB) was performed in 557 of 576 patients using both the images of 3D CT-LG for guidance and the blue dye method. Using 3D CT-LG, SLNs were visualized in 569 (99%) of 576 patients. Of 569 patients, both lymphatic draining ducts and SLNs from the peritumoral and periareolar areas were visualized in 549 (96%) patients. Only SLNs without lymphatic draining ducts were visualized in 20 patients. Drainage lymphatic pathways visualized with 3D CT-LG (549 cases) were classified into four patterns: single route/single SLN (355 cases, 65%), multiple routes/single SLN (59 cases, 11%) single route/multiple SLNs (62 cases, 11%) and multiple routes/multiple SLNs (73 cases, 13%). SLNs were detected in 556 (99.8%) of 557 patients during SLNB. CT-LG is useful for preoperative visualization of SLNs and breast lymphatic draining routes. This preoperative method should contribute greatly to the easy detection of SLNs during SLNB.

  11. Three-Dimensional Media Technologies: Potentials for Study in Visual Literacy.

    ERIC Educational Resources Information Center

    Thwaites, Hal

    This paper presents an overview of three-dimensional media technologies (3Dmt). Many of the new 3Dmt are the direct result of interactions of computing, communications, and imaging technologies. Computer graphics are particularly well suited to the creation of 3D images due to the high resolution and programmable nature of the current displays.…

  12. Faceted Visualization of Three Dimensional Neuroanatomy By Combining Ontology with Faceted Search

    PubMed Central

    Veeraraghavan, Harini; Miller, James V.

    2013-01-01

    In this work, we present a faceted-search based approach for visualization of anatomy by combining a three dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating user queries into the controlled terminology, our approach eliminates the need for the user to know that terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and patient-specific geometric models computed from the SPL brain tumor dataset. PMID:24006207
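    The query pipeline the abstract describes (free-text query → controlled term → drill-down facets) can be sketched with a toy ontology. All term names and `part_of` links below are illustrative stand-ins, not actual FMA entries:

```python
# Toy ontology: controlled term -> containment link and accepted synonyms.
# These entries are illustrative only, not real FMA terminology.
ONTOLOGY = {
    "Thalamus":     {"part_of": "Diencephalon", "synonyms": ["thalamus"]},
    "Hypothalamus": {"part_of": "Diencephalon", "synonyms": ["hypothalamus"]},
    "Diencephalon": {"part_of": "Brain", "synonyms": ["diencephalon", "interbrain"]},
}

def to_controlled_term(free_text):
    """Translate a free-text user query into the controlled terminology."""
    q = free_text.strip().lower()
    for term, node in ONTOLOGY.items():
        if q in node["synonyms"]:
            return term
    return None  # no match: the UI would prompt the user to refine

def drill_down(term):
    """Facet step: expose the sub-structures contained in a matched term,
    which the viewer would then offer for visualization."""
    return sorted(t for t, n in ONTOLOGY.items() if n["part_of"] == term)
```

In the actual tool the matched terms would drive which atlas models 3D Slicer loads; here the functions only return the terms.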

  14. Augmented reality three-dimensional object visualization and recognition with axially distributed sensing.

    PubMed

    Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-01-15

    An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
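    The recognition pipeline the abstract outlines (HOG features over a sliding window, scored by a linear classifier) can be sketched in simplified form. This sketch omits block normalization, and `weights`/`bias` are placeholders for what a trained linear SVM would supply:

```python
import numpy as np

def hog_descriptor(patch, cell=8, bins=9):
    """Minimal histogram-of-oriented-gradients feature for one window.
    Simplified: unsigned gradients, per-cell L2 normalization only."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = patch.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

def sliding_window_scores(image, weights, bias, win=16, step=8):
    """Score every window with a linear SVM decision value w . hog(x) + b.
    A real detector would threshold these scores to report detections."""
    scores = []
    h, w = image.shape
    for i in range(0, h - win + 1, step):
        for j in range(0, w - win + 1, step):
            f = hog_descriptor(image[i:i + win, j:j + win])
            scores.append(((i, j), float(weights @ f + bias)))
    return scores
```

Window and cell sizes here are arbitrary illustrative values; the paper's ADS reconstruction step, which precedes feature extraction, is not modeled.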

  15. Visualization of spatial-temporal data based on 3D virtual scene

    NASA Astrophysics Data System (ADS)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a 3D virtual scene by combining three-dimensional visualization technology with GIS, so that people's ability to cognize time and space is enhanced through dynamic symbol design and interactive expression. Using particle systems, three-dimensional simulation, virtual reality, and other visual means, we can simulate how the spatial locations and attribute information of geographical entities change over time, explore and analyze their movement and transformation rules interactively, and replay history or forecast the future. The main research objects of this paper are vehicle tracks and typhoon paths: through three-dimensional dynamic simulation of these tracks, we realize timely monitoring of their trends and replay of their historical tracks. Visualization of spatial-temporal data in a three-dimensional virtual scene provides an excellent cognitive instrument for spatial-temporal information: it not only shows changes and developments in a situation clearly, but can also be used for prediction and deduction of future developments.

  16. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, manipulation and annotation/labeling of the acquired data are still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, improving the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii controller and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-values of a 3D volume or depth information computed from a 2D image to provide a real 3D experience without special glasses. In this paper we present a new framework for manipulation and annotation of medical landmarks directly in a three-dimensional volume.

  17. From Vesalius to Virtual Reality: How Embodied Cognition Facilitates the Visualization of Anatomy

    ERIC Educational Resources Information Center

    Jang, Susan

    2010-01-01

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and…

  18. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  19. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to convey spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely on the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system for truthful perception of 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of their complexity and of the spatial relationships among them.

  20. [Usefulness of computed tomography with three-dimensional reconstructions in visualization of cervical spine malformation of a child with Sprengel's deformity].

    PubMed

    Wawrzynek, Wojciech; Siemianowicz, Anna; Koczy, Bogdan; Kasprowska, Sabina; Besler, Krzysztof

    2005-01-01

    Sprengel's deformity is a congenital anomaly of the shoulder girdle with elevation of the scapula and limitation of shoulder movement. It is frequently associated with cervical spine malformations such as spinal synostosis, spina bifida, and an abnormal omovertebral fibrous, cartilaginous, or osseous connection. The diagnosis of Sprengel's deformity is based on clinical examination and radiological procedures. In every case of Sprengel's deformity, plain radiography and computed tomography should be performed. Three-dimensional (3D) reconstructions make it possible to visualize the precise topography and spatial proportions of the examined bone structures. 3D reconstruction also enables free rotation of the visualized bone structures in order to clarify the anatomical abnormalities and to plan surgical treatment.

  1. High-resolution gadolinium-enhanced 3D MRA of the infrapopliteal arteries. Lessons for improving bolus-chase peripheral MRA.

    PubMed

    Hood, Maureen N; Ho, Vincent B; Foo, Thomas K F; Marcos, Hani B; Hess, Sandra L; Choyke, Peter L

    2002-09-01

    Peripheral magnetic resonance angiography (MRA) is growing in use. However, methods of performing peripheral MRA vary widely and continue to be optimized, especially to improve depiction of the infrapopliteal arteries. The main purpose of this project was to identify imaging factors that can improve arterial visualization in the lower leg using bolus-chase peripheral MRA. Eighteen healthy adults were imaged on a 1.5T MR scanner. The calf was imaged using conventional three-station bolus-chase three-dimensional (3D) MRA, two-dimensional (2D) time-of-flight (TOF) MRA, and single-station gadolinium (Gd)-enhanced 3D MRA. Observer comparisons of vessel visualization, signal-to-noise ratios (SNR), contrast-to-noise ratios (CNR), and spatial resolution were performed. Arterial SNR and CNR were similar for all three techniques. However, arterial visualization was dramatically improved on dedicated, arterial-phase Gd-enhanced 3D MRA compared with multi-station bolus-chase MRA and 2D TOF MRA. This improvement was related to optimization of the Gd-enhanced 3D MRA parameters (a fast injection rate of 2 mL/sec, high-spatial-resolution imaging, the use of dedicated phased-array coils, elliptical centric k-space sampling, and accurate arterial-phase timing for image acquisition). The visualization of the infrapopliteal arteries can be substantially improved in bolus-chase peripheral MRA if voxel size, contrast delivery, and central k-space data acquisition for arterial enhancement are optimized. Improvements in peripheral MRA should be directed at these parameters.

  2. Three-dimensional portable document format: a simple way to present 3-dimensional data in an electronic publication.

    PubMed

    Danz, Jan C; Katsaros, Christos

    2011-08-01

    Three-dimensional (3D) models of teeth and soft and hard tissues are tessellated surfaces used for diagnosis, treatment planning, appliance fabrication, outcome evaluation, and research. In scientific publications or communications with colleagues, these 3D data are often reduced to 2-dimensional pictures or need special software for visualization. The portable document format (PDF) offers a simple way to interactively display 3D surface data without additional software other than a recent version of Adobe Reader (Adobe, San Jose, Calif). The purposes of this article were to give an example of how 3D data and their analyses can be interactively displayed in 3 dimensions in electronic publications, and to show how they can be exported from any software for diagnostic reports and communications among colleagues. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  3. A Novel Three-Dimensional Tool for Teaching Human Neuroanatomy

    ERIC Educational Resources Information Center

    Estevez, Maureen E.; Lindgren, Kristen A.; Bergethon, Peter R.

    2010-01-01

    Three-dimensional (3D) visualization of neuroanatomy can be challenging for medical students. This knowledge is essential in order for students to correlate cross-sectional neuroanatomy and whole brain specimens within neuroscience curricula and to interpret clinical and radiological information as clinicians or researchers. This study implemented…

  4. Visualization of Stereoscopic Anatomic Models of the Paranasal Sinuses and Cervical Vertebrae from the Surgical and Procedural Perspective

    ERIC Educational Resources Information Center

    Chen, Jian; Smith, Andrew D.; Khan, Majid A.; Sinning, Allan R.; Conway, Marianne L.; Cui, Dongmei

    2017-01-01

    Recent improvements in three-dimensional (3D) virtual modeling software allows anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal…

  5. [3D visualization and analysis of vocal fold dynamics].

    PubMed

    Bohr, C; Döllinger, M; Kniesburges, S; Traxdorf, M

    2016-04-01

    Visual investigation methods of the larynx mainly allow two-dimensional presentation of the three-dimensional vocal fold dynamics. The vertical component of the vocal fold dynamics is often neglected, yielding a loss of information. Recent studies show that the vertical dynamic components are in the range of the medio-lateral dynamics and play a significant role in the phonation process. This work presents a method for future 3D reconstruction and visualization of endoscopically recorded vocal fold dynamics. The setup contains a high-speed camera (HSC) and a laser projection system (LPS). The LPS projects a regular grid onto the vocal fold surfaces and, in combination with the HSC, allows a three-dimensional reconstruction of the vocal fold surface. Hence, quantitative information on displacements and velocities can be provided. The applicability of the method is demonstrated for one ex-vivo human larynx, one ex-vivo porcine larynx, and one synthetic silicone larynx. The setup allows reconstruction of the entire visible vocal fold surface at each oscillation state. This enables a detailed analysis of the three-dimensional dynamics (i.e., displacements, velocities, accelerations) of the vocal folds. The next goal is miniaturization of the LPS to allow clinical in-vivo analysis in humans. We anticipate new insight into the dependencies between 3D dynamic behavior and the quality of the acoustic outcome for healthy and disordered phonation.

  6. MT3DMS: A Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems; Documentation and User’s Guide

    DTIC Science & Technology

    1999-12-01

    addition, the data files saved in the POINT format can include an optional header which is compatible with Amtec Engineering's 2-D and 3-D visualization… ".DAT" file so that the file can be used directly by Amtec Engineering's 2-D and 3-D visualization package Tecplot©. The ARRAY and POINT formats are…

  7. Three-Dimensional Liver Surgery Simulation: Computer-Assisted Surgical Planning with Three-Dimensional Simulation Software and Three-Dimensional Printing.

    PubMed

    Oshiro, Yukio; Ohkohchi, Nobuhiro

    2017-06-01

    To perform accurate hepatectomy without injury, it is necessary to understand the anatomical relationships among the branches of Glisson's sheath, the hepatic veins, and the tumor. In Japan, three-dimensional (3D) preoperative simulation for liver surgery is becoming increasingly common, and liver 3D modeling and 3D hepatectomy simulation with 3D analysis software for liver surgery have been covered by universal healthcare insurance since 2012. Herein, we review the history of virtual hepatectomy using computer-assisted surgery (CAS) and our research to date, and we discuss the future prospects of CAS. We have used the SYNAPSE VINCENT medical imaging system (Fujifilm Medical, Tokyo, Japan) for 3D visualization and virtual resection of the liver since 2010. We developed a novel fusion imaging technique combining 3D computed tomography (CT) with magnetic resonance imaging (MRI). The fusion image enables us to easily visualize the anatomic relationships among the hepatic arteries, portal veins, bile duct, and tumor in the hepatic hilum. In 2013, we developed original software, called Liversim, that enables real-time deformation of the liver using physical simulation, and a randomized controlled trial has recently been conducted to evaluate the use of Liversim and SYNAPSE VINCENT for preoperative simulation and planning. Furthermore, we developed a novel hollow 3D-printed liver model whose surface is covered with frames. This model is useful for safe liver resection, offers better visibility, and costs one-third as much to produce as a previous model. Preoperative simulation and navigation with CAS in liver resection are expected to aid in planning and performing surgery as well as in surgical education. Thus, a novel CAS system will contribute not only to the performance of reliable hepatectomy but also to surgical education.

  8. Software Aids In Graphical Depiction Of Flow Data

    NASA Technical Reports Server (NTRS)

    Stegeman, J. D.

    1995-01-01

    Interactive Data Display System (IDDS) computer program is graphical-display program designed to assist in visualization of three-dimensional flow in turbomachinery. Grid and simulation data files in PLOT3D format required for input. Able to unwrap volumetric data cone associated with centrifugal compressor and display results in easy-to-understand two- or three-dimensional plots. IDDS provides majority of visualization and analysis capability for Integrated Computational Fluid Dynamics and Experiment (ICE) system. IDDS invoked from any subsystem, or used as stand-alone package of display software. Generates contour, vector, shaded, x-y, and carpet plots. Written in C language. Input file format used by IDDS is that of PLOT3D (COSMIC item ARC-12782).

  9. [Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)].

    PubMed

    Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko

    2011-10-01

    To evaluate the anatomy of the cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on a 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution with 3D PSIF-DWI, we collected imaging data from 20 cavernous regions (both sides in 10 normal subjects). 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing the optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized, and 12 trochlear nerves and 6 abducens nerves were partially identified. We also present preliminary clinical experience in two cases with pituitary adenomas. The anatomical relationship between the tumor and the cranial nerves running in and around the cavernous sinus could be comprehended three-dimensionally with 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high-resolution "cranial nerve imaging", which visualizes the whole length of the cranial nerves, including the segments within flowing blood, as in the cavernous sinus region.

  10. A three-dimensional visualization preoperative treatment planning system for microwave ablation in liver cancer: a simulated experimental study.

    PubMed

    Liu, Fangyi; Cheng, Zhigang; Han, Zhiyu; Yu, Xiaoling; Yu, Mingan; Liang, Ping

    2017-06-01

    To evaluate the application value of a three-dimensional (3D) visualization preoperative treatment planning system (VPTPS) for microwave ablation (MWA) in liver cancer. This was a simulated experimental study using patient CT imaging data in DICOM format in a model. Three students (with less than 1 year of training in interventional ultrasound) and three experts (with more than 5 years of experience in ablation techniques) in MWA performed preoperative planning for 39 lesions (mean diameter 3.75 ± 1.73 cm) in 32 patients, using a two-dimensional (2D) image planning method and the 3D VPTPS, respectively. The number of planned insertions, the planned ablation rate, and the rate of damage to surrounding structures were compared between the 2D image planning group and the 3D VPTPS group. For both students and experts, the 2D image planning group had fewer planned insertions, a lower ablation rate, and a higher rate of damage to surrounding structures than the 3D VPTPS group. When using the 2D ultrasound planning method, students carried out fewer planned insertions and had a lower ablation rate than the experts (p < 0.001). However, there was no significant difference in planned insertions, ablation rate, or incidence of damage to surrounding structures between students and experts using the 3D VPTPS. The 3D VPTPS enables inexperienced physicians to achieve preoperative planning results similar to those of experts and enhances students' preoperative planning capacity, which may improve the therapeutic efficacy and reduce the complications of MWA.

  11. The 3D LAOKOON--Visual and Verbal in 3D Online Learning Environments.

    ERIC Educational Resources Information Center

    Liestol, Gunnar

    This paper reports on a project where three-dimensional (3D) online gaming environments were exploited for the purpose of academic communication and learning. 3D gaming environments are media and meaning rich and can provide inexpensive solutions for educational purposes. The experiment with teaching and discussions in this setting, however,…

  12. Lifting business process diagrams to 2.5 dimensions

    NASA Astrophysics Data System (ADS)

    Effinger, Philip; Spielmann, Johannes

    2010-01-01

    In this work, we describe our visualization approach for business processes using 2.5-dimensional (2.5D) techniques. The idea of 2.5D is to add the concept of layering to a two-dimensional (2D) visualization, with the layers arranged in a three-dimensional display space. For modeling the business processes, we use the Business Process Modeling Notation (BPMN). The benefit of connecting BPMN with a 2.5D visualization is not only a more abstract view of the business process models but also layering criteria that increase the readability of the BPMN model compared to 2D. We present a 2.5D Navigator for BPMN models that offers different perspectives for visualization, including BPMN-specific perspectives that we develop. The 2.5D Navigator combines the 2.5D approach with these perspectives and allows free navigation in the three-dimensional display space. We also demonstrate our tool and the libraries used to implement the visualizations. The underlying general framework for 2.5D visualizations is presented in a fashion that allows it to be reused easily for different applications. Finally, an evaluation of our navigation tool demonstrates that we can achieve satisfying and aesthetically pleasing 2.5D visualizations of BPMN models.

  13. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. 
Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
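    The slice-interpolation step can be sketched with SciPy's `CubicSpline` (assuming SciPy is available): a natural cubic spline is fitted along the pullback axis independently at every pixel location. This per-pixel spline is a simplification of the authors' shape-based approach, which additionally accounts for vessel geometry:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_slices(slices, z_positions, z_new):
    """Generate intermediary 2D slices along the catheter pullback axis.

    slices       : sequence of 2D arrays, all the same shape (original frames)
    z_positions  : axial position of each original slice
    z_new        : axial positions at which to synthesize new slices
    Returns an array of shape (len(z_new), H, W).
    """
    stack = np.asarray(slices, dtype=float)              # (n_slices, H, W)
    # Natural boundary conditions: zero second derivative at the stack ends.
    cs = CubicSpline(z_positions, stack, axis=0, bc_type="natural")
    return cs(np.asarray(z_new))
```

The nonlinearity matters when intensity varies smoothly along the vessel; for linearly varying data a natural spline reduces exactly to linear interpolation.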

  14. Five-dimensional ultrasound system for soft tissue visualization.

    PubMed

    Deshmukh, Nishikant P; Caban, Jesus J; Taylor, Russell H; Hager, Gregory D; Boctor, Emad M

    2015-12-01

    A five-dimensional ultrasound (US) system is proposed as a real-time pipeline involving fusion of 3D B-mode data with the 3D ultrasound elastography (USE) data as well as visualization of these fused data and a real-time update capability over time for each consecutive scan. 3D B-mode data assist in visualizing the anatomy of the target organ, and 3D elastography data adds strain information. We investigate the feasibility of such a system and show that an end-to-end real-time system, from acquisition to visualization, can be developed. We present a system that consists of (a) a real-time 3D elastography algorithm based on a normalized cross-correlation (NCC) computation on a GPU; (b) real-time 3D B-mode acquisition and network transfer; (c) scan conversion of 3D elastography and B-mode volumes (if acquired by 4D wobbler probe); and (d) visualization software that fuses, visualizes, and updates 3D B-mode and 3D elastography data in real time. We achieved a speed improvement of 4.45-fold for the threaded version of the NCC-based 3D USE versus the non-threaded version. The maximum speed was 79 volumes/s for 3D scan conversion. In a phantom, we validated the dimensions of a 2.2-cm-diameter sphere scan-converted to B-mode volume. Also, we validated the 5D US system visualization transfer function and detected 1- and 2-cm spherical objects (phantom lesion). Finally, we applied the system to a phantom consisting of three lesions to delineate the lesions from the surrounding background regions of the phantom. A 5D US system is achievable with real-time performance. We can distinguish between hard and soft areas in a phantom using the transfer functions.
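    The NCC computation at the core of the elastography pipeline can be illustrated in a few lines: for each pre-compression window, the axial displacement is the offset maximizing normalized cross-correlation with the post-compression signal. This is the per-window operation a GPU kernel would run in parallel; window and search sizes here are arbitrary illustrative values:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length 1D windows (in [-1, 1])."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def estimate_shift(pre, post, win=32, search=8):
    """Axial displacement (in samples) of one window: the offset at which the
    pre-compression window best matches the post-compression signal."""
    ref = pre[:win]
    scores = [ncc(ref, post[s:s + win]) for s in range(search + 1)]
    return int(np.argmax(scores))
```

Mapping such per-window displacements over the whole volume yields the strain field that is then fused with the B-mode anatomy.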

  15. A Java tool for dynamic web-based 3D visualization of anatomy and overlapping gene or protein expression patterns.

    PubMed

    Gerth, Victor E; Vize, Peter D

    2005-04-01

    The Gene Expression Viewer is a web-launched three-dimensional visualization tool, tailored to compare surface reconstructions of multi-channel image volumes generated by confocal microscopy or micro-CT.

  16. Obstructed bi-leaflet prosthetic mitral valve imaging with real-time three-dimensional transesophageal echocardiography.

    PubMed

    Shimbo, Mai; Watanabe, Hiroyuki; Kimura, Shunsuke; Terada, Mai; Iino, Takako; Iino, Kenji; Ito, Hiroshi

    2015-01-01

    Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) can provide unique visualization and better understanding of the relationship among cardiac structures. Here, we report the case of an 85-year-old woman with an obstructed mitral prosthetic valve diagnosed promptly by RT3D-TEE, which clearly showed a leaflet stuck in the closed position. The opening and closing angles of the valve leaflets measured by RT3D-TEE were compatible with those measured by fluoroscopy. Moreover, RT3D-TEE revealed, in the ring of the prosthetic valve, thrombi that were not visible on fluoroscopy. RT3D-TEE might be a valuable diagnostic technique for prosthetic mitral valve thrombosis. © 2014 Wiley Periodicals, Inc.

  17. Visualizing Three-dimensional Slab Geometries with ShowEarthModel

    NASA Astrophysics Data System (ADS)

    Chang, B.; Jadamec, M. A.; Fischer, K. M.; Kreylos, O.; Yikilmaz, M. B.

    2017-12-01

    Seismic data that characterize the morphology of modern subducted slabs on Earth suggest that a two-dimensional paradigm is no longer adequate to describe the subduction process. Here we demonstrate interactive data exploration of three-dimensional (3D) global slab geometries with the open-source program ShowEarthModel. ShowEarthModel was designed specifically to support data exploration, by focusing on interactivity and real-time response using the Vrui toolkit. Sixteen movies are presented that explore the 3D complexity of modern subduction zones on Earth. The first movie provides a guided tour through the Earth's major subduction zones, comparing the global slab geometry data sets of Gudmundsson and Sambridge (1998), Syracuse and Abers (2006), and Hayes et al. (2012). Fifteen regional movies explore individual subduction zones and regions with intersecting slabs, using the Hayes et al. (2012) slab geometry models where available and the Engdahl and Villasenor (2002) global earthquake data set. Viewing the subduction zones in this way provides an improved conceptualization of the 3D morphology within a given subduction zone as well as the 3D spatial relations between the intersecting slabs. This approach provides a powerful tool for rendering earth properties and broadening capabilities in both Earth Science research and education by allowing for whole earth visualization. The 3D characterization of global slab geometries is placed in the context of 3D slab-driven mantle flow and observations of shear wave splitting in subduction zones. These visualizations contribute to the paradigm shift from a 2D to a 3D subduction framework by facilitating the conceptualization of the modern subduction system on Earth in 3D space.

  18. Vertical visual features have a strong influence on cuttlefish camouflage.

    PubMed

    Ulmer, K M; Buresch, K C; Kossodo, M M; Mäthger, L M; Siemann, L A; Hanlon, R T

    2013-04-01

    Cuttlefish and other cephalopods use visual cues from their surroundings to adaptively change their body pattern for camouflage. Numerous previous experiments have demonstrated the influence of two-dimensional (2D) substrates (e.g., sand and gravel habitats) on camouflage, yet many marine habitats have varied three-dimensional (3D) structures among which cuttlefish camouflage from predators, including benthic predators that view cuttlefish horizontally against such 3D backgrounds. We conducted laboratory experiments, using Sepia officinalis, to test the relative influence of horizontal versus vertical visual cues on cuttlefish camouflage: 2D patterns on benthic substrates were tested versus 2D wall patterns and 3D objects with patterns. Specifically, we investigated the influence of (i) quantity and (ii) placement of high-contrast elements on a 3D object or a 2D wall, as well as (iii) the diameter and (iv) number of 3D objects with high-contrast elements on cuttlefish body pattern expression. Additionally, we tested the influence of high-contrast visual stimuli covering the entire 2D benthic substrate versus the entire 2D wall. In all experiments, visual cues presented in the vertical plane evoked the strongest body pattern response in cuttlefish. These experiments support field observations that, in some marine habitats, cuttlefish will respond to vertically oriented background features even when the preponderance of visual information in their field of view seems to be from the 2D surrounding substrate. Such choices highlight the selective decision-making that occurs in cephalopods with their adaptive camouflage capability.

  19. "Let's Get Physical": Advantages of a Physical Model over 3D Computer Models and Textbooks in Learning Imaging Anatomy

    ERIC Educational Resources Information Center

    Preece, Daniel; Williams, Sarah B.; Lam, Richard; Weller, Renate

    2013-01-01

    Three-dimensional (3D) information plays an important part in medical and veterinary education. Appreciating complex 3D spatial relationships requires a strong foundational understanding of anatomy and mental 3D visualization skills. Novel learning resources have been introduced to anatomy training to achieve this. Objective evaluation of their…

  20. Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.

    PubMed

    Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn

    2016-10-01

    The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool that provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. The tool is built on Java/Java3D/JOGL and runs as a standalone application compatible with all relevant operating systems; however, it requires Java and a local installation of the software. Here we present the prototype of an alternative web-based visualization approach using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios, including networks mapped to 3D cell components, by simply providing a URL to a collaboration partner. This publication describes the integration of the different technologies - Three.js, D3.js and PHP - as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.

  1. Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display.

    PubMed

    Kim, Jonghyun; Moon, Seokil; Jeong, Youngmo; Jang, Changwon; Kim, Youngmin; Lee, Byoungho

    2018-06-01

    Here, we present dual-dimensional microscopy that captures both two-dimensional (2-D) and light-field images of an in-vivo sample simultaneously, synthesizes an upsampled light-field image in real time, and visualizes it with a computational light-field display system in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit and compensates for the image degradation at the native object plane. The whole process from capturing to displaying is done in real time with the parallel computation algorithm, which enables the observation of the sample's three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  2. Usefulness of 3-dimensional stereotactic surface projection FDG PET images for the diagnosis of dementia

    PubMed Central

    Kim, Jahae; Cho, Sang-Geon; Song, Minchul; Kang, Sae-Ryung; Kwon, Seong Young; Choi, Kang-Ho; Choi, Seong-Min; Kim, Byeong-Chae; Song, Ho-Chun

    2016-01-01

    To compare the diagnostic performance and confidence of standard visual reading alone and combined with 3-dimensional stereotactic surface projection (3D-SSP) results in discriminating between Alzheimer disease (AD)/mild cognitive impairment (MCI), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD). [18F]fluorodeoxyglucose (FDG) PET brain images were obtained from 120 patients (64 AD/MCI, 38 DLB, and 18 FTD) whose diagnoses were clinically confirmed over 2 years of follow-up. Three nuclear medicine physicians made diagnoses and rated diagnostic confidence twice: once using standard visual methods, and once with the addition of 3D-SSP. Diagnostic performance and confidence were compared between the 2 methods. 3D-SSP showed higher sensitivity, specificity, accuracy, and positive and negative predictive values for discriminating different types of dementia compared with the visual method alone, except for AD/MCI specificity and FTD sensitivity. Correction of misdiagnosis after adding 3D-SSP images was greatest for AD/MCI (56%), followed by DLB (13%) and FTD (11%). Diagnostic confidence also increased in DLB (visual: 3.2; 3D-SSP: 4.1; P < 0.001), followed by AD/MCI (visual: 3.1; 3D-SSP: 3.8; P = 0.002) and FTD (visual: 3.5; 3D-SSP: 4.2; P = 0.022). Overall, 154/360 (43%) cases had a corrected misdiagnosis or improved diagnostic confidence for the correct diagnosis. The addition of 3D-SSP images to visual analysis helped to discriminate different types of dementia in FDG PET scans, by correcting misdiagnoses and enhancing diagnostic confidence in the correct diagnosis. Improvement of diagnostic accuracy and confidence by 3D-SSP images might help to determine the cause of dementia and the appropriate treatment. PMID:27930593

  3. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  4. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
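    The KDE step described above can be sketched in a few lines. The sketch below uses Silverman's rule-of-thumb bandwidth rather than the paper's kernel-trick bandwidth estimation, so treat it as a stand-in for the general technique, not the proposed BE method:

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE."""
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def kde(grid, samples, h):
    """Gaussian kernel density estimate of `samples` evaluated on `grid`."""
    diffs = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 500)
h = silverman_bandwidth(samples)
grid = np.linspace(-4.0, 4.0, 81)
density = kde(grid, samples, h)
# Sanity check: the estimated density should integrate to roughly 1.
print(f"h={h:.3f}, integral~{density.sum() * (grid[1] - grid[0]):.2f}")
```

    In the coding scheme, a density estimated this way models the statistical behavior of previously decoded samples, from which the MMSE predictor for the current block is derived.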

  5. Real-time visual tracking of less textured three-dimensional objects on mobile platforms

    NASA Astrophysics Data System (ADS)

    Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2012-12-01

    Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.

  6. Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography.

    PubMed

    Wojtkowski, Maciej; Srinivasan, Vivek; Fujimoto, James G; Ko, Tony; Schuman, Joel S; Kowalczyk, Andrzej; Duker, Jay S

    2005-10-01

    To demonstrate high-speed, ultrahigh-resolution, 3-dimensional optical coherence tomography (3D OCT) and new protocols for retinal imaging. Ultrahigh-resolution OCT using broadband light sources achieves axial image resolutions of approximately 2 microm, compared with the standard 10-microm resolution of current commercial OCT instruments. High-speed OCT using spectral/Fourier domain detection enables dramatic increases in imaging speeds. Three-dimensional OCT retinal imaging is performed in normal human subjects using high-speed ultrahigh-resolution OCT. Three-dimensional OCT data of the macula and optic disc are acquired using a dense raster scan pattern. New processing and display methods for generating virtual OCT fundus images; cross-sectional OCT images with arbitrary orientations; quantitative maps of retinal, nerve fiber layer, and other intraretinal layer thicknesses; and optic nerve head topographic parameters are demonstrated. Three-dimensional OCT imaging enables new imaging protocols that improve visualization and mapping of retinal microstructure. An OCT fundus image can be generated directly from the 3D OCT data, which enables precise and repeatable registration of cross-sectional OCT images and thickness maps with fundus features. Optical coherence tomography images with arbitrary orientations, such as circumpapillary scans, can be generated from 3D OCT data. Mapping of total retinal thickness and thicknesses of the nerve fiber layer, photoreceptor layer, and other intraretinal layers is demonstrated. Measurement of optic nerve head topography and disc parameters is also possible. Three-dimensional OCT enables measurements that are similar to those of standard instruments, including the StratusOCT, GDx, HRT, and RTA. Three-dimensional OCT imaging can be performed using high-speed ultrahigh-resolution OCT. Three-dimensional OCT provides comprehensive visualization and mapping of retinal microstructures.
The high data acquisition speeds enable high-density data sets with large numbers of transverse positions on the retina, which reduces the possibility of missing focal pathologies. In addition to providing image information such as OCT cross-sectional images, OCT fundus images, and 3D rendering, quantitative measurement and mapping of intraretinal layer thickness and topographic features of the optic disc are possible. We hope that 3D OCT imaging may help to elucidate the structural changes associated with retinal disease as well as improve early diagnosis and monitoring of disease progression and response to treatment.

  7. Three dimensional audio versus head down TCAS displays

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Pittman, Marc T.

    1994-01-01

    The advantage of a head up auditory display was evaluated in an experiment designed to measure and compare the acquisition time for capturing visual targets under two conditions: Standard head down traffic collision avoidance system (TCAS) display, and three-dimensional (3-D) audio TCAS presentation. Ten commercial airline crews were tested under full mission simulation conditions at the NASA Ames Crew-Vehicle Systems Research Facility Advanced Concepts Flight Simulator. Scenario software generated targets corresponding to aircraft which activated a 3-D aural advisory or a TCAS advisory. Results showed a significant difference in target acquisition time between the two conditions, favoring the 3-D audio TCAS condition by 500 ms.

  8. A three-dimensional radiation image display on a real space image created via photogrammetry

    NASA Astrophysics Data System (ADS)

    Sato, Y.; Ozawa, S.; Tanifuji, Y.; Torii, T.

    2018-03-01

    The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the occurrence of a large tsunami caused by the Great East Japan Earthquake of March 11, 2011. The radiation distribution measurements inside the FDNPS buildings are indispensable to execute decommissioning tasks in the reactor buildings. We have developed a three-dimensional (3D) image reconstruction method for radioactive substances using a compact Compton camera. Moreover, we succeeded in visually recognizing the position of radioactive substances in real space by the integration of 3D radiation images and the 3D photo-model created using photogrammetry.

  9. Visualization of Sources in the Universe

    NASA Astrophysics Data System (ADS)

    Kafatos, M.; Cebral, J. R.

    1993-12-01

    We have begun to develop a series of visualization tools for the display of astronomical data and have applied these to the visualization of cosmological sources in the recently formed Institute for Computational Sciences and Informatics at GMU. For three-dimensional data, one can use a three-dimensional perspective plot of the density surface, in which case the iso-level contours are three-dimensional surfaces. Sophisticated rendering algorithms combined with multiple-source lighting allow us to examine such density contours closely and to see fine structure on their surfaces. Stereoscopic and transparent rendering give an even more sophisticated approach, with multi-layered surfaces providing information at different levels. We have applied these methods to density surfaces of 3-D data such as 100 clusters of galaxies and 2500 galaxies in the CfA redshift survey. The plots presented are based on three variables: right ascension, declination, and redshift. We have also obtained density structures in 2-D for the distribution of gamma-ray bursts (where distances are unknown) and for the distribution of a variety of sources such as clusters of galaxies. Our techniques allow correlations to be examined visually.
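    The density surfaces described above start from a gridded density field built from point positions. A hedged NumPy sketch of that first step, with a purely synthetic catalog standing in for survey data (all cluster positions and bin counts invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical catalog of (RA deg, Dec deg, redshift) triples: two synthetic
# clusters plus a uniform background.
cluster_a = rng.normal([150.0, 20.0, 0.05], [2.0, 2.0, 0.005], size=(800, 3))
cluster_b = rng.normal([200.0, -10.0, 0.08], [2.0, 2.0, 0.005], size=(800, 3))
background = np.column_stack([
    rng.uniform(120, 230, 900),
    rng.uniform(-30, 40, 900),
    rng.uniform(0.0, 0.1, 900),
])
catalog = np.vstack([cluster_a, cluster_b, background])

# Bin the catalog into a gridded density field; iso-level contours of this
# field are the three-dimensional surfaces a renderer would display.
density, edges = np.histogramdd(catalog, bins=(22, 14, 10))
print(density.shape, int(density.sum()))  # -> (22, 14, 10) 2500
```

    A surface-extraction algorithm (e.g. marching cubes) applied to `density` at a chosen iso-level then yields the contour surfaces to be rendered and lit.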

  10. Fractal tomography and its application in 3D vision

    NASA Astrophysics Data System (ADS)

    Trubochkina, N.

    2018-01-01

    A three-dimensional artistic fractal tomography method that implements a non-glasses 3D visualization of fractal worlds in layered media is proposed. It is designed for the glasses-free 3D vision of digital art objects and films containing fractal content. Prospects for the development of this method in art galleries and the film industry are considered.

  11. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    PubMed

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of images with the parameters already applied, which would unnecessarily duplicate the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which required multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
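    As a hypothetical illustration of the idea (not the actual DICOM object definition), the four task groups named above can be captured as a serializable parameter container, so that the parameters, rather than rendered snapshots, are stored with the study. Every field and key name below is invented for illustration:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class PresentationState3D:
    """Hypothetical parameter container mirroring the four task groups;
    field names are illustrative, not the DICOM attribute set."""
    preprocessing: dict = field(default_factory=dict)   # e.g. windowing, filtering
    segmentation: dict = field(default_factory=dict)    # e.g. compressed label maps
    postprocessing: dict = field(default_factory=dict)  # e.g. smoothing, cropping
    rendering: dict = field(default_factory=dict)       # e.g. transfer function, camera

state = PresentationState3D(
    preprocessing={"window_center": 40, "window_width": 400},
    rendering={"camera_azimuth_deg": 30, "opacity_curve": [0.0, 0.2, 1.0]},
)
blob = json.dumps(asdict(state))                 # store alongside the series
restored = PresentationState3D(**json.loads(blob))
print(restored.rendering["camera_azimuth_deg"])  # -> 30
```

    Re-applying a restored state to the original volume regenerates the identical rendering without repeating the intermediate segmentation and post-processing work, which is the efficiency the paper reports.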

  12. 3D Flow visualization in virtual reality

    NASA Astrophysics Data System (ADS)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

    By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. A program was developed with the Unity cross-platform game engine to allow the user to control and change the model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
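    The Q-criterion isosurfaces mentioned above come from a standard scalar field, Q = ½(‖Ω‖² − ‖S‖²), where S and Ω are the symmetric and antisymmetric parts of the velocity-gradient tensor; regions with Q > 0 are rotation-dominated and mark vortices. A minimal NumPy sketch on a synthetic solid-body-rotation field (not the study's wake data):

```python
import numpy as np

def q_criterion(u, v, w, dx=1.0):
    """Q = 0.5 * (||Omega||^2 - ||S||^2) on a uniform grid; S and Omega are
    the symmetric and antisymmetric parts of the velocity-gradient tensor."""
    grads = [np.gradient(c, dx) for c in (u, v, w)]  # grads[i][j] = du_i/dx_j
    q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            s = 0.5 * (grads[i][j] + grads[j][i])
            o = 0.5 * (grads[i][j] - grads[j][i])
            q += 0.5 * (o ** 2 - s ** 2)
    return q

# Solid-body rotation about the z axis: pure rotation, so Q > 0 everywhere.
ax = np.linspace(-1.0, 1.0, 16)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
u, v, w = -y, x, np.zeros_like(x)
q = q_criterion(u, v, w, dx=ax[1] - ax[0])
print(bool(np.all(q > 0)))  # -> True
```

    Extracting an isosurface of `q` at a positive threshold yields exactly the kind of vortex geometry the VR application renders.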

  13. Real-time three-dimensional transesophageal echocardiography in valve disease: comparison with surgical findings and evaluation of prosthetic valves.

    PubMed

    Sugeng, Lissa; Shernan, Stanton K; Weinert, Lynn; Shook, Doug; Raman, Jai; Jeevanandam, Valluvan; DuPont, Frank; Fox, John; Mor-Avi, Victor; Lang, Roberto M

    2008-12-01

    Recently, a novel real-time 3-dimensional (3D) matrix-array transesophageal echocardiographic (3D-MTEE) probe was found to be highly effective in the evaluation of native mitral valves (MVs) and other intracardiac structures, including the interatrial septum and left atrial appendage. However, the ability to visualize prosthetic valves using this transducer has not been evaluated. Moreover, the diagnostic accuracy of this new technology has never been validated against surgical findings. This study was designed to (1) assess the quality of 3D-MTEE images of prosthetic valves and (2) determine the potential value of 3D-MTEE imaging in the preoperative assessment of valvular pathology by comparing images with surgical findings. Eighty-seven patients undergoing clinically indicated transesophageal echocardiography were studied. In 40 patients, 3D-MTEE images of prosthetic MVs, aortic valves (AVs), and tricuspid valves (TVs) were scored for the quality of visualization. For both MVs and AVs, mechanical and bioprosthetic valves, the rings and leaflets were scored individually. In 47 additional patients, intraoperative 3D-MTEE diagnoses of MV pathology obtained before initiating cardiopulmonary bypass were compared with surgical findings. For the visualization of prosthetic MVs and annuloplasty rings, quality was superior compared with AV and TV prostheses. In addition, 3D-MTEE imaging had 96% agreement with surgical findings. Three-dimensional matrix-array transesophageal echocardiographic imaging provides superb imaging and accurate presurgical evaluation of native MV pathology and prostheses. However, the current technology is less accurate for the clinical assessment of AVs and TVs. Fast acquisition and immediate online display will make this the modality of choice for MV surgical planning and postsurgical follow-up.

  14. Data Visualization for ESM and ELINT: Visualizing 3D and Hyper Dimensional Data

    DTIC Science & Technology

    2011-06-01

    A technique to present multiple 2D views was devised by D. Asimov, who assembled multiple two-dimensional scatter plot views of the hyper-dimensional… [1] “Viewing Multidimensional Data”, D. Asimov, SIAM Journal on Scientific and Statistical Computing, vol. 6(1), pp. 128-143, 1985. [2] “High-Dimensional

  15. The Visible Human Project: From Body to Bits.

    PubMed

    Ackerman, Michael J

    2017-01-01

    Atlases of anatomy have long been a mainstay for visualizing and identifying features of the human body [1]. Many are constructed of idealized illustrations rendered so that structures are presented as three-dimensional (3-D) pictures. Others have employed photographs of actual dissections. Still others are composed of collections of artist renderings of organs or areas of interest. All rely on a basically two-dimensional (2-D) graphic display to depict and allow for a better understanding of a complicated 3-D structure.

  16. Real-time three-dimensional ultrasound-assisted axillary plexus block defines soft tissue planes.

    PubMed

    Clendenen, Steven R; Riutort, Kevin; Ladlie, Beth L; Robards, Christopher; Franco, Carlo D; Greengrass, Roy A

    2009-04-01

    Two-dimensional (2D) ultrasound is commonly used for regional block of the axillary brachial plexus. In this technical case report, we describe a real-time three-dimensional (3D) ultrasound-guided axillary block. The difference between 2D and 3D ultrasound is similar to the difference between a plain radiograph and computed tomography. Unlike 2D ultrasound, which captures a planar image, 3D ultrasound technology acquires a 3D volume of information that enables multiple planes of view by manipulating the image without movement of the ultrasound probe. Observation of the brachial plexus in cross-section demonstrated distinct linear hyperechoic tissue structures (loose connective tissue) that initially inhibited the flow of the local anesthetic. After completion of the injection, we were able to visualize the influence of arterial pulsation on the spread of the local anesthetic. Possible advantages of this novel technology over current 2D methods are a wider image volume and the capability to manipulate the planes of the image without moving the probe.

  17. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
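    The core idea above, augmenting 2D features with a depth-derived cue, can be sketched in a few lines. The weights and features below are purely illustrative, not the learned model from the paper:

```python
import numpy as np

def saliency(image, depth, w_2d=0.6, w_depth=0.4):
    """Toy saliency: a global-contrast 2D cue plus a depth prior that favors
    close-up pixels (small depth values). Weights are illustrative."""
    norm = lambda m: (m - m.min()) / (np.ptp(m) + 1e-9)
    contrast = np.abs(image - image.mean())
    closeness = depth.max() - depth
    return w_2d * norm(contrast) + w_depth * norm(closeness)

# A bright patch that is also closest to the camera dominates the map.
image = np.zeros((8, 8)); image[2:4, 2:4] = 1.0
depth = np.full((8, 8), 5.0); depth[2:4, 2:4] = 1.0
sal = saliency(image, depth)
r, c = np.unravel_index(sal.argmax(), sal.shape)
print(int(r), int(c))  # -> 2 2
```

    The paper's finding, that depth helps most for close-up objects against complex-texture backgrounds, corresponds to the case where the contrast cue alone is ambiguous and the closeness term breaks the tie.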

  18. Comparison of three-dimensional visualization techniques for depicting the scala vestibuli and scala tympani of the cochlea by using high-resolution MR imaging.

    PubMed

    Hans, P; Grant, A J; Laitt, R D; Ramsden, R T; Kassner, A; Jackson, A

    1999-08-01

    Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. The combination of an MR sequence optimized for use with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has been possible previously.
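    Of the four visualization algorithms compared above, the maximum intensity projection is the simplest to state: collapse the volume along the viewing axis, keeping the brightest voxel. A sketch on a synthetic high-signal channel (a loose stand-in for the fluid-filled scala on a T2-weighted volume; all geometry invented):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: collapse a 3D volume to 2D by keeping
    the brightest voxel along the chosen viewing axis."""
    return volume.max(axis=axis)

# Synthetic bright, curved, fluid-like channel inside a dark volume.
vol = np.zeros((32, 64, 64))
zz, yy, xx = np.indices(vol.shape)
tube = (yy - (20 + 10 * np.sin(xx / 10.0))) ** 2 + (zz - 16) ** 2 < 9
vol[tube] = 1.0

proj = mip(vol, axis=0)  # view along z: bright pixels trace the channel
print(proj.shape)        # -> (64, 64)
```

    Ray casting and isosurface rendering, which the clinicians preferred, differ in that they accumulate or threshold values along each ray instead of taking a bare maximum, preserving depth cues that MIP discards.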

  19. Three-Dimensional Printing: Basic Principles and Applications in Medicine and Radiology.

    PubMed

    Kim, Guk Bae; Lee, Sangwook; Kim, Haekang; Yang, Dong Hyun; Kim, Young-Hak; Kyung, Yoon Soo; Kim, Choung-Soo; Choi, Se Hoon; Kim, Bum Joon; Ha, Hojin; Kwon, Sun U; Kim, Namkug

    2016-01-01

    The advent of three-dimensional printing (3DP) technology has enabled the creation of a tangible and complex 3D object that goes beyond a simple 3D-shaded visualization on a flat monitor. Since the early 2000s, 3DP machines have been used only in hard tissue applications. Recently developed multi-materials for 3DP have been used extensively for a variety of medical applications, such as personalized surgical planning and guidance, customized implants, biomedical research, and preclinical education. In this review article, we discuss the 3D reconstruction process, touching on medical imaging, and various 3DP systems applicable to medicine. In addition, the 3DP medical applications using multi-materials are introduced, as well as our recent results.

  20. Creating Physical 3D Stereolithograph Models of Brain and Skull

    PubMed Central

    Kelley, Daniel J.; Farhoud, Mohammed; Meyerand, M. Elizabeth; Nelson, David L.; Ramirez, Lincoln F.; Dempsey, Robert J.; Wolf, Alan J.; Alexander, Andrew L.; Davidson, Richard J.

    2007-01-01

    The human brain and skull are three-dimensional (3D) anatomical structures with complex surfaces. However, medical images are often two-dimensional (2D) and provide incomplete visualization of structural morphology. To overcome this loss in dimension, we developed and validated a freely available, semi-automated pathway to build 3D virtual reality (VR) and hand-held, stereolithograph models. To evaluate whether surface visualization in 3D was more informative than in 2D, undergraduate students (n = 50) used the Gillespie scale to rate 3D VR and physical models of both a living patient-volunteer's brain and the skull of Phineas Gage, a historically famous railroad worker whose misfortune with a projectile tamping iron provided the first evidence of a structure-function relationship in the brain. Using our processing pathway, we successfully fabricated human brain and skull replicas and validated that the stereolithograph model preserved the scale of the VR model. Based on the Gillespie ratings, students indicated that the biological utility and quality of visual information at the surface of VR and stereolithograph models were greater than the 2D images from which they were derived. The method we developed is useful to create VR and stereolithograph 3D models from medical images and can be used to model hard or soft tissue in living or preserved specimens. Compared to 2D images, VR and stereolithograph models provide an extra dimension that enhances both the quality of visual information and the utility of surface visualization in neuroscience and medicine. PMID:17971879

  1. Intracranial MRA: single volume vs. multiple thin slab 3D time-of-flight acquisition.

    PubMed

    Davis, W L; Warnock, S H; Harnsberger, H R; Parker, D L; Chen, C X

    1993-01-01

    Single volume three-dimensional (3D) time-of-flight (TOF) MR angiography is the most commonly used noninvasive method for evaluating the intracranial vasculature. The sensitivity of this technique to signal loss from flow saturation limits its utility. A recently developed multislab 3D TOF technique, MOTSA, is less affected by flow saturation and would therefore be expected to yield improved vessel visualization. To study this hypothesis, intracranial MR angiograms were obtained on 10 volunteers using three techniques: MOTSA, single volume 3D TOF using a standard 4.9 ms TE (3D TOFA), and single volume 3D TOF using a 6.8 ms TE (3D TOFB). All three sets of axial source images and maximum intensity projection (MIP) images were reviewed. Each exam was evaluated for the number of intracranial vessels visualized. A total of 502 vessel segments were studied with each technique. With use of the MIP images, 86% of selected vessels were visualized with MOTSA, 64% with 3D TOFA (TE = 4.9 ms), and 67% with 3D TOFB (TE = 6.8 ms). Similarly, with the axial source images, 91% of selected vessels were visualized with MOTSA, 77% with 3D TOFA (TE = 4.9 ms), and 82% with 3D TOFB (TE = 6.8 ms). There is improved visualization of selected intracranial vessels in normal volunteers with MOTSA as compared with single volume 3D TOF. These improvements are believed to be primarily a result of decreased sensitivity to flow saturation seen with the MOTSA technique. No difference in overall vessel visualization was noted for the two single volume 3D TOF techniques.
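
    The maximum intensity projection used above can be stated in one line: along each ray through the volume, keep only the brightest voxel. A toy sketch with synthetic data (not MR images):

```python
import numpy as np

# Maximum intensity projection (MIP): for each ray through the volume,
# keep only the brightest voxel along that ray.
def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project a 3D volume to 2D by taking the per-ray maximum."""
    return volume.max(axis=axis)

# Tiny synthetic volume: one bright "vessel" voxel in a dim background.
vol = np.zeros((4, 4, 4))
vol[2, 1, 3] = 9.0           # bright flow signal
vol += 0.1                   # dim static tissue

proj = mip(vol, axis=0)      # project along the first axis
print(proj[1, 3])            # -> 9.1 (the vessel dominates that ray)
```

    Because only the single brightest voxel per ray survives, MIP renderings lose depth ordering, which is why the study above also reviews the axial source images.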

  2. Stereoscopic 3D entertainment and its effect on viewing comfort: comparison of children and adults.

    PubMed

    Pölönen, Monika; Järvenpää, Toni; Bilcu, Beatrice

    2013-01-01

    Children's and adults' viewing comfort during stereoscopic three-dimensional film viewing and computer game playing was studied. Certain mild changes in visual function, heterophoria and near point of accommodation values, as well as eyestrain and visually induced motion sickness levels were found when single setups were compared. The viewing system had an influence on viewing comfort, in particular for eyestrain levels, but no clear difference between two- and three-dimensional systems was found. Additionally, certain mild changes in visual functions and visually induced motion sickness levels between adults and children were found. In general, all of the system-task combinations caused mild eyestrain and possible changes in visual functions, but these changes in magnitude were small. According to subjective opinions that further support these measurements, using a stereoscopic three-dimensional system for up to 2 h was acceptable for most of the users regardless of their age. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  3. Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.

    PubMed

    Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi

    2014-10-20

    We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances, spanning the three-dimensional scene. The captured images are then back-projected into four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.
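
    The capture-and-back-project idea above can be sketched in flatland (one spatial plus one angular dimension): a refocused image at depth d integrates the light field along sheared lines, and back-projection smears each captured image along the same shear. The unit-shear geometry below is an assumption for the toy, not the paper's calibrated optics:

```python
import numpy as np

def refocus(L, d):
    """Forward model: image focused at depth d from light field L[x, u]."""
    X, U = L.shape
    img = np.zeros(X)
    for u in range(U):
        img += np.roll(L[:, u], d * u)   # shear, then integrate over angle
    return img / U

def backproject(stack, depths, U):
    """Smear each focal-stack image back into (x, u) space and average."""
    X = stack.shape[1]
    L = np.zeros((X, U))
    for img, d in zip(stack, depths):
        for u in range(U):
            L[:, u] += np.roll(img, -d * u)  # inverse shear
    return L / len(depths)

# A single point at x = 8 (in focus at depth 0), swept over three depths.
X, U = 16, 4
L_true = np.zeros((X, U))
L_true[8, :] = 1.0
depths = [0, 1, 2]
stack = np.array([refocus(L_true, d) for d in depths])
Lbp = backproject(stack, depths, U)
print(int(np.argmax(Lbp[:, 0])))  # -> 8: the point is recovered
```

    The back-projection is blurry (each defocused image smears into the reconstruction), which is why practical systems follow it with deconvolution or filtering.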

  4. Do Three-dimensional Visualization and Three-dimensional Printing Improve Hepatic Segment Anatomy Teaching? A Randomized Controlled Study.

    PubMed

    Kong, Xiangxue; Nie, Lanying; Zhang, Huijian; Wang, Zhanglin; Ye, Qiang; Tang, Lei; Li, Jianyi; Huang, Wenhua

    2016-01-01

    Hepatic segment anatomy is difficult for medical students to learn. Three-dimensional visualization (3DV) is a useful tool in anatomy teaching, but current models do not capture haptic qualities. However, three-dimensional printing (3DP) can produce highly accurate complex physical models. Therefore, in this study we aimed to develop a novel 3DP hepatic segment model and compare the teaching effectiveness of a 3DV model, a 3DP model, and a traditional anatomical atlas. A healthy candidate (female, 50 years old) was recruited and scanned with computed tomography. After three-dimensional (3D) reconstruction, the computed 3D images of the hepatic structures were obtained. The parenchyma model was divided into 8 hepatic segments to produce the 3DV hepatic segment model. The computed 3DP model was designed by removing the surrounding parenchyma and leaving the segmental partitions. Then, 6 experts evaluated the 3DV and 3DP models using a 5-point Likert scale. A randomized controlled trial was conducted to evaluate the educational effectiveness of these models compared with that of the traditional anatomical atlas. The 3DP model successfully displayed the hepatic segment structures with partitions. All experts agreed or strongly agreed that the 3D models provided good realism for anatomical instruction, with no significant differences between the 3DV and 3DP models in each index (p > 0.05). Additionally, the teaching effects show that the 3DV and 3DP models were significantly better than the traditional anatomical atlas in the first and second examinations (p < 0.05). Between the first and second examinations, only the traditional method group had significant declines (p < 0.05). A novel 3DP hepatic segment model was successfully developed. Both the 3DV and 3DP models could improve anatomy teaching significantly. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  5. Teaching 21st-Century Art Education in a "Virtual" Age: Art Cafe at Second Life

    ERIC Educational Resources Information Center

    Lu, Lilly

    2010-01-01

    The emerging three-dimensional (3D) virtual world (VW) technology offers great potential for teaching contemporary digital art and growing digital visual culture in 21st-century art education. Such online virtual worlds are built and conceptualized based on information visualization and visual metaphors. Recently, an increasing number of…

  6. Preparing Content-Rich Learning Environments with VPython and Excel, Controlled by Visual Basic for Applications

    ERIC Educational Resources Information Center

    Prayaga, Chandra

    2008-01-01

    A simple interface between VPython and Microsoft (MS) Office products such as Word and Excel, controlled by Visual Basic for Applications, is described. The interface allows the preparation of content-rich, interactive learning environments by taking advantage of the three-dimensional (3D) visualization capabilities of VPython and the GUI…

  7. A three-dimensional bucking system for optimal bucking of Central Appalachian hardwoods

    Treesearch

    Jingxin Wang; Jingang Liu; Chris B. LeDoux

    2009-01-01

    An optimal tree stem bucking system was developed for central Appalachian hardwood species using three-dimensional (3D) modeling techniques. ActiveX Data Objects were implemented via MS Visual C++/OpenGL to manipulate tree data, which were supported by a backend relational data model with five data entity types for stems, grades and prices, logs, defects, and stem shapes...

  8. A Review of Current Clinical Applications of Three-Dimensional Printing in Spine Surgery

    PubMed Central

    Job, Alan Varkey; Chen, Jing; Baek, Jung Hwan

    2018-01-01

    Three-dimensional (3D) printing is a transformative technology with a potentially wide range of applications in the field of orthopaedic spine surgery. This article aims to review the current applications, limitations, and future developments of 3D printing technology in orthopaedic spine surgery. Current preoperative applications of 3D printing include construction of complex 3D anatomic models for improved visual understanding, preoperative surgical planning, and surgical simulations for resident education. Intraoperatively, 3D printers have been successfully used in surgical guidance systems and in the creation of patient-specific implantable devices. Furthermore, 3D printing is revolutionizing the field of regenerative medicine and tissue engineering, allowing construction of biocompatible scaffolds suitable for cell growth and vasculature. Advances in printing technology and evidence of positive clinical outcomes are needed before 3D printing can expand further into the clinical setting. PMID:29503698

  9. A Review of Current Clinical Applications of Three-Dimensional Printing in Spine Surgery.

    PubMed

    Cho, Woojin; Job, Alan Varkey; Chen, Jing; Baek, Jung Hwan

    2018-02-01

    Three-dimensional (3D) printing is a transformative technology with a potentially wide range of applications in the field of orthopaedic spine surgery. This article aims to review the current applications, limitations, and future developments of 3D printing technology in orthopaedic spine surgery. Current preoperative applications of 3D printing include construction of complex 3D anatomic models for improved visual understanding, preoperative surgical planning, and surgical simulations for resident education. Intraoperatively, 3D printers have been successfully used in surgical guidance systems and in the creation of patient-specific implantable devices. Furthermore, 3D printing is revolutionizing the field of regenerative medicine and tissue engineering, allowing construction of biocompatible scaffolds suitable for cell growth and vasculature. Advances in printing technology and evidence of positive clinical outcomes are needed before 3D printing can expand further into the clinical setting.

  10. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data.

    PubMed

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-09-18

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions and, importantly, gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data

    PubMed Central

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-01-01

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions and, importantly, gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. PMID:25990738

  12. 3-D Topo Surface Visualization of Acid-Base Species Distributions: Corner Buttes, Corner Pits, Curving Ridge Crests, and Dilution Plains

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul

    2017-01-01

    Species TOPOS is a free software package for generating three-dimensional (3-D) topographic surfaces ("topos") for acid-base equilibrium studies. This upgrade adds 3-D species distribution topos to earlier surfaces that showed pH and buffer capacity behavior during titration and dilution procedures. It constructs topos by plotting…
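
    The species-distribution surfaces described above plot, for each pH, the fraction ("alpha") of each acid-base species. A minimal sketch of those fractions for a diprotic acid H2A, with illustrative pKa values (roughly carbonic-acid-like; not taken from Species TOPOS itself):

```python
import numpy as np

def alphas(pH, pKa1=6.35, pKa2=10.33):
    """Alpha fractions of H2A, HA-, and A2- at a given pH.

    The pKa defaults are illustrative, not values from the package.
    """
    h = 10.0 ** (-np.asarray(pH, dtype=float))
    Ka1, Ka2 = 10.0 ** -pKa1, 10.0 ** -pKa2
    denom = h**2 + Ka1 * h + Ka1 * Ka2
    return h**2 / denom, Ka1 * h / denom, Ka1 * Ka2 / denom

a0, a1, a2 = alphas(8.3)    # pH near the midpoint between the two pKa values
print(round(float(a1), 2))  # -> 0.98: the monoprotonated form dominates
```

    Evaluating these fractions over a grid of pH and dilution values produces exactly the kind of 3-D "topo" surface the package renders.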

  13. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography

    NASA Technical Reports Server (NTRS)

    Qin, J. X.; Shiota, T.; Thomas, J. D.

    2000-01-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  14. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography.

    PubMed

    Qin, J X; Shiota, T; Thomas, J D

    2000-11-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  15. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture.

  16. Three-dimensional visual guidance improves the accuracy of calculating right ventricular volume with two-dimensional echocardiography

    NASA Technical Reports Server (NTRS)

    Dorosz, Jennifer L.; Bolson, Edward L.; Waiss, Mary S.; Sheehan, Florence H.

    2003-01-01

    Three-dimensional guidance programs have been shown to increase the reproducibility of 2-dimensional (2D) left ventricular volume calculations, but these systems have not been tested in 2D measurements of the right ventricle. Using magnetic fields to identify the probe location, we developed a new 3-dimensional guidance system that displays the line of intersection, the plane of intersection, and the numeric angle of intersection between the current image plane and previously saved scout views. When used by both an experienced and an inexperienced sonographer, this guidance system increases the accuracy of the 2D right ventricular volume measurements using a monoplane pyramidal model. Furthermore, a reconstruction of the right ventricle, with a computed volume similar to the calculated 2D volume, can be displayed quickly by tracing a few anatomic structures on 2D scans.

  17. Quantitative volumetric Raman imaging of three dimensional cell cultures

    NASA Astrophysics Data System (ADS)

    Kallepitis, Charalambos; Bergholt, Mads S.; Mazo, Manuel M.; Leonardo, Vincent; Skaalure, Stacey C.; Maynard, Stephanie A.; Stevens, Molly M.

    2017-03-01

    The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell-material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy.

  18. The cranial nerve skywalk: A 3D tutorial of cranial nerves in a virtual platform.

    PubMed

    Richardson-Hatcher, April; Hazzard, Matthew; Ramirez-Yanez, German

    2014-01-01

    Visualization of the complex courses of the cranial nerves by students in the health-related professions is challenging through either diagrams in books or plastic models in the gross laboratory. Furthermore, dissection of the cranial nerves in the gross laboratory is an extremely meticulous task. Teaching and learning the cranial nerve pathways is difficult using two-dimensional (2D) illustrations alone. Three-dimensional (3D) models aid the teacher in describing intricate and complex anatomical structures and help students visualize them. The study of the cranial nerves can be supplemented with 3D, which permits the students to fully visualize their distribution within the craniofacial complex. This article describes the construction and usage of a virtual anatomy platform in Second Life™, which contains 3D models of the cranial nerves III, V, VII, and IX. The Cranial Nerve Skywalk features select cranial nerves and the associated autonomic pathways in an immersive online environment. This teaching supplement was introduced to groups of pre-healthcare professional students in gross anatomy courses at both institutions and student feedback is included. © 2014 American Association of Anatomists.

  19. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux, which limited the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool for studying and understanding microscopic objects.
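
    The slice-reassembly step described above (stacking 2D binary cross-sections into a 3D volume before surfacing it) can be sketched in Python rather than Matlab; the face-counting below is a crude stand-in for the surface an isosurface routine would actually triangulate:

```python
import numpy as np

def stack_slices(slices):
    """Stack 2D binary masks (one per confocal slice) into a 3D volume."""
    return np.stack(slices, axis=0).astype(bool)

def exposed_faces(vol):
    """Count object-voxel faces touching background -- a proxy for the
    surface area a renderer would cover with triangles."""
    padded = np.pad(vol, 1, constant_values=False)
    faces = 0
    for axis in range(3):
        for shift in (1, -1):
            neighbor = np.roll(padded, shift, axis=axis)
            faces += np.count_nonzero(padded & ~neighbor)
    return faces

# A single 1-voxel object is a cube with 6 exposed faces.
one = stack_slices([np.array([[1]])])
print(exposed_faces(one))  # -> 6
```

    Two adjacent voxels share one internal face, so they expose 10 faces rather than 12; the same bookkeeping, done per face with surface normals, is what a full surface renderer performs.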

  20. Evaluation of sequence alignments and oligonucleotide probes with respect to three-dimensional structure of ribosomal RNA using ARB software package

    PubMed Central

    Kumar, Yadhu; Westram, Ralf; Kipfer, Peter; Meier, Harald; Ludwig, Wolfgang

    2006-01-01

    Background Availability of high-resolution RNA crystal structures for the 30S and 50S ribosomal subunits and the subsequent validation of comparative secondary structure models have prompted biologists to use the three-dimensional structure of ribosomal RNA (rRNA) for evaluating sequence alignments of rRNA genes. Furthermore, the secondary and tertiary structural features of rRNA are highly useful and successfully employed in designing rRNA-targeted oligonucleotide probes intended for in situ hybridization experiments. RNA3D, a program that combines sequence alignment information with the three-dimensional structure of rRNA, was developed. Integration into the ARB software package, which is used extensively by the scientific community for phylogenetic analysis and molecular probe designing, has substantially extended the functionality of the ARB software suite with a 3D environment. Results The three-dimensional structure of rRNA is visualized in an OpenGL 3D environment with the ability to change the display and overlay information onto the molecule dynamically. Phylogenetic information derived from the multiple sequence alignments can be overlaid onto the molecule structure in real time. Superimposition of both statistical and non-statistical sequence-associated information onto the rRNA 3D structure can be done using a customizable color scheme, which is also applied to a textual sequence alignment for reference. Oligonucleotide probes designed by ARB probe design tools can be mapped onto the 3D structure along with the probe accessibility models for evaluation with respect to secondary and tertiary structural conformations of rRNA. Conclusion Visualization of the three-dimensional structure of rRNA in an intuitive display provides biologists with greater possibilities to carry out structure-based phylogenetic analysis. Coupled with secondary structure models of rRNA, the RNA3D program aids in validating the sequence alignments of rRNA genes and evaluating probe target sites. Superimposition of the information derived from the multiple sequence alignment onto the molecule dynamically allows researchers to observe any sequence-inherited characteristics (phylogenetic information) in a real-time environment. The extended ARB software package is made freely available for the scientific community via . PMID:16672074

  1. 'MBTI3D' (A Three-Dimensional Interpretation)

    DTIC Science & Technology

    1993-04-01

    preferential relationship -- individuals are pigeonholed into personality types based solely on preference inclination and with disregard for actual preference...values. Consequently, individual and group relationships, as represented by the MBTI, are not integrated the way most organizations perceive. The MBTI's...somewhat cerebral definition and its two-dimensional visual display present a limited portrayal of real-life multi-dimensional relationships. This

  2. bioWeb3D: an online webGL 3D data visualisation tool.

    PubMed

    Pettit, Jean-Baptiste; Marioni, John C

    2013-06-07

    Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all the necessary functionality to represent and manipulate biological 3D datasets, very few packages are easily accessible (browser-based), cross-platform, and usable by non-expert users. An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js, written in JavaScript, bioWeb3D allows the simultaneous visualisation of multiple large datasets input via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets.
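
    A minimal sketch of the kind of simple JSON point-cloud dataset such a browser tool might accept; the key names below are illustrative assumptions for the sketch, not bioWeb3D's documented schema:

```python
import json

# Illustrative 3D dataset: a named point cloud with a per-point group
# label that a viewer could map to colour. Key names are assumptions.
dataset = {
    "dataset": {
        "name": "toy cells",
        "points": [[0.0, 0.0, 0.0], [1.0, 0.5, 2.0], [0.2, 1.3, 0.7]],
        "class": [0, 1, 0],
    }
}

text = json.dumps(dataset)               # what would be sent to the browser
loaded = json.loads(text)                # what the client parses back
print(len(loaded["dataset"]["points"]))  # -> 3
```

    Serializing everything through plain JSON is what lets such a tool read and analyse the file locally in the browser, with no server-side processing.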

  3. Visualizing phylogenetic tree landscapes.

    PubMed

    Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A

    2017-02-02

    Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once, in two or three dimensions, the relationship among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in two and three dimensions the relationship among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in two and three dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationship among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationship among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationship among the trees being compared.
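
    Classical (metric) multidimensional scaling is the simplest member of this family of dimensionality reduction methods; CCA + SGD, favored in the study, is a nonlinear alternative. A self-contained sketch of embedding a tree-to-tree distance matrix in low dimensions via double centering:

```python
import numpy as np

def classical_mds(D, k=3):
    """Embed an n x n distance matrix D into R^k (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of the embedding
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # keep the k largest
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale           # n x k coordinates

# Four "trees" whose pairwise distances come from points on a line;
# the one-dimensional structure should be recovered exactly.
x = np.array([0.0, 1.0, 2.0, 4.0])
D = np.abs(x[:, None] - x[None, :])
Y = classical_mds(D, k=2)
d01 = float(np.linalg.norm(Y[0] - Y[1]))
print(round(d01, 6))  # -> 1.0: the original distance is preserved
```

    When the distance matrix is not exactly Euclidean (as with most tree-to-tree metrics), the clipped negative eigenvalues measure how much information a 2D or 3D projection must discard, which is what the goodness-of-fit measures above quantify.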

  4. A Dynamic Three-Dimensional Network Visualization Program for Integration into CyberCIEGE and Other Network Visualization Scenarios

    DTIC Science & Technology

    2007-06-01

    information flow involved in network attacks. This kind of information can be invaluable in learning how best to set up and defend computer networks...administrators, and those interested in learning about securing networks a way to conceptualize this complex system of computing. NTAV3D will provide a three...teaching with visual and other components can make learning more effective” (Baxley et al, 2006). A hyperbox (Alpern and Carter, 1991) is

  5. 3-D Teaching Models for All

    ERIC Educational Resources Information Center

    Bradley, Joan; Farland-Smith, Donna

    2010-01-01

    Allowing a student to "see" through touch what other students see through a microscope can be a challenging task. Therefore, author Joan Bradley created three-dimensional (3-D) models with one student's visual impairment in mind. They are meant to benefit all students and can be used to teach common high school biology topics, including the…

  6. Who Benefits from Learning with 3D Models?: The Case of Spatial Ability

    ERIC Educational Resources Information Center

    Huk, T.

    2006-01-01

    Empirical studies that focus on the impact of three-dimensional (3D) visualizations on learning are to date rare and inconsistent. According to the ability-as-enhancer hypothesis, high spatial ability learners should benefit particularly as they have enough cognitive capacity left for mental model construction. In contrast, the…

  7. A novel three-dimensional tool for teaching human neuroanatomy.

    PubMed

    Estevez, Maureen E; Lindgren, Kristen A; Bergethon, Peter R

    2010-01-01

    Three-dimensional (3D) visualization of neuroanatomy can be challenging for medical students. This knowledge is essential in order for students to correlate cross-sectional neuroanatomy and whole brain specimens within neuroscience curricula and to interpret clinical and radiological information as clinicians or researchers. This study implemented and evaluated a new tool for teaching 3D neuroanatomy to first-year medical students at Boston University School of Medicine. Students were randomized into experimental and control classrooms. All students were taught neuroanatomy according to traditional 2D methods. Then, during laboratory review, the experimental group constructed 3D color-coded physical models of the periventricular structures, while the control group re-examined 2D brain cross-sections. At the end of the course, 2D and 3D spatial relationships of the brain and preferred learning styles were assessed in both groups. The overall quiz scores for the experimental group were significantly higher than the control group (t(85) = 2.02, P < 0.05). However, when the questions were divided into those requiring either 2D or 3D visualization, only the scores for the 3D questions were significantly higher in the experimental group (F(1,85) = 5.48, P = 0.02). When surveyed, 84% of students recommended repeating the 3D activity for future laboratories, and this preference was equally distributed across preferred learning styles (χ² = 0.14, n.s.). Our results suggest that our 3D physical modeling activity is an effective method for teaching spatial relationships of brain anatomy and will better prepare students for visualization of 3D neuroanatomy, a skill essential for higher education in neuroscience, neurology, and neurosurgery. Copyright © 2010 American Association of Anatomists.

  8. A Novel Three-Dimensional Tool for Teaching Human Neuroanatomy

    PubMed Central

    Estevez, Maureen E.; Lindgren, Kristen A.; Bergethon, Peter R.

    2011-01-01

    Three-dimensional (3-D) visualization of neuroanatomy can be challenging for medical students. This knowledge is essential in order for students to correlate cross-sectional neuroanatomy and whole brain specimens within neuroscience curricula and to interpret clinical and radiological information as clinicians or researchers. This study implemented and evaluated a new tool for teaching 3-D neuroanatomy to first-year medical students at Boston University School of Medicine. Students were randomized into experimental and control classrooms. All students were taught neuroanatomy according to traditional 2-D methods. Then, during laboratory review, the experimental group constructed 3-D color-coded physical models of the periventricular structures, while the control group re-examined 2-D brain cross-sections. At the end of the course, 2-D and 3-D spatial relationships of the brain and preferred learning styles were assessed in both groups. The overall quiz scores for the experimental group were significantly higher than the control group (t(85) = 2.02, P < 0.05). However, when the questions were divided into those requiring either 2-D or 3-D visualization, only the scores for the 3-D questions were significantly higher in the experimental group (F1,85 = 5.48, P = 0.02). When surveyed, 84% of students recommended repeating the 3-D activity for future laboratories, and this preference was equally distributed across preferred learning styles (χ2 = 0.14, n.s.). Our results suggest that our 3-D physical modeling activity is an effective method for teaching spatial relationships of brain anatomy and will better prepare students for visualization of 3-D neuroanatomy, a skill essential for higher education in neuroscience, neurology, and neurosurgery. PMID:20939033

  9. The performance & flow visualization studies of three-dimensional (3-D) wind turbine blade models

    NASA Astrophysics Data System (ADS)

    Sutrisno, Prajitno, Purnomo, W., Setyawan B.

    2016-06-01

    Recently, studies on the design of 3-D wind turbine blades have received less attention, even though 3-D blade products are widely sold. In contrast, advanced studies of 3-D helicopter blade tips have been pursued rigorously. Studies of wind turbine blade modeling mostly assume that blade spanwise sections behave as independent two-dimensional airfoils, implying that there is no exchange of momentum in the spanwise direction. Moreover, flow visualization experiments are infrequently conducted. Therefore, a modeling study of wind turbine blades with flow visualization experiments is needed to obtain a better understanding. The purpose of this study is to investigate the performance of 3-D wind turbine blade models with backward and forward sweep and to verify the flow patterns using flow visualization. In this research, the blade models are constructed based on the twist and chord distributions following Schmitz's formula. Forward and backward sweep are added to the rotating blades. The added sweep is expected to enhance or diminish outward flow disturbance or the spanwise propagation of stall development on the blade surfaces, yielding a better blade design. Some combinations, i.e., blades with backward sweep, provide a better 3-D favorable rotational force of the rotor system. The performance of the 3-D wind turbine system model is measured by a torque meter employing a Prony braking system. Furthermore, the 3-D flow patterns around the rotating blade models are investigated by applying a tuft-visualization technique to study the appearance of laminar, separated, and boundary layer flow patterns surrounding the 3-dimensional blade system.

  10. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning.

    PubMed

    Gee, Carole T

    2013-11-01

    As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction.
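The density-based segmentation this record relies on ("if preserved in differing densities") reduces, at its core, to windowing voxel intensities. A toy sketch follows; the function name and threshold values are illustrative, and real microCT pipelines add smoothing and connected-component cleanup on top of this step.

```python
import numpy as np

def segment_tissue(volume, lo, hi):
    """Binary mask of voxels whose X-ray attenuation falls in [lo, hi].

    MicroCT can only separate tissues that differ in density, so a simple
    intensity window is the core of the segmentation step.
    """
    v = np.asarray(volume)
    return (v >= lo) & (v <= hi)

# Toy 3D "scan": background of 0 with a denser embedded "seed" of value 200.
scan = np.zeros((4, 4, 4))
scan[1:3, 1:3, 1:3] = 200
mask = segment_tissue(scan, 150, 255)
print(int(mask.sum()))  # 8 voxels labeled as seed tissue
```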

  11. Instructor-Led Approach to Integrating an Augmented Reality Sandbox into a Large-Enrollment Introductory Geoscience Course for Nonmajors Produces No Gains

    ERIC Educational Resources Information Center

    Giorgis, Scott; Mahlen, Nancy; Anne, Kirk

    2017-01-01

    The augmented reality (AR) sandbox bridges the gap between two-dimensional (2D) and three-dimensional (3D) visualization by projecting a digital topographic map onto a sandbox landscape. As the landscape is altered, the map dynamically adjusts, providing an opportunity to discover how to read topographic maps. We tested the hypothesis that the AR…

  12. [Research on Three-dimensional Medical Image Reconstruction and Interaction Based on HTML5 and Visualization Toolkit].

    PubMed

    Gao, Peng; Liu, Peng; Su, Hongsen; Qiao, Liang

    2015-04-01

    Integrating the Visualization Toolkit with the interaction, bidirectional communication, and graphics rendering capabilities provided by HTML5, we explored and tested the feasibility of remote medical image reconstruction and interaction purely in the Web. We proposed a server-centric method that does not require downloading large medical datasets to the local machine, avoiding concerns about network transmission pressure and the three-dimensional (3D) rendering capability of client hardware. The method integrates remote medical image reconstruction and interaction into the Web seamlessly, making it applicable to lower-end computers and mobile devices. Finally, we tested this method over the Internet and achieved real-time performance. This Web-based 3D reconstruction and interaction method, which works across Internet terminals and performance-limited devices, may be useful for remote medical assistance.

  13. Infusion of a Gaming Paradigm into Computer-Aided Engineering Design Tools

    DTIC Science & Technology

    2012-05-03

    Virtual Test Bed (VTB), and the gaming tool, Unity3D. This hybrid gaming environment coupled a three-dimensional (3D) multibody vehicle system model...from Google Earth to the 3D visual front-end fabricated around Unity3D. The hybrid environment was sufficiently developed to support analyses of the...The VTB simulation of the vehicle dynamics ran concurrently with and interacted with the gaming engine, Unity3D, which

  14. Discovering hidden relationships between renal diseases and regulated genes through 3D network visualizations

    PubMed Central

    2010-01-01

    Background In a recent study, two-dimensional (2D) network layouts were used to visualize and quantitatively analyze the relationship between chronic renal diseases and regulated genes. The results revealed complex relationships between disease type, gene specificity, and gene regulation type, which led to important insights about the underlying biological pathways. Here we describe an attempt to extend our understanding of these complex relationships by reanalyzing the data using three-dimensional (3D) network layouts, displayed through 2D and 3D viewing methods. Findings The 3D network layout (displayed through the 3D viewing method) revealed that genes implicated in many diseases (non-specific genes) tended to be predominantly down-regulated, whereas genes regulated in a few diseases (disease-specific genes) tended to be up-regulated. This new global relationship was quantitatively validated through comparison to 1000 random permutations of networks of the same size and distribution. Our new finding appeared to be the result of using specific features of the 3D viewing method to analyze the 3D renal network. Conclusions The global relationship between gene regulation and gene specificity is the first clue from human studies that there exist common mechanisms across several renal diseases, which suggest hypotheses for the underlying mechanisms. Furthermore, the study suggests hypotheses for why the 3D visualization helped to make salient a new regularity that was difficult to detect in 2D. Future research that tests these hypotheses should enable a more systematic understanding of when and how to use 3D network visualizations to reveal complex regularities in biological networks. PMID:21070623

  15. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

    Most genome browsers display DNA linearly, using single-dimensional depictions that are useful to examine certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or missed entirely if those regions are distant in one dimension but could be spatially proximal when mapped to three-dimensional space. For example, the visualization of enhancers folding over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.

  16. Research on conflict detection algorithm in 3D visualization environment of urban rail transit line

    NASA Astrophysics Data System (ADS)

    Wang, Li; Xiong, Jing; You, Kuokuo

    2017-03-01

    In this paper, a method of collision detection is introduced that rapidly extracts the buildings in conflict with the track area in a 3D visualization environment, realizing three-dimensional modeling of underground buildings and urban rail lines. The buildings are modeled using CSG and B-rep according to their characteristics. Building on these modeling characteristics, this paper proposes using an AABB hierarchical bounding-volume method for a fast first-pass conflict check to improve detection efficiency, followed by a fast triangle-triangle intersection algorithm for exact conflict detection, finally determining whether a building collides with the track area. With this algorithm, buildings colliding with the influence area of the track line can be extracted quickly, helping line designers choose the best route and calculate the cost of land acquisition in the three-dimensional visualization environment.
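The two-phase test this record describes — a cheap axis-aligned bounding box (AABB) overlap check before any exact triangle intersection — can be sketched as follows. This is illustrative code, not the paper's implementation; the function names are ours.

```python
def aabb(tris):
    """Axis-aligned bounding box (lo, hi corners) of a list of triangles,
    each given as three (x, y, z) points."""
    pts = [p for tri in tris for p in tri]
    lo = tuple(min(p[k] for p in pts) for k in range(3))
    hi = tuple(max(p[k] for p in pts) for k in range(3))
    return lo, hi

def aabb_overlap(a, b):
    """Two boxes overlap iff their extents overlap on every axis."""
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[k] <= bhi[k] and blo[k] <= ahi[k] for k in range(3))
```

Only the pairs whose boxes overlap on all three axes need to be handed to the exact (and far more expensive) triangle-triangle intersection test, which is what makes the first pass worthwhile.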

  17. Immersive Visualization of the Solid Earth

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain, and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis.
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.

  18. Novel Visualization Approaches in Environmental Mineralogy

    NASA Astrophysics Data System (ADS)

    Anderson, C. D.; Lopano, C. L.; Hummer, D. R.; Heaney, P. J.; Post, J. E.; Kubicki, J. D.; Sofo, J. O.

    2006-05-01

    Communicating the complexities of atomic scale reactions between minerals and fluids is fraught with intrinsic challenges. For example, an increasing number of techniques are now available for the interrogation of dynamical processes at the mineral-fluid interface. However, the time-dependent behavior of atomic interactions between a solid and a liquid is often not adequately captured by two-dimensional line drawings or images. At the same time, the necessity for describing these reactions to general audiences is growing more urgent, as funding agencies are amplifying their encouragement to scientists to reach across disciplines and to justify their studies to public audiences. To overcome the shortcomings of traditional graphical representations, the Center for Environmental Kinetics Analysis is creating three-dimensional visualizations of experimental and simulated mineral reactions. These visualizations are then displayed on a stereo 3D projection system called the GeoWall. Made possible (and affordable) by recent improvements in computer and data projector technology, the GeoWall system uses a combination of computer software and hardware, polarizing filters and polarizing glasses, to present visualizations in true 3D. The three-dimensional views greatly improve comprehension of complex multidimensional data, and animations of time series foster better understanding of the underlying processes. The visualizations also offer an effective means to communicate the complexities of environmental mineralogy to colleagues, students and the public. Here we present three different kinds of datasets that demonstrate the effectiveness of the GeoWall in clarifying complex environmental reactions at the atomic scale. First, a time-resolved series of diffraction patterns obtained during the hydrothermal synthesis of metal oxide phases from precursor solutions can be viewed as a surface with interactive controls for peak scaling and color mapping. 
Second, the results of Rietveld analysis of cation exchange reactions in Mn oxides have provided three-dimensional difference Fourier maps. When stitched together in a temporal series, these offer an animated view of changes in atomic configurations during the process of exchange. Finally, molecular dynamics simulations are visualized as three-dimensional reactions between vibrating atoms in both the solid and the aqueous phases.

  19. An image-guided planning system for endosseous oral implants.

    PubMed

    Verstreken, K; Van Cleynenbreugel, J; Martens, K; Marchal, G; van Steenberghe, D; Suetens, P

    1998-10-01

    A preoperative planning system for oral implant surgery was developed which takes as input computed tomographies (CTs) of the jaws. Two-dimensional (2-D) reslices of these axial CT slices, orthogonal to a curve following the jaw arch, are computed and shown together with three-dimensional (3-D) surface-rendered models of the bone and computer-aided design (CAD)-like implant models. A technique is developed for scanning and visualizing any existing removable prosthesis together with the bone structures. Evaluation of the planning done with the system shows a difference between 2-D and 3-D planning methods. Validation studies measure the benefits of the 3-D approach by comparing plans made in 2-D mode only with those further adjusted using the full 3-D visualization capabilities of the system. The benefits of a 3-D approach are then evident where a prosthesis is involved in the planning. For the majority of the patients, clinically important adjustments and optimizations to the 2-D plans are made once the 3-D visualization is enabled, effectively resulting in a better plan. The alterations are related to bone quality and quantity (p < 0.05), biomechanics (p < 0.005), and esthetics (p < 0.005), and are so obvious that the 3-D plan stands out clearly (p < 0.005). The improvements often avoid complications such as mandibular nerve damage, sinus perforations, fenestrations, or dehiscences.

  20. Incorporating 3-dimensional models in online articles.

    PubMed

    Cevidanes, Lucia H S; Ruellas, Antonio C O; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz

    2015-05-01

    The aims of this article are to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article's online version for viewing and downloading using the reader's software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader's software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. 
When submitting manuscripts, authors can now upload 3D models that will allow readers to interact with or download them. Such interaction with 3D models in online articles now will give readers and authors better understanding and visualization of the results. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  1. Building simple multiscale visualizations of outcrop geology using virtual reality modeling language (VRML)

    NASA Astrophysics Data System (ADS)

    Thurmond, John B.; Drzewiecki, Peter A.; Xu, Xueming

    2005-08-01

    Geological data collected from outcrop are inherently three-dimensional (3D) and span a variety of scales, from the megascopic to the microscopic. This presents challenges in both interpreting and communicating observations. The Virtual Reality Modeling Language provides an easy way for geoscientists to construct complex visualizations that can be viewed with free software. Field data in tabular form can be used to generate hierarchical multi-scale visualizations of outcrops, which can convey the complex relationships between a variety of data types simultaneously. An example from carbonate mud-mounds in southeastern New Mexico illustrates the embedding of three orders of magnitude of observation into a single visualization, for the purpose of interpreting depositional facies relationships in three dimensions. This type of raw data visualization can be built without software tools, yet is incredibly useful for interpreting and communicating data. Even simple visualizations can aid in the interpretation of complex 3D relationships that are frequently encountered in the geosciences.
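The multiscale VRML visualizations this record describes hinge on nodes such as LOD, which swaps representations as the viewer's distance changes. A minimal sketch of generating such a file from tabular field data might look like the following; the function name, scene content, and switch distance are illustrative, not taken from the paper.

```python
def vrml_lod(detailed, coarse, switch_dist=50.0):
    """Return a minimal VRML97 document with an LOD node that shows
    `detailed` when the viewer is within `switch_dist` meters of the
    object and `coarse` beyond it."""
    return (
        "#VRML V2.0 utf8\n"
        "LOD {\n"
        f"  range [ {switch_dist} ]\n"
        "  level [\n"
        f"    {detailed}\n"
        f"    {coarse}\n"
        "  ]\n"
        "}\n"
    )

# Close up: a sphere standing in for a detailed outcrop model;
# far away: a cheap box placeholder.
sphere = "Shape { geometry Sphere { radius 1 } }"
box = "Shape { geometry Box { size 2 2 2 } }"
doc = vrml_lod(sphere, box)
print(doc)
```

Nesting LOD nodes (an outcrop panorama containing bed-scale models containing thin-section imagery) gives the hierarchy of scales described above, and the resulting `.wrl` file is viewable in any free VRML browser.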

  2. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
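As an illustration of why volume rendering can be made cheap enough for low-end hardware, the simplest renderer, a maximum-intensity projection, is a single reduction over the view axis. This is a generic sketch of that idea, not the authors' renderer.

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum-intensity projection: collapse a 3D volume along the view
    axis, keeping the brightest sample on each ray. One vectorized
    reduction — no per-ray sampling loop."""
    return np.asarray(volume).max(axis=axis)

# A 3-voxel-deep toy volume; the projection keeps the brightest voxel per ray.
vol = np.array([[[0, 5], [1, 0]],
                [[9, 2], [0, 7]],
                [[3, 1], [4, 0]]])
print(mip(vol))  # [[9 5]
                 #  [4 7]]
```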

  3. [Application of hepatic segment resection combined with rigid choledochoscope in the treatment of complex hepatolithiasis guided by three-dimensional visualization technology].

    PubMed

    Xiang, Nan; Fang, Chihua

    2015-05-01

    To study the value of hepatic segment resection combined with rigid choledochoscopy, guided by three-dimensional (3D) visualization technology, in the diagnosis and treatment of complex hepatolithiasis. Enhanced computed tomography (CT) data of 46 patients with complex hepatolithiasis admitted to the Zhujiang Hospital of Southern Medical University from July 2010 to June 2014 were collected. All CT data were imported into the medical image three-dimensional visualization system (MI-3DVS) for 3D reconstruction and individualized 3D classification. The optimal scope of liver resection and the remnant liver volume were determined according to individualized liver segments defined by the distribution and variation of the hepatic and portal veins, the distribution of bile duct stones, and strictures of the bile duct; this guided intraoperative hepatic lobectomy and rigid choledochoscopy for clearance of remnant calculi. Outcomes of individualized 3D classification: 10 cases of type I, 11 cases of type IIa, 23 cases of type IIb, and 2 cases of type IIc; 19 cases had a history of biliary surgery. Hepatic artery variation appeared in 6 cases and portal vein variation in 8 cases. The remnant liver volume in virtual hepatic lobectomy was kept above 50%. Eighteen cases underwent left lateral hepatectomy, 8 left hepatectomy, 8 resection of the right posterior lobe, 4 right hepatectomy, 4 segment IV resection, 2 resection of the right anterior lobe, and 2 left lateral hepatectomy combined with resection of the right posterior lobe; 26 cases underwent targeted rigid choledochoscopy with pneumatic lithotripsy. The actual surgical procedures were consistent with the preoperative surgical planning. No postoperative residual liver ischemia, congestion, or liver failure occurred in this study.
The intraoperative calculus clearance rate was 91.3% (42/46); the 4 cases with postoperative residual calculi were unsuitable for one-stage management due to suppurative cholangitis, but the calculi were removed successfully with a rigid choledochoscope through the T-tube fistula. Hepatic segment resection combined with rigid choledochoscopy under the guidance of three-dimensional visualization technology achieves accurate preoperative diagnosis and a higher complete stone clearance rate for complicated hepatolithiasis.

  4. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how a user gazes along the 3D direction in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for NVidia 3D Vision(®) to be used with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
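A common geometric formulation of 3D gaze estimation (the paper's own optimized method is not reproduced here) triangulates the gaze point as the midpoint of the shortest segment between the two eyes' gaze rays. A sketch, with illustrative names throughout:

```python
import numpy as np

def gaze_point_3d(e1, d1, e2, d2):
    """Estimate the 3D gaze point from two eye rays.

    e1, e2: eye positions; d1, d2: gaze direction vectors.
    Returns the midpoint of the shortest segment between the rays
    e1 + s*d1 and e2 + t*d2 (the rays rarely intersect exactly).
    """
    e1, d1, e2, d2 = map(np.asarray, (e1, d1, e2, d2))
    # Normal equations of min_{s,t} ||(e1 + s*d1) - (e2 + t*d2)||^2:
    #   s(d1.d1) - t(d1.d2) = (e2 - e1).d1
    #   s(d1.d2) - t(d2.d2) = (e2 - e1).d2
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(e2 - e1) @ d1, (e2 - e1) @ d2])
    s, t = np.linalg.solve(A, b)
    return ((e1 + s * d1) + (e2 + t * d2)) / 2.0
```

Because vergence measurements are noisy, the two rays are almost never exactly concurrent; taking the midpoint of their closest approach is what makes this formulation usable in practice.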

  5. The Abilities of Understanding Spatial Relations, Spatial Orientation, and Spatial Visualization Affect 3D Product Design Performance: Using Carton Box Design as an Example

    ERIC Educational Resources Information Center

    Liao, Kun-Hsi

    2017-01-01

    Three-dimensional (3D) product design is an essential ability that students of subjects related to product design must acquire. The factors that affect designers' performance in 3D design are numerous, one of which is spatial abilities. Studies have reported that spatial abilities can be used to effectively predict people's performance in…

  6. Spatio-temporal brain activity related to rotation method during a mental rotation task of three-dimensional objects: an MEG study.

    PubMed

    Kawamichi, Hiroaki; Kikuchi, Yoshiaki; Ueno, Shoogo

    2007-09-01

    During mental rotation tasks, subjects perform mental simulation to solve the tasks. However, the detailed neural mechanisms underlying mental rotation of three-dimensional (3D) objects, particularly whether higher motor areas related to mental simulation are activated, remain unknown. We hypothesized that environmental monitoring, a process that is based on environmental information and is included in motor execution, is a key factor affecting the utilization of higher motor areas. Therefore, using magnetoencephalography (MEG), we measured spatio-temporal brain activities during two types of mental rotation of 3D objects: two-dimensional (2D) and 3D rotation tasks. Only the 3D rotation tasks required subjects to mentally rotate objects in a depth plane, visualizing hidden parts of the visual stimuli by acquiring and retrieving 3D information. In cases showing significant differences in the averaged activities at 100-ms intervals between the two rotations, the activities were located in the right dorsal premotor cortex (PMd) at approximately 500 ms. In these cases, averaged activities during 3D rotation were greater than those during 2D rotation, implying that the right PMd activities are related to environmental monitoring. During 3D rotation, higher activities were observed from 200 to 300 ms in the left PMd and from 400 to 700 ms in the right PMd. It is considered that the left PMd is related to primary motor control, whereas the right PMd plays a supplementary role during mental simulation. Further, during 3D rotation, late higher activities related to mental simulation are observed in the right superior parietal lobule (SPL), which is connected to the PMd.

  7. PyMOL mControl: Manipulating Molecular Visualization with Mobile Devices

    ERIC Educational Resources Information Center

    Lam, Wendy W. T.; Siu, Shirley W. I.

    2017-01-01

    Viewing and manipulating three-dimensional (3D) structures in molecular graphics software are essential tasks for researchers and students to understand the functions of molecules. Currently, the way to manipulate a 3D molecular object is mainly based on mouse-and-keyboard control that is usually difficult and tedious to learn. While gesture-based…

  8. A Head in Virtual Reality: Development of A Dynamic Head and Neck Model

    ERIC Educational Resources Information Center

    Nguyen, Ngan; Wilson, Timothy D.

    2009-01-01

    Advances in computer and interface technologies have made it possible to create three-dimensional (3D) computerized models of anatomical structures for visualization, manipulation, and interaction in a virtual 3D environment. In the past few decades, a multitude of digital models have been developed to facilitate complex spatial learning of the…

  9. The Effectiveness of Physical Models in Teaching Anatomy: A Meta-Analysis of Comparative Studies

    ERIC Educational Resources Information Center

    Yammine, Kaissar; Violato, Claudio

    2016-01-01

    There are various educational methods used in anatomy teaching. While three dimensional (3D) visualization technologies are gaining ground due to their ever-increasing realism, reports investigating physical models as a low-cost 3D traditional method are still the subject of considerable interest. The aim of this meta-analysis is to quantitatively…

  10. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models

    PubMed Central

    Bilgin, Cemal Cagatay; Fontenay, Gerald; Cheng, Qingsu; Chang, Hang; Han, Ju; Parvin, Bahram

    2016-01-01

    BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap or as phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation. PMID:26978075

  11. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision.

    PubMed

    Van Dromme, Ilse C; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-04-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.

  12. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision

    PubMed Central

    Van Dromme, Ilse C.; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-01-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams. PMID:27082854

  13. Three-Dimensional Display Technologies for Anatomical Education: A Literature Review

    NASA Astrophysics Data System (ADS)

    Hackett, Matthew; Proctor, Michael

    2016-08-01

    Anatomy is a foundational component of biological sciences and medical education and is important for a variety of clinical tasks. To augment current curriculum and improve students' spatial knowledge of anatomy, many educators, anatomists, and researchers use three-dimensional (3D) visualization technologies. This article reviews 3D display technologies and their associated assessments for anatomical education. In the first segment, the review covers the general function of displays employing 3D techniques. The second segment of the review highlights the use and assessment of 3D technology in anatomical education, focusing on factors such as knowledge gains, student perceptions, and cognitive load. The review found 32 articles on the use of 3D displays in anatomical education and another 38 articles on the assessment of 3D displays. The review shows that the majority (74 %) of studies indicate that the use of 3D is beneficial for many tasks in anatomical education, and that student perceptions are positive toward the technology.

  14. Surgical planning for radical prostatectomies using three-dimensional visualization and a virtual reality display system

    NASA Astrophysics Data System (ADS)

    Kay, Paul A.; Robb, Richard A.; King, Bernard F.; Myers, R. P.; Camp, Jon J.

    1995-04-01

    Thousands of radical prostatectomies for prostate cancer are performed each year. Radical prostatectomy is a challenging procedure because of anatomical variability and the adjacency of critical structures, including the external urinary sphincter and the neurovascular bundles that subserve erectile function. As a result, there are significant risks of urinary incontinence and impotence following the procedure. Preoperative interaction with three-dimensional visualizations of the important anatomical structures may allow the surgeon to understand the individual anatomical relationships of each patient, and such understanding may decrease the rate of morbidities, especially for surgeons in training. Patient-specific anatomic data can be obtained from preoperative 3D MRI examinations of the prostate gland using endorectal coils and phased-array multicoils. The volumes of the important structures can then be segmented using interactive image-editing tools and displayed using 3D surface-rendering algorithms on standard workstations. Anatomic relationships can be visualized using surface displays, 3D colorwash, and transparency to allow internal visualization of hidden structures. Preoperatively, a surgeon and radiologist can interactively manipulate the 3D visualizations, so that important anatomical relationships can be better visualized and used to plan the surgery. Postoperatively, the 3D displays can be compared with actual surgical experience and pathologic data, and patients can then be followed to assess the incidence of morbidities. More advanced approaches to visualizing these anatomical structures in support of surgical planning will be implemented on virtual reality (VR) display systems. Such realistic displays are "immersive," allowing surgeons to simultaneously see and manipulate the anatomy, to plan the procedure, and to rehearse it in a realistic way. Ultimately, VR systems will be implemented in the operating room (OR) to assist the surgeon in conducting the surgery. Such an implementation will bring to the OR all of the presurgical planning data and rehearsal experience in synchrony with the actual patient and operation, optimizing the effectiveness and outcome of the procedure.

  15. Nerves of Steel: a Low-Cost Method for 3D Printing the Cranial Nerves.

    PubMed

    Javan, Ramin; Davidson, Duncan; Javan, Afshin

    2017-10-01

    Steady-state free precession (SSFP) magnetic resonance imaging (MRI) can demonstrate details down to the cranial nerve (CN) level. High-resolution three-dimensional (3D) visualization can now be performed quickly at the workstation. However, we are still limited by visualization on flat screens. The emerging technologies of rapid prototyping, or 3D printing, overcome this limitation. 3D printing comprises a variety of automated manufacturing techniques that use virtual 3D data sets to fabricate solid forms in a layer-by-layer technique. The complex neuroanatomy of the CNs may be better understood and depicted by the use of highly customizable, advanced 3D printed models. In this technical note, after manually perfecting the segmentation of each CN and the brain stem on each SSFP-MRI image, initial 3D reconstruction was performed. The bony skull base was also reconstructed from computed tomography (CT) data. Autodesk 3D Studio Max, available through a freeware student/educator license, was used to three-dimensionally trace the reconstructed CNs in order to create smooth, graphically designed CNs and to ensure proper fitting of the CNs into their respective neural foramina and fissures. This model was then 3D printed in polyamide through a commercial online service. Two different methods are discussed for the key segmentation and 3D reconstruction steps: either using professional commercial software, i.e., Materialise Mimics, or utilizing a combination of the widely available software Adobe Photoshop and the freeware software OsiriX Lite.

  16. Supernova Remnant in 3-D

    NASA Image and Video Library

    2009-01-06

    For the first time, a multiwavelength three-dimensional reconstruction of a supernova remnant has been created. This visualization of Cassiopeia A, or Cas A, the result of an explosion approximately 330 years ago, uses data from several NASA telescopes.

  17. Three dimensional fabric evolution of sheared sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Alsidqi; Alshibli, Khalid

    2012-10-24

    Granular particles undergo translation and rolling when they are sheared. This paper presents a three-dimensional (3D) experimental assessment of fabric evolution of sheared sand at the particle level. An F-75 Ottawa sand specimen, measuring 9.5 mm in diameter and 20 mm in height, was tested under an axisymmetric triaxial loading condition. The quantitative evaluation was conducted by analyzing 3D high-resolution x-ray synchrotron micro-tomography images of the specimen at eight axial strain levels. The analyses included visualization of particle translation and rotation, and quantification of fabric orientation as shearing continued. Representative individual particles were successfully tracked and visualized to assess the mode of interaction between them. This paper discusses fabric evolution and compares the evolution of particles within and outside the shear band as shearing continues. Changes in particle orientation distributions are presented using fabric histograms and fabric tensors.
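    The fabric tensor referred to above is conventionally the second-order tensor F = (1/N) Σᵢ nᵢnᵢᵀ built from unit vectors along each particle's long axis. The following NumPy sketch shows that standard computation on synthetic orientations; it is an illustration of the general definition, not the authors' own analysis code.

```python
import numpy as np

def fabric_tensor(orientations):
    """Second-order fabric tensor F = (1/N) * sum_i n_i n_i^T,
    computed from unit vectors along each particle's long axis."""
    n = np.array(orientations, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)  # normalize each vector
    return np.einsum('ij,ik->jk', n, n) / len(n)

# Synthetic example: three particles aligned with the coordinate axes
# (an isotropic fabric), purely for illustration.
axes = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
F = fabric_tensor(axes)
print(F)  # diagonal matrix; the trace of a fabric tensor is always 1
```

    An anisotropic fabric (particles clustering around one direction) shows up as unequal eigenvalues of F, which is how the degree of preferred orientation inside and outside a shear band can be compared quantitatively.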

  18. EpitopeViewer: a Java application for the visualization and analysis of immune epitopes in the Immune Epitope Database and Analysis Resource (IEDB).

    PubMed

    Beaver, John E; Bourne, Philip E; Ponomarenko, Julia V

    2007-02-21

    Structural information about epitopes, particularly the three-dimensional (3D) structures of antigens in complex with immune receptors, presents a valuable source of data for immunology. This information is available in the Protein Data Bank (PDB) and provided in curated form by the Immune Epitope Database and Analysis Resource (IEDB). With continued growth in these data and their importance for understanding molecular-level interactions of immunological interest, there is a need for new specialized molecular visualization and analysis tools. The EpitopeViewer is a platform-independent Java application for visualizing the three-dimensional structure and sequence of epitopes and analyzing their interactions with antigen-specific receptors of the immune system (antibodies, T cell receptors, and MHC molecules). The viewer renders both 3D views and two-dimensional plots of intermolecular interactions between the antigen and receptor(s) by reading curated data from the IEDB and/or data calculated on the fly from atom coordinates in the PDB. The 3D views and associated interactions can be saved for future use and publication. The EpitopeViewer can be accessed from the IEDB Web site http://www.immuneepitope.org through the quick link 'Browse Records by 3D Structure.' It is designed for, and has been tested by, immunologists with little or no training in molecular graphics, and can be launched from most popular Web browsers without user intervention. A Java Runtime Environment (JRE) 1.4.2 or higher is required.
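    Intermolecular interactions of the kind computed on the fly from atom coordinates are commonly defined by a heavy-atom distance cutoff (often around 4 Å). A minimal NumPy sketch under that assumption, using toy coordinates rather than a real PDB entry; it illustrates the general idea, not EpitopeViewer's actual algorithm.

```python
import numpy as np

def contact_pairs(antigen_xyz, receptor_xyz, cutoff=4.0):
    """Return index pairs (i, j) of antigen/receptor atoms closer than
    `cutoff` angstroms -- a common heavy-atom definition of a contact."""
    a = np.asarray(antigen_xyz, dtype=float)[:, None, :]   # (Na, 1, 3)
    r = np.asarray(receptor_xyz, dtype=float)[None, :, :]  # (1, Nr, 3)
    d = np.linalg.norm(a - r, axis=-1)                     # (Na, Nr) distances
    return [(int(i), int(j)) for i, j in zip(*np.nonzero(d < cutoff))]

# Toy coordinates (hypothetical, not taken from any real structure):
antigen  = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
receptor = [[3.0, 0.0, 0.0], [20.0, 0.0, 0.0]]
print(contact_pairs(antigen, receptor))  # [(0, 0)]
```

    The resulting pair list is exactly the data needed to drive a 2D interaction plot alongside a 3D structure view.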

  19. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  20. bioWeb3D: an online webGL 3D data visualisation tool

    PubMed Central

    2013-01-01

    Background Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. While existing software can provide all the functionality necessary to represent and manipulate biological 3D datasets, very few packages are easily accessible (browser-based), cross-platform, and usable by non-expert users. Results An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Built on the WebGL library Three.js, written in JavaScript, bioWeb3D allows the simultaneous visualisation of multiple large datasets input via a simple JSON, XML, or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Conclusions Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables quick and intuitive representation of reasonably large 3D datasets. PMID:23758781
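    The idea of feeding named 3D point datasets to a viewer from a simple JSON file can be sketched as follows. The `datasets`/`points` layout here is an assumption for illustration only; it does not reproduce bioWeb3D's actual input schema.

```python
import json

# Hypothetical minimal JSON layout: named datasets of [x, y, z] points.
doc = json.loads("""
{
  "datasets": [
    {"name": "cells", "points": [[0, 0, 0], [1, 2, 3], [4, 5, 6]]}
  ]
}
""")

for ds in doc["datasets"]:
    xs, ys, zs = zip(*ds["points"])  # split rows into coordinate columns
    print(ds["name"], len(ds["points"]), "z-range:", min(zs), "to", max(zs))
```

    A browser-based tool would hand such coordinate arrays to its WebGL layer; the point of the format is that a spreadsheet export or a short script suffices to produce it.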

  1. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning1

    PubMed Central

    Gee, Carole T.

    2013-01-01

    • Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495

  2. Three- and four-dimensional reconstruction of intra-cardiac anatomy from two-dimensional magnetic resonance images.

    PubMed

    Miquel, M E; Hill, D L G; Baker, E J; Qureshi, S A; Simon, R D B; Keevil, S F; Razavi, R S

    2003-06-01

    The present study was designed to evaluate the feasibility and clinical usefulness of three-dimensional (3D) reconstruction of intra-cardiac anatomy from a series of two-dimensional (2D) MR images using commercially available software. Sixteen patients (eight with structurally normal hearts who were due to have catheter radio-frequency ablation of atrial tachyarrhythmias, and eight with atrial septal defects (ASD) due for trans-catheter closure) and two volunteers were imaged at 1 T. For each patient, a series of ECG-triggered images (5 mm thick slices, 2-3 mm apart) was acquired during breath holding. Depending on image quality, T1- or T2-weighted spin-echo images or gradient-echo cine images were used. The 3D reconstruction was performed off-line: the blood pools within the cardiac chambers and great vessels were semi-automatically segmented, and their outer surfaces were extracted using a marching cubes algorithm and rendered. Intra- and inter-observer variability, the effect of breath-hold position, and differences between pulse sequences were assessed by imaging a volunteer. The 3D reconstructions were assessed by three cardiologists and compared with the 2D MR images and with 2D and 3D trans-esophageal and intra-cardiac echocardiography obtained during interventions. In every case, an anatomically detailed 3D volume was obtained. In the two patients where a 3 mm interval between slices was used, the resolution was not as good, but it was still possible to visualize all the major anatomical structures. Spin-echo images led to more detailed reconstructions than gradient-echo images; however, gradient-echo images are easier to segment because of their greater contrast. Furthermore, because images were acquired at ten or more points in the cardiac cycle for every slice, it was possible to reconstruct a cine loop and, for example, to visualize the evolution of the size and margins of the ASD during the cardiac cycle. 3D reconstruction proved to be an effective way to assess the relationships between the different parts of the cardiac anatomy, and the technique was useful in planning interventions in these patients.
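    The semi-automatic blood-pool segmentation step can be illustrated with a thresholding-plus-connected-components sketch in SciPy. This stands in for the commercial software the authors used, and the subsequent marching cubes surface extraction is not reproduced here; the synthetic volume and threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

def largest_bright_component(volume, threshold):
    """Blood-pool-style segmentation sketch: threshold the image,
    then keep only the largest 3D connected component."""
    mask = volume > threshold
    labels, n = ndimage.label(mask)  # face-connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))  # voxels per label
    return labels == (np.argmax(sizes) + 1)

# Synthetic volume: one large bright blob (the "blood pool")
# plus one small bright speckle that should be discarded.
vol = np.zeros((20, 20, 20))
vol[2:10, 2:10, 2:10] = 100.0
vol[15:17, 15:17, 15:17] = 100.0
seg = largest_bright_component(vol, 50.0)
print(seg.sum())  # only the 8x8x8 blob survives
```

    In a real pipeline the resulting binary mask would be passed to a marching cubes implementation to produce the renderable surface mesh.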

  3. Volumetric imaging of supersonic boundary layers using filtered Rayleigh scattering background suppression

    NASA Technical Reports Server (NTRS)

    Forkey, Joseph N.; Lempert, Walter R.; Bogdonoff, Seymour M.; Miles, Richard B.; Russell, G.

    1995-01-01

    We demonstrate the use of Filtered Rayleigh Scattering and a 3D reconstruction technique to interrogate the highly three-dimensional flow field inside a supersonic inlet model. A 3 inch by 3 inch by 2.5 inch volume is reconstructed, yielding 3D visualizations of the crossing shock waves and of the boundary layer. In this paper we discuss the details of the techniques used and present the reconstructed 3D images.

  4. First Clinical Applications of a High-Definition Three-Dimensional Exoscope in Pediatric Neurosurgery

    PubMed Central

    Beez, Thomas; Munoz-Bendix, Christopher; Beseoglu, Kerim; Steiger, Hans-Jakob; Ahmadi, Sebastian A

    2018-01-01

    The ideal visualization tools in microneurosurgery should provide magnification, illumination, wide fields of view, ergonomics, and unobstructed access to the surgical field. The operative microscope was the predominant innovation in modern neurosurgery. Recently, a high-definition three-dimensional (3D) exoscope was developed. We describe the first applications in pediatric neurosurgery. The VITOM 3D exoscope (Karl Storz GmbH, Tuttlingen, Germany) was used in pediatric microneurosurgical operations, along with an OPMI PENTERO operative microscope (Carl Zeiss AG, Jena, Germany). Experiences were retrospectively evaluated with five-level Likert items regarding ease of preparation, image definition, magnification, illumination, field of view, ergonomics, accessibility of the surgical field, and general user-friendliness. Three operations were performed: supratentorial open biopsy in the supine position, infratentorial brain tumor resection in the park bench position, and myelomeningocele closure in the prone position. While preparation and image definition were rated equal for microscope and exoscope, the microscope’s field of view, illumination, and user-friendliness were considered superior, while the advantages of the exoscope were seen in ergonomics and the accessibility of the surgical field. No complications attributed to visualization mode occurred. In our experience, the VITOM 3D exoscope is an innovative visualization tool with advantages over the microscope in ergonomics and the accessibility of the surgical field. However, improvements were deemed necessary with regard to field of view, illumination, and user-friendliness. While the debate of a “perfect” visualization modality is influenced by personal preference, this novel visualization device has the potential to become a valuable tool in the neurosurgeon’s armamentarium. PMID:29581920

  5. First Clinical Applications of a High-Definition Three-Dimensional Exoscope in Pediatric Neurosurgery.

    PubMed

    Beez, Thomas; Munoz-Bendix, Christopher; Beseoglu, Kerim; Steiger, Hans-Jakob; Ahmadi, Sebastian A

    2018-01-24

    The ideal visualization tools in microneurosurgery should provide magnification, illumination, wide fields of view, ergonomics, and unobstructed access to the surgical field. The operative microscope was the predominant innovation in modern neurosurgery. Recently, a high-definition three-dimensional (3D) exoscope was developed. We describe the first applications in pediatric neurosurgery. The VITOM 3D exoscope (Karl Storz GmbH, Tuttlingen, Germany) was used in pediatric microneurosurgical operations, along with an OPMI PENTERO operative microscope (Carl Zeiss AG, Jena, Germany). Experiences were retrospectively evaluated with five-level Likert items regarding ease of preparation, image definition, magnification, illumination, field of view, ergonomics, accessibility of the surgical field, and general user-friendliness. Three operations were performed: supratentorial open biopsy in the supine position, infratentorial brain tumor resection in the park bench position, and myelomeningocele closure in the prone position. While preparation and image definition were rated equal for microscope and exoscope, the microscope's field of view, illumination, and user-friendliness were considered superior, while the advantages of the exoscope were seen in ergonomics and the accessibility of the surgical field. No complications attributed to visualization mode occurred. In our experience, the VITOM 3D exoscope is an innovative visualization tool with advantages over the microscope in ergonomics and the accessibility of the surgical field. However, improvements were deemed necessary with regard to field of view, illumination, and user-friendliness. While the debate of a "perfect" visualization modality is influenced by personal preference, this novel visualization device has the potential to become a valuable tool in the neurosurgeon's armamentarium.

  6. 3DProIN: Protein-Protein Interaction Networks and Structure Visualization.

    PubMed

    Li, Hui; Liu, Chunmei

    2014-06-14

    3DProIN is a computational tool for visualizing protein-protein interaction networks in both two-dimensional (2D) and three-dimensional (3D) views. It models protein-protein interactions as a graph and explores the biologically relevant features of the tertiary structure of each protein in the network. Properties such as the color, shape, and name of each node (protein) can be edited in either the 2D or the 3D view. 3DProIN is implemented using the Java 3D and C programming languages, and an internet crawl technique is used to parse dynamically retrieved protein interactions from the Protein Data Bank (PDB). It is a Java applet component embedded in a web page and can be used on different platforms, including Linux, Mac, and Windows, with web browsers such as Firefox, Internet Explorer, Chrome, and Safari. It was also converted into a Mac app and submitted to the App Store as a free app; Mac users can likewise download the app from our website. 3DProIN is available for academic research at http://bicompute.appspot.com.
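    The underlying data model, proteins as graph nodes with editable display properties and interactions as edges, can be sketched in plain Python. This is an illustration of that general structure, not 3DProIN's actual code, and the protein identifiers are made up.

```python
class PPINetwork:
    """Minimal protein-protein interaction network with per-node
    display attributes (color, shape, name) and undirected edges."""

    def __init__(self):
        self.nodes = {}     # protein id -> attribute dict
        self.edges = set()  # frozenset pairs of interacting proteins

    def add_protein(self, pid, **attrs):
        # Default display properties can be overridden per protein.
        self.nodes[pid] = {"color": "gray", "shape": "sphere", **attrs}

    def add_interaction(self, a, b):
        self.edges.add(frozenset((a, b)))

    def neighbors(self, pid):
        return sorted(p for e in self.edges if pid in e for p in e if p != pid)

net = PPINetwork()
net.add_protein("P1", color="red")   # illustrative identifiers only
net.add_protein("P2")
net.add_protein("P3")
net.add_interaction("P1", "P2")
net.add_interaction("P1", "P3")
print(net.neighbors("P1"))  # ['P2', 'P3']
```

    A viewer built on such a model can recolor or rename a node by mutating its attribute dict, then redraw the 2D or 3D scene from the same graph.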

  7. Visualizing morphogenesis in transgenic zebrafish embryos using BODIPY TR methyl ester dye as a vital counterstain for GFP.

    PubMed

    Cooper, Mark S; Szeto, Daniel P; Sommers-Herivel, Greg; Topczewski, Jacek; Solnica-Krezel, Lila; Kang, Hee-Chol; Johnson, Iain; Kimelman, David

    2005-02-01

    Green fluorescent protein (GFP) technology is rapidly advancing the study of morphogenesis, by allowing researchers to specifically focus on a subset of labeled cells within the living embryo. However, when imaging GFP-labeled cells using confocal microscopy, it is often essential to simultaneously visualize all of the cells in the embryo using dual-channel fluorescence to provide an embryological context for the cells expressing GFP. Although various counterstains are available, part of their fluorescence overlaps with the GFP emission spectra, making it difficult to clearly identify the cells expressing GFP. In this study, we report that a new fluorophore, BODIPY TR methyl ester dye, serves as a versatile vital counterstain for visualizing the cellular dynamics of morphogenesis within living GFP transgenic zebrafish embryos. The fluorescence of this photostable synthetic dye is spectrally separate from GFP fluorescence, allowing dual-channel, three-dimensional (3D) and four-dimensional (4D) confocal image data sets of living specimens to be easily acquired. These image data sets can be rendered subsequently into uniquely informative 3D and 4D visualizations using computer-assisted visualization software. We discuss a variety of immediate and potential applications of BODIPY TR methyl ester dye as a vital visualization counterstain for GFP in transgenic zebrafish embryos. Copyright 2004 Wiley-Liss, Inc.

  8. Training Performance of Laparoscopic Surgery in Two- and Three-Dimensional Displays.

    PubMed

    Lin, Chiuhsiang Joe; Cheng, Chih-Feng; Chen, Hung-Jen; Wu, Kuan-Ying

    2017-04-01

    This research investigated differences in the effects of a state-of-the-art stereoscopic 3-dimensional (3D) display and a traditional 2-dimensional (2D) display in simulated laparoscopic surgery over a longer duration than in previous publications, and studied the learning effects of the two display systems on novices. A randomized experiment with two factors, image dimension and image sequence, was conducted to investigate differences in mean movement time, mean error frequency, NASA-TLX cognitive workload, and visual fatigue in pegboard and circle-tracing tasks. The stereoscopic 3D display had advantages in mean movement time (P < .001 and P = .002) and mean error frequency (P = .010 and P = .008) in both tasks. There were no significant differences in objective visual fatigue (P = .729 and P = .422) or in NASA-TLX cognitive workload (P = .605 and P = .937) between the 3D and 2D displays on either task. As for the learning effect, participants who used the stereoscopic 3D display first had shorter mean movement times in the 2D display environment on both the pegboard (P = .011) and circle-tracing (P = .017) tasks. These results suggest that a stereoscopic system does not produce higher objective visual fatigue or cognitive workload than a 2D system, and that it might reduce performance time and increase the precision of surgical operations. In addition, the learning efficiency that the stereoscopic system conferred on the novices in this study demonstrates its value for training and education in laparoscopic surgery.

  9. Three-dimensional visualization of gammaherpesvirus life cycle in host cells by electron tomography.

    PubMed

    Peng, Li; Ryazantsev, Sergey; Sun, Ren; Zhou, Z Hong

    2010-01-13

    Gammaherpesviruses are etiologically associated with human tumors. A three-dimensional (3D) examination of their life cycle in the host is lacking, significantly limiting our understanding of the structural and molecular basis of virus-host interactions. Here, we report the first 3D visualization of key stages of the murine gammaherpesvirus 68 life cycle in NIH 3T3 cells, including viral attachment, entry, assembly, and egress, by dual-axis electron tomography. In particular, we revealed the transient processes of incoming capsids injecting viral DNA through nuclear pore complexes and nascent DNA being packaged into progeny capsids in vivo as a spool coaxial with the putative portal vertex. We discovered that intranuclear invagination of both nuclear membranes is involved in nuclear egress of herpesvirus capsids. Taken together, our results provide the structural basis for a detailed mechanistic description of gammaherpesvirus life cycle and also demonstrate the advantage of electron tomography in dissecting complex cellular processes of viral infection.

  10. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, and electron and confocal microscopy. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the available display device is two-dimensional (2D) in nature and that all analysis of multidimensional image data is carried out via the 2D screen of the device. Technologies such as holography and virtual reality do provide a "true 3D screen"; to confine the scope, this presentation will not discuss such approaches.
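
    One of the simplest volume-rendering techniques alluded to above is maximum intensity projection (MIP), which collapses a voxel volume onto the 2D screen by keeping only the brightest voxel along each viewing ray. The sketch below is illustrative only; the tiny synthetic volume and the `mip` helper are not from the record, and a real pipeline would read CT or MR slices instead.

    ```python
    def mip(volume):
        """Maximum intensity projection: collapse a 3D volume (a list of
        2D slices) onto a 2D image by taking the brightest voxel along
        the slice axis."""
        depth = len(volume)
        rows, cols = len(volume[0]), len(volume[0][0])
        return [[max(volume[z][y][x] for z in range(depth))
                 for x in range(cols)]
                for y in range(rows)]

    # Tiny synthetic two-slice "scan"; a bright voxel sits in slice 1.
    volume = [
        [[0, 10], [20, 0]],   # slice 0
        [[5, 90], [0, 30]],   # slice 1
    ]
    print(mip(volume))  # [[5, 90], [20, 30]]
    ```

    Real renderers cast rays at arbitrary angles and interpolate between voxels; projecting straight along the slice axis keeps the idea visible in a few lines.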

  11. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for visualizing computational aerodynamics, applicable to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. The three visualization techniques applied (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that a high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  12. Design and Development of a Framework Based on Ogc Web Services for the Visualization of Three Dimensional Large-Scale Geospatial Data Over the Web

    NASA Astrophysics Data System (ADS)

    Roccatello, E.; Nozzi, A.; Rumor, M.

    2013-05-01

    This paper illustrates the key concepts behind the design and development of a framework, based on OGC services, capable of visualizing 3D large-scale geospatial data streamed over the web. WebGISes are traditionally bound to a two-dimensional, simplified representation of reality, and though they successfully address the lack of flexibility and simplicity of traditional desktop clients, much effort is still needed to reach desktop GIS features such as 3D visualization. The motivations behind this work lie in the widespread availability of OGC Web Services inside government organizations and in web browsers' support for the HTML5 and WebGL standards. This delivers an improved user experience, similar to desktop applications, and allows traditional WebGIS features to be augmented with a 3D visualization framework. This work can be seen as an extension of the Cityvu project, started in 2008 with the aim of a plug-in-free OGC CityGML viewer. The resulting framework has also been integrated into existing 3D GIS software products and will be made available in the coming months.

  13. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson's Disease.

    PubMed

    Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J A

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigated the usability of 3D augmented reality visual cues delivered by smart glasses, in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome, in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at end-of-dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and the percentage of time spent with FOG were rated from video recordings. Stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 of 25 participants. There were no statistically significant differences in the number of FOG episodes or the percentage of time spent with FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects were found for the other conditions. Participants preferred the metronome most and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than the peripheral visual field. Future smart glasses will need to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability.

  14. Understanding Immersivity: Image Generation and Transformation Processes in 3D Immersive Environments

    PubMed Central

    Kozhevnikov, Maria; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard and Metzler (1971) mental rotation (MR) task across the following three types of visual presentation environments; traditional 2D non-immersive (2DNI), 3D non-immersive (3DNI – anaglyphic glasses), and 3DI (head mounted display with position and head orientation tracking). In Experiment 2, we examined how the use of different backgrounds affected MR processes within the 3DI environment. In Experiment 3, we compared electroencephalogram data recorded while participants were mentally rotating visual-spatial images presented in 3DI vs. 2DNI environments. Overall, the findings of the three experiments suggest that visual-spatial processing is different in immersive and non-immersive environments, and that immersive environments may require different image encoding and transformation strategies than the two other non-immersive environments. Specifically, in a non-immersive environment, participants may utilize a scene-based frame of reference and allocentric encoding whereas immersive environments may encourage the use of a viewer-centered frame of reference and egocentric encoding. These findings also suggest that MR performed in laboratory conditions using a traditional 2D computer screen may not reflect spatial processing as it would occur in the real world. PMID:22908003

  15. A quantitative evaluation of the three dimensional reconstruction of patients' coronary arteries.

    PubMed

    Klein, J L; Hoff, J G; Peifer, J W; Folks, R; Cooke, C D; King, S B; Garcia, E V

    1998-04-01

    Through extensive training and experience, angiographers learn to mentally reconstruct the three-dimensional (3D) relationships of the coronary arterial branches. Graphic computer technology can help angiographers visualize the coronary 3D structure more quickly from limited initial views and then help determine additional useful views by predicting subsequent angiograms before they are obtained. A new computer method for facilitating 3D reconstruction and visualization of human coronary arteries was evaluated by reconstructing biplane left coronary angiograms from 30 patients. The accuracy of the reconstruction was assessed in two ways: 1) by comparing the vessel centerlines of the actual angiograms with the centerlines of a 2D projection of the 3D model projected at the exact angle of the actual angiogram; and 2) by comparing two 3D models generated from different simultaneous pairs of angiograms. The inter- and intraobserver variability of reconstruction was evaluated by mathematically comparing the 3D model centerlines of repeated reconstructions. The average absolute corrected displacement of 14,662 vessel centerline points in 2D from 30 patients was 1.64 +/- 2.26 mm. The average corrected absolute displacement of 3D models generated from different biplane pairs was 7.08 +/- 3.21 mm. The intraobserver variability of absolute 3D corrected displacement was 5.22 +/- 3.39 mm. The interobserver variability was 6.6 +/- 3.1 mm. The centerline analyses show that the reconstruction algorithm is mathematically accurate and reproducible. The figures presented in this report put these measurement errors into clinical perspective, showing that they yield an accurate representation of the clinically relevant information seen on the actual angiograms. These data show that this technique can be clinically useful by accurately displaying in three dimensions the complex relationships of the branches of the coronary arterial tree.
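
    The centerline comparisons reported above reduce to a mean point-to-point distance between matched centerlines. A rough illustration follows; the `mean_displacement` helper and the sample points are hypothetical, and the published figures also apply a correction not modeled here.

    ```python
    import math

    def mean_displacement(centerline_a, centerline_b):
        """Mean absolute displacement between corresponding points of two
        vessel centerlines (sequences of (x, y) or (x, y, z) tuples)."""
        assert len(centerline_a) == len(centerline_b)
        dists = [math.dist(p, q) for p, q in zip(centerline_a, centerline_b)]
        return sum(dists) / len(dists)

    # A straight 3-point centerline shifted by 1 mm: mean displacement 1.0 mm.
    a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
    b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
    print(mean_displacement(a, b))  # 1.0
    ```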

  16. Grid Work

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Pointwise Inc.'s, Gridgen Software is a system for the generation of 3D (three dimensional) multiple block, structured grids. Gridgen is a visually-oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.

  17. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    NASA Astrophysics Data System (ADS)

    Azpiroz, J.; Krafft, J.; Cadena, M.; Rodríguez, A. O.

    2006-09-01

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity in characterizing the morphology of the spine in dogs. This technique is considered particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. Three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German Shepherd dog's spinal cord were acquired, stacked, and digitally processed into a volume image. All images were acquired using standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that are otherwise not visible in two-dimensional images. The combination of an imaging modality such as CT with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.

  18. Three-dimensional vision enhances task performance independently of the surgical method.

    PubMed

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25 % to 30 % longer to complete and more complex tasks took 75 % longer with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  19. Two-stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven

    During the last decades, electronic textual information has become the world's largest and most important information source. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs, and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve, and combine existing individual approaches into an overall framework that supports topological analysis of high-dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high-dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high-dimensional information space to both two-dimensional (2-D) and three-dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.
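
    The tf-idf document-term weighting that produces the high-dimensional point clouds analyzed above can be sketched in a few lines. This is an illustrative raw-count variant; the toy corpus and the `tf_idf` helper are invented for the example, and production systems typically use smoothed or length-normalized weightings.

    ```python
    import math
    from collections import Counter

    def tf_idf(docs):
        """Weight each document's terms by raw term frequency times the
        log inverse document frequency, yielding one sparse
        high-dimensional vector per document."""
        n = len(docs)
        df = Counter()
        for doc in docs:
            df.update(set(doc))          # document frequency per term
        vectors = []
        for doc in docs:
            tf = Counter(doc)            # term frequency in this document
            vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
        return vectors

    docs = [["grid", "flow", "flow"],
            ["grid", "mesh"],
            ["flow", "solver"]]
    weights = tf_idf(docs)
    # "mesh" occurs in 1 of 3 documents, so its idf is log(3);
    # "grid" occurs in 2 of 3, so its idf is the smaller log(3/2).
    ```

    Each document becomes a point in a space with one axis per vocabulary term, which is exactly why the dimensionality grows so quickly for real corpora.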

  20. Chemozart: a web-based 3D molecular structure editor and visualizer platform.

    PubMed

    Mohebifar, Mohamad; Sajadi, Fatemehsadat

    2015-01-01

    Chemozart is a 3D molecule editor and visualizer built on top of native web components. It offers an easy-to-access service, a user-friendly graphical interface, and a modular design. It is a client-centric web application which communicates with the server via a representational state transfer (REST) style web service. Both the client-side and server-side applications are written in JavaScript. A combination of JavaScript and HTML is used to draw three-dimensional structures of molecules, with WebGL providing the three-dimensional visualization and CSS3 and HTML5 composing the user-friendly interface. More than 30 packages are used to compose the application, giving it enough flexibility to be extended. Molecular structures can be drawn on all types of platforms, including mobile devices, and no installation is required: the application can be accessed through the internet. It can be extended on both the server side and the client side by implementing modules in JavaScript. Molecular compounds are drawn on the HTML5 Canvas element using a WebGL context. Chemozart is a chemical platform which is powerful, flexible, and easy to access. It provides an online web-based tool for chemical visualization, along with result-oriented optimization for a cloud-based API (application programming interface). JavaScript libraries which allow the creation of web pages containing interactive three-dimensional molecular structures have also been made available. The application has been released under the Apache 2 License and is available from the project website https://chemozart.com.

  1. Using 3D LIF to Investigate and Improve Performance of a Multichamber Ozone Contactor

    EPA Science Inventory

    Three-dimensional laser-induced fluorescence (3DLIF) was applied to visualize and quantitatively analyze hydrodynamics and mixing in a multi-chamber ozone contactor, the most widely used design for water disinfection. The results suggested that the mixing was characterized by ext...

  2. Three-Dimensional Displays In The Future Flight Station

    NASA Astrophysics Data System (ADS)

    Bridges, Alan L.

    1984-10-01

    This review paper summarizes the development and applications of computer techniques for the representation of three-dimensional data in the future flight station. It covers the development of the Lockheed-NASA Advanced Concepts Flight Station (ACFS) research simulators. These simulators contain: a Pilot's Desk Flight Station (PDFS) with five 13-inch diagonal color cathode ray tubes on the main instrument panel; a computer-generated day and night visual system; a six-degree-of-freedom motion base; and a computer complex. This paper reviews current research, development, and evaluation of easily modifiable display systems and software requirements for three-dimensional displays that may be developed for the PDFS. This includes the analysis and development of a 3-D representation of the entire flight profile. This 3-D flight path, or "Highway-in-the-Sky", will use motion and perspective cues to tightly couple the human responses of the pilot to the aircraft control systems. The use of custom logic, e.g., graphics engines, may provide the processing power and architecture required for 3-D computer-generated imagery (CGI) or visual scene simulation (VSS). Diffraction or holographic head-up displays (HUDs) will also be integrated into the ACFS simulator to permit research on the requirements and use of these "out-the-window" projection systems. Future research may include the retrieval of high-resolution, perspective-view terrain maps which could then be overlaid with current weather information or other selectable cultural features.

  3. Visual completion from 2D cross-sections: Implications for visual theory and STEM education and practice.

    PubMed

    Gagnier, Kristin Michod; Shipley, Thomas F

    2016-01-01

    Accurately inferring three-dimensional (3D) structure from only a cross-section through that structure is not possible. However, many observers seem to be unaware of this fact. We present evidence for a 3D amodal completion process that may explain this phenomenon and provide new insights into how the perceptual system processes 3D structures. Across four experiments, observers viewed cross-sections of common objects and reported whether regions visible on the surface extended into the object. If they reported that the region extended, they were asked to indicate the orientation of extension or that the 3D shape was unknowable from the cross-section. Across Experiments 1, 2, and 3, participants frequently inferred 3D forms from surface views, showing a specific prior to report that regions in the cross-section extend straight back into the object, with little variance in orientation. In Experiment 3, we examined whether 3D visual inferences made from cross-sections are similar to other cases of amodal completion by examining how the inferences were influenced by observers' knowledge of the objects. Finally, in Experiment 4, we demonstrate that these systematic visual inferences are unlikely to result from demand characteristics or response biases. We argue that these 3D visual inferences have been largely unrecognized by the perception community, and have implications for models of 3D visual completion and science education.

  4. Three Dimensional Characterization of Tin Crystallography and Cu6Sn5 Intermetallics in Solder Joints by Multiscale Tomography

    NASA Astrophysics Data System (ADS)

    Kirubanandham, A.; Lujan-Regalado, I.; Vallabhaneni, R.; Chawla, N.

    2016-11-01

    Decreasing pitch size in electronic packaging has resulted in a drastic decrease in solder volumes. Sn grain crystallography and the fraction of intermetallic compounds (IMCs) in small-scale solder joints evolve very differently at these smaller length scales, and a cross-sectional study limits the morphological analysis of microstructural features to two dimensions. This study uses a serial sectioning technique in conjunction with electron backscatter diffraction to investigate the crystallographic orientation of both Sn grains and Cu6Sn5 IMCs in Cu/pure Sn/Cu solder joints in three dimensions (3D). Quantification of grain aspect ratio is affected by local cooling rate differences within the solder volume. Backscatter electron imaging and focused ion beam serial sectioning enabled the visualization of the morphology of both nanosized Cu6Sn5 IMCs and hollow hexagonal Cu6Sn5 IMCs in 3D. Quantification and visualization of microstructural features in 3D thus enable us to better understand the microstructure and deformation mechanics within these small-scale solder joints.

  5. In memoriam: Fumio Okano, innovator of 3D display

    NASA Astrophysics Data System (ADS)

    Arai, Jun

    2014-06-01

    Dr. Fumio Okano, a well-known pioneer and innovator of three-dimensional (3D) displays, passed away on 26 November 2013 in Kanagawa, Japan, at the age of 61. Okano joined Japan Broadcasting Corporation (NHK) in Tokyo in 1978. In 1981, he began researching high-definition television (HDTV) cameras, HDTV systems, ultrahigh-definition television systems, and 3D televisions at NHK Science and Technology Research Laboratories. His publications have been frequently cited by other researchers. Okano served eight years as chair of the annual SPIE conference on Three- Dimensional Imaging, Visualization, and Display and another four years as co-chair. Okano's leadership in this field will be greatly missed and he will be remembered for his enduring contributions and innovations in the field of 3D displays. This paper is a summary of the career of Fumio Okano, as well as a tribute to that career and its lasting legacy.

  6. Real-time three-dimensional transesophageal echocardiography in the assessment of mechanical prosthetic mitral valve ring thrombosis.

    PubMed

    Ozkan, Mehmet; Gürsoy, Ozan Mustafa; Astarcıoğlu, Mehmet Ali; Gündüz, Sabahattin; Cakal, Beytullah; Karakoyun, Süleyman; Kalçık, Macit; Kahveci, Gökhan; Duran, Nilüfer Ekşi; Yıldız, Mustafa; Cevik, Cihan

    2013-10-01

    Although 2-dimensional (2D) transesophageal echocardiography (TEE) is the gold standard for the diagnosis of prosthetic valve thrombosis, nonobstructive clots located on mitral valve rings can be missed. Real-time 3-dimensional (3D) TEE has incremental value in the visualization of mitral prostheses. The aim of this study was to investigate the utility of real-time 3D TEE in the diagnosis of mitral prosthetic ring thrombosis. The clinical outcomes of these patients in relation to real-time 3D transesophageal echocardiographic findings were analyzed. Of 1,263 patients who underwent echocardiographic studies, 174 patients (37 men, 137 women) with mitral ring thrombosis detected by real-time 3D TEE constituted the main study population. Patients were followed prospectively on oral anticoagulation for 25 ± 7 months. Eighty-nine patients (51%) had thrombi that were missed on 2D TEE and depicted only on real-time 3D TEE. The remaining cases were partially visualized with 2D TEE but completely visualized with real-time 3D TEE. Thirty-seven patients (21%) had thromboembolism. The mean thickness of the ring thrombosis in patients with thromboembolism was greater than that in patients without thromboembolism (3.8 ± 0.9 vs 2.8 ± 0.7 mm, p <0.001). One hundred fifty-five patients (89%) underwent real-time 3D TEE during follow-up. There were no thrombi in 39 patients (25%); 45 (29%) had regression of thrombi, and there was no change in thrombus size in 68 patients (44%). Thrombus size increased in 3 patients (2%). Thrombosis was confirmed surgically and histopathologically in 12 patients (7%). In conclusion, real-time 3D TEE can detect prosthetic mitral ring thrombosis that could be missed on 2D TEE and cause thromboembolic events. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. 2-Dimensional graphene as a route for emergence of additional dimension nanomaterials.

    PubMed

    Patra, Santanu; Roy, Ekta; Tiwari, Ashutosh; Madhuri, Rashmi; Sharma, Prashant K

    2017-03-15

    Dimensionality has a distinct and impactful significance in innovation, research, and technology. Starting from one dimension, we are now moving toward 3-D visuals and trying to work in this dimension. However, one very innovative and widely applicable nanomaterial, graphene, still holds tremendous potential in its 2-D form alone. In this review, we survey the reported pathways used so far to modify 2-D graphene sheets to make them three-dimensional. The modified graphene has been applied in many fields, such as supercapacitors, sensors, catalysis, and energy storage devices. In addition, we also cover the conversion of 2-D graphene to various other dimensionalities, such as zero-, one-, or three-dimensional nanostructures. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Virtual reality on the web: the potentials of different methodologies and visualization techniques for scientific research and medical education.

    PubMed

    Kling-Petersen, T; Pascher, R; Rydmark, M

    1999-01-01

    Academic and medical imaging increasingly use computer-based 3D reconstruction and/or visualization. Three-dimensional interactive models play a major role in areas such as preclinical medical education, clinical visualization, and medical research. While 3D is comparatively easy to do on high-end workstations, distribution and use of interactive 3D graphics necessitate the use of personal computers and the web. Several new techniques have been demonstrated that provide interactive 3D via a web browser, thereby allowing a limited version of VR to be experienced by a larger majority of students, medical practitioners, and researchers. These techniques include QuickTimeVR2 (QTVR), VRML2, QuickDraw3D, OpenGL, and Java3D. In order to test the usability of the different techniques, Mednet has initiated a number of projects designed to evaluate the potential of 3D techniques for scientific reporting, clinical visualization, and medical education. These include datasets created by manual tracing followed by triangulation, smoothing, and 3D visualization, by MRI, or by high-resolution laser scanning. Preliminary results indicate that both VRML and QTVR fulfill most of the requirements of web-based, interactive 3D visualization, whereas QuickDraw3D is too limited. At present, Java3D has not yet reached a level where in-depth testing is possible. High-resolution laser scanning is an important addition to 3D digitization.

  9. Applications of Three-Dimensional Printing in Surgery.

    PubMed

    Li, Chi; Cheung, Tsz Fung; Fan, Vei Chen; Sin, Kin Man; Wong, Chrisity Wai Yan; Leung, Gilberto Ka Kit

    2017-02-01

    Three-dimensional (3D) printing is a rapidly advancing technology in the field of surgery. This article reviews its contemporary applications in 3 aspects of surgery, namely, surgical planning, implants and prostheses, and education and training. Three-dimensional printing technology can contribute to surgical planning by depicting precise personalized anatomy and thus a potential improvement in surgical outcome. For implants and prostheses, the technology might overcome the limitations of conventional methods, such as visual discrepancy from the recipient's body and mismatched anatomy. In addition, 3D printing technology could be integrated into medical school curricula, supplementing conventional cadaver-based education and training in anatomy and surgery. Future potential applications of 3D printing in surgery, mainly in the areas of skin, nerve, and vascular graft preparation as well as ear reconstruction, are also discussed. Numerous trials and studies are still ongoing. However, scientists and clinicians are still encountering some limitations of the technology, including high cost, long processing time, unsatisfactory mechanical properties, and suboptimal accuracy. These limitations might potentially hamper the applications of this technology in daily clinical practice.

  10. 2-D and 3-D oscillating wing aerodynamics for a range of angles of attack including stall

    NASA Technical Reports Server (NTRS)

    Piziali, R. A.

    1994-01-01

    A comprehensive experimental investigation of the pressure distribution over a semispan wing undergoing pitching motions representative of a helicopter rotor blade was conducted. Testing the wing in the nonrotating condition isolates the three-dimensional (3-D) blade aerodynamic and dynamic stall characteristics from the complications of the rotor blade environment. The test has generated a very complete, detailed, and accurate body of data. These data include static and dynamic pressure distributions, surface flow visualizations, two-dimensional (2-D) airfoil data from the same model and installation, and important supporting blockage and wall pressure distributions. This body of data is sufficiently comprehensive and accurate that it can be used for the validation of rotor blade aerodynamic models over a broad range of the important parameters including 3-D dynamic stall. This data report presents all the cycle-averaged lift, drag, and pitching moment coefficient data versus angle of attack obtained from the instantaneous pressure data for the 3-D wing and the 2-D airfoil. Also presented are examples of the following: cycle-to-cycle variations occurring for incipient or lightly stalled conditions; 3-D surface flow visualizations; supporting blockage and wall pressure distributions; and underlying detailed pressure results.

  11. 3D Feature Extraction for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.
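The extraction step described above (isolating coherent regions of interest from a scalar field) can be sketched for a regular grid; the article targets unstructured grids, so this thresholding-plus-connected-components version using SciPy's `ndimage.label` is only an illustrative stand-in for the paper's algorithms:

```python
import numpy as np
from scipy import ndimage

def extract_regions(field, threshold):
    """Label connected regions where the scalar field exceeds a threshold,
    then quantify each region by its voxel count."""
    mask = field > threshold
    labels, n = ndimage.label(mask)  # default 6-connectivity in 3D
    sizes = ndimage.sum(mask, labels, index=list(range(1, n + 1)))
    return labels, n, sizes

# Two separated "blobs" in a synthetic 3D scalar field
field = np.zeros((10, 10, 10))
field[1:3, 1:3, 1:3] = 1.0
field[6:9, 6:9, 6:9] = 2.0
labels, n, sizes = extract_regions(field, 0.5)
print(n)              # 2 regions
print(sorted(sizes))  # [8.0, 27.0]
```

The labeled array can then feed the classification, abstraction, and tracking stages the abstract mentions.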

  12. 3D visualization of molecular structures in the MOGADOC database

    NASA Astrophysics Data System (ADS)

    Vogt, Natalja; Popov, Evgeny; Rudert, Rainer; Kramer, Rüdiger; Vogt, Jürgen

    2010-08-01

    The MOGADOC database (Molecular Gas-Phase Documentation) is a powerful tool to retrieve information about compounds which have been studied in the gas-phase by electron diffraction, microwave spectroscopy and molecular radio astronomy. Presently the database contains over 34,500 bibliographic references (from the beginning of each method) for about 10,000 inorganic, organic and organometallic compounds and structural data (bond lengths, bond angles, dihedral angles, etc.) for about 7800 compounds. Most of the implemented molecular structures are given in a three-dimensional (3D) presentation. To create or edit and visualize the 3D images of molecules, new tools (special editor and Java-based 3D applet) were developed. Molecular structures in internal coordinates were converted to those in Cartesian coordinates.

  13. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Data Analysis and Visualization; International Research Training Group ``Visualization of Large and Unstructured Data Sets,'' University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
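The clustering step at the core of the framework can be sketched minimally; this is a plain Lloyd's k-means on synthetic 3D "expression" points (the paper's actual clustering, post-processing, and visualization integration are not reproduced, and the deterministic first-k initialization is a simplification):

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Minimal Lloyd's k-means; deterministic init from the first k points."""
    centers = points[:k].copy()
    for _ in range(iters):
        # Assign each point to its nearest center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Move each center to the mean of its assigned points
        for j in range(k):
            if np.any(assign == j):
                centers[j] = points[assign == j].mean(axis=0)
    inertia = ((points - centers[assign]) ** 2).sum()
    return assign, centers, inertia

# Two well-separated synthetic clusters of 3D expression measurements
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(5, 0.1, (50, 3))])
assign, centers, inertia = kmeans(pts, 2)
print(len(set(assign[:50])), len(set(assign[50:])))  # 1 1
```

Scanning k and comparing the resulting inertia (or a visual inspection of cluster boundaries, as the framework advocates) is one simple way to evaluate the number of clusters.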

  14. Visualization and quantification of three-dimensional distribution of yeast in bread dough.

    PubMed

    Maeda, Tatsuro; DO, Gab-Soo; Sugiyama, Junichi; Araki, Tetsuya; Tsuta, Mizuki; Shiraga, Seizaburo; Ueda, Mitsuyoshi; Yamada, Masaharu; Takeya, Koji; Sagara, Yasuyuki

    2009-07-01

    A three-dimensional (3-D) bio-imaging technique was developed for visualizing and quantifying the 3-D distribution of yeast in frozen bread dough samples in accordance with the progress of the mixing process of the samples, applying cell-surface engineering to the surfaces of the yeast cells. The fluorescent yeast was recognized as bright spots at the wavelength of 520 nm. Frozen dough samples were sliced at intervals of 1 microm by a micro-slicer image processing system (MSIPS) equipped with a fluorescence microscope for acquiring cross-sectional images of the samples. A set of successive two-dimensional images was reconstructed to analyze the 3-D distribution of the yeast. The average shortest distance between centroids of enhanced green fluorescent protein (EGFP) yeasts was 10.7 microm at the pick-up stage, 9.7 microm at the clean-up stage, 9.0 microm at the final stage, and 10.2 microm at the over-mixing stage. The results indicated that the distribution of the yeast cells was the most uniform in the dough of white bread at the final stage, while the heterogeneous distribution at the over-mixing stage was possibly due to the destruction of the gluten network structure within the samples.
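The reported metric, the average shortest distance between yeast centroids, is a mean nearest-neighbour distance and can be computed directly; a minimal sketch with hypothetical 3D coordinates:

```python
import numpy as np

def mean_nearest_neighbour_distance(centroids):
    """Average distance from each centroid to its closest neighbour in 3D."""
    c = np.asarray(centroids, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # exclude each point's zero self-distance
    return d.min(axis=1).mean()

# Unit-spaced grid: every centroid's nearest neighbour is exactly 1 unit away
grid = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]
print(mean_nearest_neighbour_distance(grid))  # 1.0
```

Smaller values indicate a denser, more uniform packing of yeast cells, which is how the abstract compares the mixing stages.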

  15. Three-dimensional cell shapes and arrangements in human sweat glands as revealed by whole-mount immunostaining

    PubMed Central

    Kurata, Ryuichiro; Futaki, Sugiko; Nakano, Itsuko; Fujita, Fumitaka; Tanemura, Atsushi; Murota, Hiroyuki; Katayama, Ichiro; Okada, Fumihiro

    2017-01-01

    Because sweat secretion is facilitated by mechanical contraction of sweat gland structures, understanding their structure-function relationship could lead to more effective treatments for patients with sweat gland disorders such as heat stroke. Conventional histological studies have shown that sweat glands are three-dimensionally coiled tubular structures consisting of ducts and secretory portions, although their detailed structural anatomy remains unclear. To better understand the details of the three-dimensional (3D) coiled structures of sweat glands, a whole-mount staining method was employed to visualize 3D coiled gland structures with sweat gland markers for ductal luminal, ductal basal, secretory luminal, and myoepithelial cells. Imaging the 3D coiled gland structures demonstrated that the ducts and secretory portions were comprised of distinct tubular structures. Ductal tubules were occasionally bent, while secretory tubules were frequently bent and formed a self-entangled coiled structure. Whole-mount staining of complex coiled gland structures also revealed the detailed 3D cellular arrangements in the individual sweat gland compartments. Ducts were composed of regularly arranged cuboidal shaped cells, while secretory portions were surrounded by myoepithelial cells longitudinally elongated along entangled secretory tubules. Whole-mount staining was also used to visualize the spatial arrangement of blood vessels and nerve fibers, both of which facilitate sweat secretion. The blood vessels ran longitudinally parallel to the sweat gland tubules, while nerve fibers wrapped around secretory tubules, but not ductal tubules. Taken together, whole-mount staining of sweat glands revealed the 3D cell shapes and arrangements of complex coiled gland structures and provides insights into the mechanical contraction of coiled gland structures during sweat secretion. PMID:28636607

  16. 360-degree 3D transvaginal ultrasound system for high-dose-rate interstitial gynaecological brachytherapy needle guidance

    NASA Astrophysics Data System (ADS)

    Rodgers, Jessica R.; Surry, Kathleen; D'Souza, David; Leung, Eric; Fenster, Aaron

    2017-03-01

    Treatment for gynaecological cancers often includes brachytherapy; in particular, in high-dose-rate (HDR) interstitial brachytherapy, hollow needles are inserted into the tumour and surrounding area through a template in order to deliver the radiation dose. Currently, there is no standard modality for visualizing needles intra-operatively, despite the need for precise needle placement in order to deliver the optimal dose and avoid nearby organs, including the bladder and rectum. While three-dimensional (3D) transrectal ultrasound (TRUS) imaging has been proposed for 3D intra-operative needle guidance, anterior needles tend to be obscured by shadowing created by the template's vaginal cylinder. We have developed a 360-degree 3D transvaginal ultrasound (TVUS) system that uses a conventional two-dimensional side-fire TRUS probe rotated inside a hollow vaginal cylinder made from a sonolucent plastic (TPX). The system was validated using grid and sphere phantoms in order to test the geometric accuracy of the distance and volumetric measurements in the reconstructed image. To test the potential for visualizing needles, an agar phantom mimicking the geometry of the female pelvis was used. Needles were inserted into the phantom and then imaged using the 3D TVUS system. The needle trajectories and tip positions in the 3D TVUS scan were compared to their expected values and the needle tracks visualized in magnetic resonance images. Based on this initial study, 360-degree 3D TVUS imaging through a sonolucent vaginal cylinder is a feasible technique for intra-operatively visualizing needles during HDR interstitial gynaecological brachytherapy.

  17. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points stored in bitmap files. The application was extended to work also with comma-separated-values files and three-dimensional images.
    New version program summary
    Program title: Fractal Analysis v02
    Catalogue identifier: AEEG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9999
    No. of bytes in distributed program, including test data, etc.: 4 366 783
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30 M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 2D and 3D images.
    Solution method: Optimized implementation of the box-counting algorithm.
    Reasons for new version: The previous version was limited to bitmap image files. The new application was extended to work with objects stored in comma-separated-values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g., the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
    Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files).
    Additional comments: User-friendly graphical interface; easy deployment mechanism.
    Running time: To a first approximation, the algorithm is linear.
    References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
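The box-counting solution method named in the summary can be sketched in Python (the application itself is Visual Basic 6.0, so this is only an illustration of the algorithm, not the distributed code): the fractal dimension is estimated as the slope of log N(ε) versus log(1/ε), where N(ε) is the number of occupied boxes of side ε.

```python
import numpy as np

def box_counting_dimension(points, exponents=(1, 2, 3, 4)):
    """Estimate fractal dimension as the slope of log N(eps) vs log(1/eps)."""
    pts = np.asarray(points, dtype=float)
    # Normalise coordinates into [0, 1) so box indices stay in range
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)
    pts = np.clip(pts, 0.0, 1.0 - 1e-12)
    xs, ys = [], []
    for e in exponents:
        eps = 0.5 ** e
        # Count distinct occupied boxes at this scale
        occupied = np.unique(np.floor(pts / eps).astype(int), axis=0)
        xs.append(np.log(1.0 / eps))
        ys.append(np.log(len(occupied)))
    slope, _ = np.polyfit(xs, ys, 1)
    return slope

# A densely sampled filled square should give a dimension close to 2
g = np.linspace(0.0, 1.0, 65)
square = np.array([(x, y) for x in g for y in g])
print(round(box_counting_dimension(square), 2))  # 2.0
```

The same routine works unchanged for 3D point sets (e.g. a csv of x, y, z coordinates such as the 3D Sierpinski gasket mentioned above).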

  18. Three-dimensional segmentation of luminal and adventitial borders in serial intravascular ultrasound images

    NASA Technical Reports Server (NTRS)

    Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.

    1999-01-01

    Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and William's index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.

  19. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and open source project OpenSfM, to assist in this process, however, it is well-known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  20. Interactive 3D visualization speeds well, reservoir planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petzet, G.A.

    1997-11-24

    Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large-screen visualization of seismic and other three-dimensional data. A pumpkin-shaped room, or pod, inside a 3,500 sq ft, state-of-the-art facility in southwest Houston houses a supercomputer and projection equipment that Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil and gas related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.

  1. SU-E-T-279: Realization of Three-Dimensional Conformal Dose Planning in Prostate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Z; Jiang, S; Yang, Z

    2014-06-01

    Purpose: Successful clinical treatment in prostate brachytherapy depends largely on the effectiveness of pre-surgery dose planning. Conventional dose planning methods can hardly produce a satisfactory result. In this abstract, a three-dimensional conformal localized dose planning method is put forward to ensure the accuracy and effectiveness of pre-implantation dose planning. Methods: Using the Monte Carlo method, a pre-calculated 3-D dose map for a single source is obtained. For a multiple-seed dose distribution, these maps are combined linearly to acquire the 3-D distribution. The 3-D dose distribution is exhibited in the form of isodose surfaces together with the reconstructed 3-D organ group in real time. It is then possible to observe the dose exposure to the target volume and normal tissues intuitively, achieving maximum dose irradiation to the treatment target with minimal damage to healthy tissue. In addition, the exfoliation display of different isodose surfaces can be realized by applying a multi-value contour extraction algorithm based on voxels. The needles can be displayed in the system by tracking the positions of the implanted seeds in real time, supporting block research for optimizing the insertion trajectory. Results: This study extends dose planning from two dimensions to three, realizing three-dimensional conformal irradiation and eliminating the limitations of 2-D images and two-dimensional dose planning. A software platform was developed using VC++ and the Visualization Toolkit (VTK) to perform dose planning. The 3-D model reconstruction time is within three seconds (on an Intel Core i5 PC). Block research can be conducted to avoid inaccurate insertion into sensitive organs or internal obstructions. Experiments on eight prostate cancer cases show that this approach makes the dose planning results more reasonable.
Conclusion: The three-dimensional conformal dose planning method could improve the rationality of dose planning by safely reducing the large target margin and avoiding dose dead zones in prostate cancer treatment. 1) National Natural Science Foundation of People's Republic of China (No. 51175373); 2) New Century Educational Talents Plan of Chinese Education Ministry (NCET-10-0625); 3) Scientific and Technological Major Project, Tianjin (No. 12ZCDZSY10600)
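The linear combination of pre-calculated single-source dose maps described in the Methods can be sketched as follows; the inverse-square kernel here is a toy stand-in for the Monte Carlo-derived map, and the grid geometry is hypothetical:

```python
import numpy as np

def seed_kernel(shape=(7, 7, 7)):
    """Toy pre-computed single-source 3-D dose map with inverse-square falloff."""
    c = np.array(shape) // 2
    idx = np.indices(shape).reshape(3, -1).T
    r2 = ((idx - c) ** 2).sum(axis=1).astype(float).reshape(shape)
    r2[tuple(c)] = 0.25  # cap the dose at the source voxel itself
    return 1.0 / r2

def total_dose(grid_shape, seed_positions, kernel):
    """Combine shifted copies of the kernel linearly over all seeds."""
    dose = np.zeros(grid_shape)
    k = np.array(kernel.shape) // 2
    for p in seed_positions:
        p = np.array(p)          # assumes seeds lie far enough from the edges
        lo, hi = p - k, p + k + 1
        dose[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] += kernel
    return dose

kernel = seed_kernel()
dose = total_dose((20, 20, 20), [(8, 8, 8), (8, 8, 12)], kernel)
print(dose[8, 8, 8], dose[8, 8, 10])  # 4.0 0.5
```

The resulting 3-D dose array is exactly what would be rendered as isodose surfaces alongside the reconstructed organ group.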

  2. One-stop shop for 3-dimensional anatomy of hepatic vasculature and bile duct with special reference to biliary image reconstruction.

    PubMed

    Enkhbold, Ch; Shimada, M; Utsunomiya, T; Ishibashi, H; Yamada, S; Kanamoto, M; Arakawa, Y; Ikemoto, Z; Morine, E; Imura, S

    2013-01-01

    Three-dimensional CT has become an essential tool for successful hepatic surgery. Up to now, efforts have been made to simultaneously visualize the hepatic vasculature and bile ducts. Herein, we introduce a new one-stop shop approach to hepatic 3D anatomy using a standard enhanced MDCT alone. A 3D reconstruction of the hepatic vasculature was made using data from contrast-enhanced MDCT and SYNAPSE VINCENT software. We identified bile ducts from the axial 2D images and then reconstructed the 3D image. Both the hepatic vasculature and bile duct images were integrated into a single image, which was compared with the 3D image obtained with MRCP or DIC-CT. The first branches of both the right and left hepatic ducts were hand-traced and visualized in all 100 cases. The second branches of these ducts were visualized in 69 cases, and only the right second branch was recognized in 52 cases. Anomalous variations of the bile ducts, such as the posterior branch joining the common hepatic duct, were recognized in 12 cases. These biliary tract variations were all confirmed by MRCP or DIC-CT. Our new one-stop shop approach using the 3D imaging technique might contribute to successful hepatectomy as well as reduce medical costs and radiation exposure by omitting MRCP and DIC-CT.

  3. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.) in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D game engine, coded using C#, JavaScript, and the Unity scripting language. This visualization tool can be used through a standard web browser or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects of various shapes, colors, and sizes, as well as XYZ positions, encoding dimensions of the parameter space that can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own independent vantage points or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added.
We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.

  4. The Fabric of the Universe: Exploring the Cosmic Web in 3D Prints and Woven Textiles

    NASA Astrophysics Data System (ADS)

    Diemer, Benedikt; Facio, Isaac

    2017-05-01

    We introduce The Fabric of the Universe, an art and science collaboration focused on exploring the cosmic web of dark matter with unconventional techniques and materials. We discuss two of our projects in detail. First, we describe a pipeline for translating three-dimensional (3D) density structures from N-body simulations into solid surfaces suitable for 3D printing, and present prints of a cosmological volume and of the infall region around a massive cluster halo. In these models, we discover wall-like features that are invisible in two-dimensional projections. Going beyond the sheer visualization of simulation data, we undertake an exploration of the cosmic web as a three-dimensional woven textile. To this end, we develop experimental 3D weaving techniques to create sphere-like and filamentary shapes and radically simplify a region of the cosmic web into a set of filaments and halos. We translate the resulting tree structure into a series of commands that can be executed by a digital weaving machine, and present a large-scale textile installation.

  5. Java 3D Interactive Visualization for Astrophysics

    NASA Astrophysics Data System (ADS)

    Chae, K.; Edirisinghe, D.; Lingerfelt, E. J.; Guidry, M. W.

    2003-05-01

    We are developing a series of interactive 3D visualization tools that employ the Java 3D API. We have applied this approach initially to a simple 3-dimensional galaxy collision model (restricted 3-body approximation), with quite satisfactory results. Running either as an applet under Web browser control, or as a Java standalone application, this program permits real-time zooming, panning, and 3-dimensional rotation of the galaxy collision simulation under user mouse and keyboard control. We shall also discuss applications of this technology to 3-dimensional visualization for other problems of astrophysical interest such as neutron star mergers and the time evolution of element/energy production networks in X-ray bursts. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  6. Stereo chromatic contrast sensitivity model to blue-yellow gratings.

    PubMed

    Yang, Jiachen; Lin, Yancong; Liu, Yun

    2016-03-07

    As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection thresholds for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, a quick and valid index of human visual performance and various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance, and no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and build a model applicable there, for example, for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method to check human visual characteristics related to stereo blindness. In this study, a CRT screen was rotated clockwise and anti-clockwise to form inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse of the pooled cone contrast threshold. Based on the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled from the experimental data. The results show that the proposed model predicts human chromatic contrast sensitivity characteristics in 3D space well.
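The stated sensitivity definition, the inverse of the pooled cone contrast threshold, is easy to make concrete. The root-sum-of-squares pooling over L, M, S cone contrasts used below is a common convention and an assumption here (the paper's exact pooling rule is not given in the abstract), and the numeric cone contrasts are hypothetical:

```python
import numpy as np

def pooled_cone_contrast(l_c, m_c, s_c):
    """Pooled cone contrast as the vector length of the L, M, S cone
    contrasts (a common convention; assumed, not taken from the paper)."""
    return np.sqrt(l_c**2 + m_c**2 + s_c**2)

def contrast_sensitivity(threshold):
    """Sensitivity is defined as the inverse of the contrast threshold."""
    return 1.0 / threshold

thr = pooled_cone_contrast(0.03, 0.04, 0.0)      # hypothetical cone contrasts
print(round(contrast_sensitivity(thr), 6))       # 20.0
```

A threshold of 5% pooled cone contrast thus corresponds to a sensitivity of 20, and lower thresholds map to higher sensitivities.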

  7. Towards a gestural 3D interaction for tangible and three-dimensional GIS visualizations

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Pattakos, Nikolas; Maragakis, Michail

    2014-05-01

    The last decade has been characterized by a significant increase in spatially dependent applications that require storage, visualization, analysis, and exploration of geographic information. GIS analysis of spatiotemporal geographic data is performed by highly trained personnel using an abundance of software and tools that lack interoperability and friendly user interaction. Towards this end, new forms of querying and interaction are emerging, including gestural interfaces. Three-dimensional GIS representations involve either tangible surfaces or projected representations. Making a 3D tangible geographic representation touch-sensitive may be a convenient solution, but such an approach raises the cost significantly and complicates the hardware and processing required to combine touch-sensitive material (for pinpointing points) with deformable material (for displaying elevations). In this study, a novel interaction scheme for a three-dimensional visualization of GIS data is proposed. While gestural user interfaces are not yet fully accepted due to inconsistencies and complexity, a non-tangible GIS system in which 3D visualizations are projected calls for interactions based on three-dimensional, non-contact, gestural procedures. Towards these objectives, we use the Microsoft Kinect II system, which includes a time-of-flight camera, allowing robust, real-time depth map generation along with the capture and translation of a variety of predefined gestures from multiple simultaneous users. By incorporating these features into our system architecture, we attempt to create a natural way for users to operate on GIS data. Apart from the conventional pan and zoom features, the key functions addressed by the 3-D user interface are the ability to pinpoint particular points, lines, and areas of interest, such as destinations, waypoints, landmarks, closed areas, etc.
The first results shown concern a projected GIS representation in which the user selects points and regions of interest while the GIS component responds accordingly by changing the scenario in a natural disaster application. Creating a 3D model representation of geospatial data provides a natural way for users to perceive and interact with space. To the best of our knowledge, this is the first attempt to use Kinect II for GIS applications, and more generally for virtual environments, using novel human-computer interaction methods. Under a robust decision support system, users are able to interact with, combine, and computationally analyze information in three dimensions using gestures. This study promotes geographic awareness and education and will prove beneficial for a wide range of geoscience applications, including natural disaster and emergency management. Acknowledgements: This work is partially supported under the framework of the "Cooperation 2011" project ATLANTAS (11_SYN_6_1937), funded by the Operational Program "Competitiveness and Entrepreneurship" (co-funded by the European Regional Development Fund (ERDF)) and managed by the Greek General Secretariat for Research and Technology.

  8. Three-dimensional analysis of scoliosis surgery using stereophotogrammetry

    NASA Astrophysics Data System (ADS)

    Jang, Stanley B.; Booth, Kellogg S.; Reilly, Chris W.; Sawatzky, Bonita J.; Tredwell, Stephen J.

    1994-04-01

    A new stereophotogrammetric analysis and 3D visualization allow accurate assessment of the scoliotic spine during instrumentation. Stereophoto pairs taken at each stage of the operation and robust statistical techniques are used to compute 3D transformations of the vertebrae between stages. These determine rotation, translation, goodness of fit, and overall spinal contour. A polygonal model of the spine built with a commercial 3D modeling package is used to produce an animation sequence of the transformation. The visualizations have provided some important observations. Correction of the scoliosis is achieved largely through vertebral translation and coronal-plane rotation, contrary to claims that large axial rotations are required. The animations provide valuable qualitative information for surgeons assessing the results of scoliotic correction.
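Computing a vertebra's 3D rotation and translation between stages from matched landmark points can be sketched with the standard Kabsch/Procrustes solution; the paper's robust statistical techniques are not reproduced, and the landmark coordinates below are synthetic:

```python
import numpy as np

def rigid_transform(a, b):
    """Least-squares rotation R and translation t such that b ≈ a @ R.T + t
    (Kabsch algorithm via SVD of the cross-covariance matrix)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    h = (a - ca).T @ (b - cb)                 # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cb - ca @ r.T

# Recover a known 30-degree coronal-plane rotation plus a translation
theta = np.radians(30)
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.random.default_rng(1).normal(size=(6, 3))   # vertebral landmarks
moved = pts @ r_true.T + np.array([1.0, -2.0, 0.5])
r, t = rigid_transform(pts, moved)
print(np.allclose(r, r_true))  # True
```

The recovered rotation matrix can then be decomposed into coronal, sagittal, and axial components to quantify how the correction was achieved.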

  9. Design and application of BIM based digital sand table for construction management

    NASA Astrophysics Data System (ADS)

    Fuquan, JI; Jianqiang, LI; Weijia, LIU

    2018-05-01

    This paper explores the design and application of a BIM-based digital sand table for construction management. Addressing the demands and features of construction management planning for bridge and tunnel engineering, the key functional features of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These involve the technologies of 3D visualization and 4D virtual simulation in BIM, breakdown structures for BIM models and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual and virtual engineering-information terminal integrated under a unified data standard system. Applications include visual construction schemes, virtual construction schedules, and construction monitoring. Finally, the applicability of several basic software packages to the digital sand table is analyzed.

  10. Micro-Mirrors for Nanoscale Three-Dimensional Microscopy

    PubMed Central

    Seale, Kevin; Janetopoulos, Chris; Wikswo, John

    2013-01-01

    A research-grade optical microscope is capable of resolving fine structures in two-dimensional images. However, three-dimensional resolution, or the ability of the microscope to distinguish objects lying above or below the focal plane from in-focus objects, is not nearly as good as in-plane resolution. In this issue of ACS Nano, McMahon et al. report the use of mirrored pyramidal wells (MPWs) with a conventional microscope for rapid 3D localization and tracking of nanoparticles. Mirrors have been used in microscopy before, but recent work with MPWs is unique because it enables the rapid determination of the x-, y-, and z-positions of freely diffusing nanoparticles and cellular nanostructures with unprecedented speed and spatial accuracy. As inexpensive tools for 3D visualization, mirrored pyramidal wells may prove to be invaluable aids in nanotechnology and the engineering of nanomaterials. PMID:19309167

  11. Three-dimensional printing of orbital and peri-orbital masses in three dogs and its potential applications in veterinary ophthalmology.

    PubMed

    Dorbandt, Daniel M; Joslyn, Stephen K; Hamor, Ralph E

    2017-01-01

    To describe the technique and utility of three-dimensional (3D) printing for orbital and peri-orbital masses and discuss other potential applications for 3D printing. Three dogs with a chronic history of nonpainful exophthalmos. Computed tomography (CT) and subsequent 3D printing of the head was performed on each case. CT confirmed a confined mass, and an ultrasound-guided biopsy was obtained in each circumstance. An orbitotomy was tentatively planned for each case, and a 3D print of each head with the associated globe and mass was created to assist in surgical planning. In case 1, the mass was located in the cranioventral aspect of the right orbit, and the histopathologic diagnosis was adenoma. In case 2, the mass was located within the lateral masseter muscle, ventral to the right orbit between the zygomatic arch and the ramus of the mandible. The histopathologic diagnosis in case 2 was consistent with a lipoma. In case 3, the mass was located in the ventral orbit, and the histopathologic diagnosis was histiocytic cellular infiltrate. Three-dimensional printing in cases with orbital and peri-orbital masses has exceptional potential for improved surgical planning and provides another modality for visualization to help veterinarians, students, and owners understand distribution of disease. Additionally, as the techniques of 3D printing continue to evolve, the potential exists to revolutionize ocular surgery and drug delivery. © 2016 American College of Veterinary Ophthalmologists.

  12. Diagnostic radiograph based 3D bone reconstruction framework: application to the femur.

    PubMed

    Gamage, P; Xie, S Q; Delmas, P; Xu, W L

    2011-09-01

    Three dimensional (3D) visualization of anatomy plays an important role in image guided orthopedic surgery and ultimately motivates minimally invasive procedures. However, direct 3D imaging modalities such as Computed Tomography (CT) are restricted to a minority of complex orthopedic procedures. Thus the diagnostics and planning of many interventions still rely on two dimensional (2D) radiographic images, where the surgeon has to mentally visualize the anatomy of interest. The purpose of this paper is to apply and validate a bi-planar 3D reconstruction methodology driven by prominent bony anatomy edges and contours identified on orthogonal radiographs. The results obtained through the proposed methodology are benchmarked against 3D CT scan data to assess the accuracy of reconstruction. The human femur has been used as the anatomy of interest throughout the paper. The novelty of this methodology is that it not only involves the outer contours of the bony anatomy in the reconstruction but also several key interior edges identifiable on radiographic images. Hence, this framework is not simply limited to long bones, but is generally applicable to a multitude of other bony anatomies as illustrated in the results section. Copyright © 2010 Elsevier Ltd. All rights reserved.
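
    Under an idealized parallel-projection model, the core geometric idea of bi-planar reconstruction is that two orthogonal radiographs supply complementary coordinates of the same landmark: the frontal (AP) view gives (x, y) and the lateral view gives (z, y). The toy sketch below illustrates only that correspondence; the function name and setup are assumptions for illustration, and the paper's actual method additionally deforms a template model to match the identified edges and contours.

    ```python
    def landmark_from_biplanar(ap_xy, lat_zy):
        """Combine a landmark seen on two orthogonal radiographs into a 3D point,
        assuming ideal parallel projections that share the vertical (y) axis.

        ap_xy  : (x, y) on the frontal (anteroposterior) film
        lat_zy : (z, y) on the lateral film

        The shared y coordinates should agree; average them to absorb small
        calibration error between the two films."""
        x, y_ap = ap_xy
        z, y_lat = lat_zy
        return (x, (y_ap + y_lat) / 2.0, z)

    # A landmark at x=10, y=20 on the AP film and z=5, y=20 on the lateral film.
    p = landmark_from_biplanar((10.0, 20.0), (5.0, 20.0))
    ```

    Real radiographs are perspective (cone-beam) projections, which is one reason the paper relies on calibrated contour matching rather than this direct coordinate pairing.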

  13. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media in place of existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for compressing the tremendous amount of ray data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
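
    The ray-based representation described above is closely related to the two-plane ("light slab") parameterization used in light-field research, in which each ray is indexed by its intersections (u, v) and (s, t) with two parallel planes. A minimal sketch of that indexing, with geometry chosen for illustration (the authors' own ray-data formats are not reproduced here):

    ```python
    import numpy as np

    # Two-plane parameterization: a ray is indexed by its intersection (u, v)
    # with a plane at z = 0 and (s, t) with a parallel plane at z = 1.
    def ray_from_slab(u, v, s, t):
        """Return (origin, unit direction) of the ray through (u, v, 0) and (s, t, 1)."""
        origin = np.array([u, v, 0.0])
        direction = np.array([s - u, t - v, 1.0])
        return origin, direction / np.linalg.norm(direction)

    def point_on_ray(u, v, s, t, z):
        """Point where the (u, v, s, t) ray crosses the plane at depth z."""
        origin, d = ray_from_slab(u, v, s, t)
        return origin + (z / d[2]) * d

    # A ray entering the slab at (0, 0) and exiting at (1, 0) crosses
    # the mid-plane z = 0.5 at x = 0.5.
    p = point_on_ray(0.0, 0.0, 1.0, 0.0, 0.5)
    ```

    Rendering a novel view then amounts to looking up, for each output pixel, the stored ray whose (u, v, s, t) index matches the pixel's viewing ray.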

  14. Utilization of high resolution computed tomography to visualize the three dimensional structure and function of plant vasculature

    USDA-ARS?s Scientific Manuscript database

    High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D). HRCT imaging is based on the same principles as medi...

  15. Three-Dimensional Printing and Its Applications in Otorhinolaryngology-Head and Neck Surgery.

    PubMed

    Crafts, Trevor D; Ellsperman, Susan E; Wannemuehler, Todd J; Bellicchi, Travis D; Shipchandler, Taha Z; Mantravadi, Avinash V

    2017-06-01

    Objective Three-dimensional (3D)-printing technology is being employed in a variety of medical and surgical specialties to improve patient care and advance resident physician training. As the costs of implementing 3D printing have declined, the use of this technology has expanded, especially within surgical specialties. This article explores the types of 3D printing available, highlights the benefits and drawbacks of each methodology, provides examples of how 3D printing has been applied within the field of otolaryngology-head and neck surgery, discusses future innovations, and explores the financial impact of these advances. Data Sources Articles were identified from PubMed and Ovid MEDLINE. Review Methods PubMed and Ovid MEDLINE were queried for English articles published between 2011 and 2016, including a few articles prior to this time as relevant examples. Search terms included 3-dimensional printing, 3 D printing, otolaryngology, additive manufacturing, craniofacial, reconstruction, temporal bone, airway, sinus, cost, and anatomic models. Conclusions Three-dimensional printing has been used in recent years in otolaryngology for preoperative planning, education, prostheses, grafting, and reconstruction. Emerging technologies include the printing of tissue scaffolds for the auricle and nose, more realistic training models, and personalized implantable medical devices. Implications for Practice After the up-front costs of 3D printing are accounted for, its utilization in surgical models, patient-specific implants, and custom instruments can reduce operating room time and thus decrease costs. Educational and training models provide an opportunity to better visualize anomalies, practice surgical technique, predict problems that might arise, and improve quality by reducing mistakes.

  16. A Learner-Centered Approach for Training Science Teachers through Virtual Reality and 3D Visualization Technologies: Practical Experience for Sharing

    ERIC Educational Resources Information Center

    Yeung, Yau-Yuen

    2004-01-01

    This paper presentation will report on how some science educators at the Science Department of The Hong Kong Institute of Education have successfully employed an array of innovative learning media such as three-dimensional (3D) and virtual reality (VR) technologies to create seven sets of resource kits, most of which are being placed on the…

  17. Advances in high-resolution imaging--techniques for three-dimensional imaging of cellular structures.

    PubMed

    Lidke, Diane S; Lidke, Keith A

    2012-06-01

    A fundamental goal in biology is to determine how cellular organization is coupled to function. To achieve this goal, a better understanding of organelle composition and structure is needed. Although visualization of cellular organelles using fluorescence or electron microscopy (EM) has become a common tool for the cell biologist, recent advances are providing a clearer picture of the cell than ever before. In particular, advanced light-microscopy techniques are achieving resolutions below the diffraction limit and EM tomography provides high-resolution three-dimensional (3D) images of cellular structures. The ability to perform both fluorescence and electron microscopy on the same sample (correlative light and electron microscopy, CLEM) makes it possible to identify where a fluorescently labeled protein is located with respect to organelle structures visualized by EM. Here, we review the current state of the art in 3D biological imaging techniques with a focus on recent advances in electron microscopy and fluorescence super-resolution techniques.

  18. An MR-compatible stereoscopic in-room 3D display for MR-guided interventions.

    PubMed

    Brunner, Alexander; Groebner, Jens; Umathum, Reiner; Maier, Florian; Semmler, Wolfhard; Bock, Michael

    2014-08-01

    A commercial three-dimensional (3D) monitor was modified for use inside the scanner room to provide stereoscopic real-time visualization during magnetic resonance (MR)-guided interventions, and tested in a catheter-tracking phantom experiment at 1.5 T. Brightness, uniformity, radio frequency (RF) emissions and MR image interferences were measured. Due to modifications, the center luminance of the 3D monitor was reduced by 14%, and the addition of a Faraday shield further reduced the remaining luminance by 31%. RF emissions could be effectively shielded; only a minor signal-to-noise ratio (SNR) decrease of 4.6% was observed during imaging. During the tracking experiment, the 3D orientation of the catheter and vessel structures in the phantom could be visualized stereoscopically.

  19. Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system

    NASA Astrophysics Data System (ADS)

    Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu

    2018-09-01

    A floating aerial three-dimensional (3D) display based on a freeform mirror and an improved integral imaging system is demonstrated. In traditional integral imaging (II), distortion originating from lens aberration warps the elemental images and severely degrades the visual effect. To correct the distortion of the observed pixels and improve image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with large off-screen depth, which limits the floating aerial visual experience. To display the 3D image in free space, an off-axis reflection system with a freeform mirror is designed. By combining the improved II with the designed freeform optical element, a floating aerial 3D image is presented.

  20. Three-dimensional contrasted visualization of pancreas in rats using clinical MRI and CT scanners.

    PubMed

    Yin, Ting; Coudyzer, Walter; Peeters, Ronald; Liu, Yewei; Cona, Marlein Miranda; Feng, Yuanbo; Xia, Qian; Yu, Jie; Jiang, Yansheng; Dymarkowski, Steven; Huang, Gang; Chen, Feng; Oyen, Raymond; Ni, Yicheng

    2015-01-01

    The purpose of this work was to visualize the pancreas in post-mortem rats with local contrast medium infusion by three-dimensional (3D) magnetic resonance imaging (MRI) and computed tomography (CT) using clinical imagers. A total of 16 Sprague Dawley rats of about 300 g were used for the pancreas visualization. Following the baseline imaging, a mixed contrast medium dye called GadoIodo-EB containing optimized concentrations of Gd-DOTA, iomeprol and Evans blue was infused into the distally obstructed common bile duct (CBD) for post-contrast imaging with 3.0 T MRI and 128-slice CT scanners. Images were post-processed with the MeVisLab software package. MRI findings were co-registered with CT scans and validated with histomorphology, with relative contrast ratios quantified. Without contrast enhancement, the pancreas was indiscernible. After infusion of GadoIodo-EB solution, only the pancreatic region became clearly visible, as shown by 3D-rendered MRI and CT and proven by colored dissection and histological examinations. The measured volume of the pancreas averaged 1.12 ± 0.04 cm³ after standardization. Relative contrast ratios were 93.28 ± 34.61% and 26.45 ± 5.29% for MRI and CT respectively. We have developed a multifunctional contrast medium dye to help clearly visualize and delineate rat pancreas in situ using clinical MRI and CT scanners. The topographic landmarks thus created with 3D demonstration may help to provide guidelines for future in vivo pancreatic MRI research in rodents. Copyright © 2015 John Wiley & Sons, Ltd.
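
    The relative contrast ratios quoted above are, under one common definition (an assumption here; the study's exact formula is not stated in the abstract), the percent signal difference between the enhanced region of interest and surrounding tissue, normalized to the surrounding tissue:

    ```python
    def relative_contrast_ratio(roi_signal, background_signal):
        """Percent signal enhancement of an ROI relative to background:
        100 * (ROI - background) / background."""
        return 100.0 * (roi_signal - background_signal) / background_signal

    # e.g. an ROI reading 1.9 over a background of 1.0 gives 90% enhancement,
    # on the order of the post-contrast MRI value reported above.
    rcr = relative_contrast_ratio(1.9, 1.0)
    ```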

  1. Digital preservation of anatomical variation: 3D-modeling of embalmed and plastinated cadaveric specimens using uCT and MRI.

    PubMed

    Moore, Colin W; Wilson, Timothy D; Rice, Charles L

    2017-01-01

    Anatomy educators have an opportunity to teach anatomical variations as a part of medical and allied health curricula using both cadaveric and three-dimensional (3D) digital models of these specimens. Beyond published cadaveric case reports, anatomical variations identified during routine gross anatomy dissection can be powerful teaching tools and a medium to discuss several anatomical sub-disciplines from embryology to medical imaging. The purpose of this study is to document how cadaveric anatomical variation identified during routine dissection can be scanned using medical imaging techniques to create two-dimensional axial images and interactive 3D models for teaching and learning of anatomical variations. Three cadaveric specimens (2 formalin embalmed, 1 plastinated) depicting anatomical variations and an embryological malformation were scanned using magnetic resonance imaging (MRI) and micro-computed tomography (μCT) for visualization in cross-section and for creation of 3D volumetric models. Results provide educational options to enable visualization and facilitate learning of anatomical variations from cross-sectional scans. Furthermore, the variations can be highlighted, digitized, modeled and manipulated using 3D imaging software and viewed in the anatomy laboratory in conjunction with traditional anatomical dissection. This study provides an example for anatomy educators to teach and describe anatomical variations in the undergraduate medical curriculum. Copyright © 2016 Elsevier GmbH. All rights reserved.

  2. A system and methodology for high-content visual screening of individual intact living cells in suspension

    NASA Astrophysics Data System (ADS)

    Renaud, Olivier; Heintzmann, Rainer; Sáez-Cirión, Asier; Schnelle, Thomas; Mueller, Torsten; Shorte, Spencer

    2007-02-01

    Three dimensional imaging provides high-content information from living intact biology, and can serve as a visual screening cue. In the case of single cell imaging the current state of the art uses so-called "axial through-stacking". However, three-dimensional axial through-stacking requires that the object (i.e. a living cell) be adherently stabilized on an optically transparent surface, usually glass, evidently precluding the use of cells in suspension. Aiming to overcome this limitation, we present here the utility of dielectric field trapping of single cells in three-dimensional electrode cages. Our approach allows gentle and precise spatial orientation and vectored rotation of living, non-adherent cells in fluid suspension. Using various modes of widefield and confocal microscope imaging, we show how so-called "microrotation" can provide a unique and powerful method for multiple point-of-view (three-dimensional) interrogation of intact living biological micro-objects (e.g. single cells, cell aggregates, and embryos). Further, we show how visual screening by microrotation imaging can be combined with micro-fluidic sorting, allowing selection of rare phenotype targets from small populations of cells in suspension, and subsequent one-step single cell cloning with high viability. Our methodology, combining high-content 3D visual screening with one-step single cell cloning, will impact diverse paradigms, for example cytological and cytogenetic analysis of haematopoietic stem cells, blood cells including lymphocytes, and cancer cells.

  3. Lesions to right posterior parietal cortex impair visual depth perception from disparity but not motion cues

    PubMed Central

    Leopold, David A.; Humphreys, Glyn W.; Welchman, Andrew E.

    2016-01-01

    The posterior parietal cortex (PPC) is understood to be active when observers perceive three-dimensional (3D) structure. However, it is not clear how central this activity is in the construction of 3D spatial representations. Here, we examine whether PPC is essential for two aspects of visual depth perception by testing patients with lesions affecting this region. First, we measured subjects' ability to discriminate depth structure in various 3D surfaces and objects using binocular disparity. Patients with lesions to right PPC (N = 3) exhibited marked perceptual deficits on these tasks, whereas those with left hemisphere lesions (N = 2) were able to reliably discriminate depth as accurately as control subjects. Second, we presented an ambiguous 3D stimulus defined by structure from motion to determine whether PPC lesions influence the rate of bistable perceptual alternations. Patients' percept durations for the 3D stimulus were generally within a normal range, although the two patients with bilateral PPC lesions showed the fastest perceptual alternation rates in our sample. Intermittent stimulus presentation reduced the reversal rate similarly across subjects. Together, the results suggest that PPC plays a causal role in both inferring and maintaining the perception of 3D structure with stereopsis supported primarily by the right hemisphere, but do not lend support to the view that PPC is a critical contributor to bistable perceptual alternations. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269606

  4. "Building" 3D visualization skills in mineralogy

    NASA Astrophysics Data System (ADS)

    Gaudio, S. J.; Ajoku, C. N.; McCarthy, B. S.; Lambart, S.

    2016-12-01

    Studying mineralogy is fundamental for understanding the composition and physical behavior of natural materials in terrestrial and extraterrestrial environments. However, some students struggle and ultimately get discouraged with mineralogy course material because they lack well-developed spatial visualization skills that are needed to deal with three-dimensional (3D) objects, such as crystal forms or atomic-scale structures, typically represented in two-dimensional (2D) space. Fortunately, spatial visualization can improve with practice. Our presentation demonstrates a set of experiential learning activities designed to support the development and improvement of spatial visualization skills in mineralogy using commercially available magnetic building tiles, rods, and spheres. These instructional support activities guide students in the creation of 3D models that replicate macroscopic crystal forms and atomic-scale structures in a low-pressure learning environment and at low cost. Students physically manipulate square and triangularly shaped magnetic tiles to build 3D open and closed crystal forms (platonic solids, prisms, pyramids and pinacoids). Prismatic shapes with different closing forms are used to demonstrate the relationship between crystal faces and Miller Indices. Silica tetrahedra and octahedra are constructed out of magnetic rods (bonds) and spheres (oxygen atoms) to illustrate polymerization, connectivity, and the consequences for mineral formulae. In another activity, students practice the identification of symmetry elements and plane lattice types by laying magnetic rods and spheres over wallpaper patterns. The spatial visualization skills developed and improved through our experiential learning activities are critical to the study of mineralogy and many other geology sub-disciplines. We will also present pre- and post- activity assessments that are aligned with explicit learning outcomes.
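
    The relationship between crystal faces and Miller indices that the prism-building activity demonstrates can also be stated algorithmically: take the reciprocals of the face's intercepts on the crystallographic axes (a face parallel to an axis has an infinite intercept, so its reciprocal is 0) and clear fractions to the smallest integers. A sketch of that standard procedure (the function name is an illustrative choice, not part of the activity):

    ```python
    from fractions import Fraction
    from functools import reduce
    from math import gcd

    def miller_indices(intercepts):
        """Miller indices (h k l) from a face's axis intercepts.

        An intercept of None means the face is parallel to that axis
        (infinite intercept), so its reciprocal is 0."""
        recips = [Fraction(0) if a is None else Fraction(1) / Fraction(a)
                  for a in intercepts]
        # Clear denominators: multiply by the lcm of all denominators.
        lcm = reduce(lambda x, y: x * y // gcd(x, y),
                     (r.denominator for r in recips), 1)
        ints = [int(r * lcm) for r in recips]
        # Reduce to smallest integers by dividing out the common factor.
        g = reduce(gcd, (abs(i) for i in ints if i != 0), 0)
        return tuple(i // g for i in ints) if g else tuple(ints)

    # Intercepts (1, 1, ∞) give the (110) face; (1, 2, ∞) give (210).
    face = miller_indices((1, 2, None))
    ```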

  5. Geological mapping goes 3-D in response to societal needs

    USGS Publications Warehouse

    Thorleifson, H.; Berg, R.C.; Russell, H.A.J.

    2010-01-01

    The transition to 3-D mapping has been made possible by technological advances in digital cartography, GIS, data storage, analysis, and visualization. Despite various challenges, technological advancements facilitated a gradual transition from 2-D maps to 2.5-D draped maps to 3-D geological mapping, supported by digital spatial and relational databases that can be interrogated horizontally or vertically and viewed interactively. Challenges associated with data collection, human resources, and information management are daunting due to their resource and training requirements. The exchange of strategies at the workshops has highlighted the use of basin analysis to develop a process-based predictive knowledge framework that facilitates data integration. Three-dimensional geological information meets a public demand that fills in the blanks left by conventional 2-D mapping. Two-dimensional mapping will, however, remain the standard method for extensive areas of complex geology, particularly where deformed igneous and metamorphic rocks defy attempts at 3-D depiction.

  6. High brightness x ray source for directed energy and holographic imaging applications, phase 2

    NASA Astrophysics Data System (ADS)

    McPherson, Armon; Rhodes, Charles K.

    1992-03-01

    Advances in x-ray imaging technology and x-ray sources are such that a new technology can be brought to commercialization enabling the three-dimensional (3-D) microvisualization of hydrated biological specimens. The Company is engaged in a program whose main goal is the development of a new technology for direct three dimensional (3-D) x-ray holographic imaging. It is believed that this technology will have a wide range of important applications in the defense, medical, and scientific sectors. For example, in the medical area, it is expected that biomedical science will constitute a very active and substantial market, because the application of physical technologies for the direct visualization of biological entities has had a long and extremely fruitful history.

  7. Three-dimensional landing zone joint capability technology demonstration

    NASA Astrophysics Data System (ADS)

    Savage, James; Goodrich, Shawn; Ott, Carl; Szoboszlay, Zoltan; Perez, Alfonso; Soukup, Joel; Burns, H. N.

    2014-06-01

    The Three-Dimensional Landing Zone (3D-LZ) Joint Capability Technology Demonstration (JCTD) is a 27-month program to develop an integrated LADAR and FLIR capability upgrade for USAF Combat Search and Rescue HH-60G Pave Hawk helicopters through a retrofit of current Raytheon AN/AAQ-29 turret systems. The 3D-LZ JCTD builds upon a history of technology programs using high-resolution, imaging LADAR to address rotorcraft cruise, approach to landing, landing, and take-off in degraded visual environments with emphasis on brownout, cable warning and obstacle avoidance, and avoidance of controlled flight into terrain. This paper summarizes LADAR development, flight test milestones, and plans for a final flight test demonstration and Military Utility Assessment in 2014.

  8. Superimposition of 3-dimensional cone-beam computed tomography models of growing patients

    PubMed Central

    Cevidanes, Lucia H. C.; Heymann, Gavin; Cornelis, Marie A.; DeClerck, Hugo J.; Tulloch, J. F. Camilla

    2009-01-01

    Introduction The objective of this study was to evaluate a new method for superimposition of 3-dimensional (3D) models of growing subjects. Methods Cone-beam computed tomography scans were taken before and after Class III malocclusion orthopedic treatment with miniplates. Three observers independently constructed 18 3D virtual surface models from cone-beam computed tomography scans of 3 patients. Separate 3D models were constructed for soft-tissue, cranial base, maxillary, and mandibular surfaces. The anterior cranial fossa was used to register the 3D models of before and after treatment (about 1 year of follow-up). Results Three-dimensional overlays of superimposed models and 3D color-coded displacement maps allowed visual and quantitative assessment of growth and treatment changes. The range of interobserver errors for each anatomic region was 0.4 mm for the zygomatic process of maxilla, chin, condyles, posterior border of the rami, and lower border of the mandible, and 0.5 mm for the anterior maxilla soft-tissue upper lip. Conclusions Our results suggest that this method is a valid and reproducible assessment of treatment outcomes for growing subjects. This technique can be used to identify maxillary and mandibular positional changes and bone remodeling relative to the anterior cranial fossa. PMID:19577154
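
    The color-coded displacement maps described above encode, for each point of the registered before-treatment surface, its distance to the closest point on the after-treatment surface. A minimal nearest-neighbor version of that measurement (an illustrative sketch under the assumption that both surfaces are already registered on a common reference, here the anterior cranial fossa; the study's actual software pipeline is not reproduced):

    ```python
    import numpy as np

    def surface_displacements(before_pts, after_pts):
        """For each point of the 'before' surface, the Euclidean distance to
        the nearest point on the 'after' surface (brute-force nearest
        neighbor). Both inputs are (N, 3) arrays in the same registered frame."""
        before = np.asarray(before_pts, dtype=float)
        after = np.asarray(after_pts, dtype=float)
        # Pairwise distance matrix, shape (len(before), len(after)).
        d = np.linalg.norm(before[:, None, :] - after[None, :, :], axis=2)
        return d.min(axis=1)

    # Two toy "surfaces": the second is the first shifted 0.4 mm along x,
    # so every point of the first lies 0.4 mm from the second.
    a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    b = a + np.array([0.4, 0.0, 0.0])
    disp = surface_displacements(a, b)
    ```

    In practice the per-point distances are mapped to a color scale on the mesh, which is what yields the visual assessment of growth and treatment change; a k-d tree would replace the brute-force search for full-resolution models.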

  9. Sockeye: A 3D Environment for Comparative Genomics

    PubMed Central

    Montgomery, Stephen B.; Astakhova, Tamara; Bilenky, Mikhail; Birney, Ewan; Fu, Tony; Hassel, Maik; Melsopp, Craig; Rak, Marcin; Robertson, A. Gordon; Sleumer, Monica; Siddiqui, Asim S.; Jones, Steven J.M.

    2004-01-01

    Comparative genomics techniques are used in bioinformatics analyses to identify the structural and functional properties of DNA sequences. As the amount of available sequence data steadily increases, the ability to perform large-scale comparative analyses has become increasingly relevant. In addition, the growing complexity of genomic feature annotation means that new approaches to genomic visualization need to be explored. We have developed a Java-based application called Sockeye that uses three-dimensional (3D) graphics technology to facilitate the visualization of annotation and conservation across multiple sequences. This software uses the Ensembl database project to import sequence and annotation information from several eukaryotic species. A user can additionally import their own custom sequence and annotation data. Individual annotation objects are displayed in Sockeye by using custom 3D models. Ensembl-derived and imported sequences can be analyzed by using a suite of multiple and pair-wise alignment algorithms. The results of these comparative analyses are also displayed in the 3D environment of Sockeye. By using the Java3D API to visualize genomic data in a 3D environment, we are able to compactly display cross-sequence comparisons. This provides the user with a novel platform for visualizing and comparing genomic feature organization. PMID:15123592

  10. Predicting Student Performance in Sonographic Scanning Using Spatial Ability as an Ability Determinant of Skill Acquisition

    ERIC Educational Resources Information Center

    Clem, Douglas Wayne

    2012-01-01

    Spatial ability refers to an individual's capacity to visualize and mentally manipulate three dimensional objects. Since sonographers manually manipulate 2D and 3D sonographic images to generate multi-viewed, logical, sequential renderings of an anatomical structure, it can be assumed that spatial ability is central to the perception and…

  11. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson’s Disease

    PubMed Central

    Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R.; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J. A.

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson’s disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end-of-dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested to improve the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head and to reduce their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability. PMID:28659862

  12. Medical 3D Printing for the Radiologist

    PubMed Central

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A.; Cai, Tianrun; Kumamaru, Kanako K.; George, Elizabeth; Wake, Nicole; Caterson, Edward J.; Pomahac, Bohdan; Ho, Vincent B.; Grant, Gerald T.

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26562233

  14. TLS for generating multi-LOD of 3D building model

    NASA Astrophysics Data System (ADS)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    The popularity of Terrestrial Laser Scanners (TLS) for capturing three-dimensional (3D) objects has led to their wide use in various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualizing objects in a city environment in 3D can be useful for many applications; however, different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four levels of detail (LOD). This research explores the advantages of TLS for capturing buildings and for modelling the resulting point cloud. TLS is used to capture all the building details needed to generate multiple LODs. In previous works, this task usually involved the integration of several sensors; in this research, the point cloud from TLS alone is processed to generate the LOD3 model, and LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guide to the process of generating multi-LOD 3D building models starting from LOD3 using TLS. Lastly, the visualization of the multi-LOD model is also shown.

  15. Generation of three-dimensional retinal organoids expressing rhodopsin and S- and M-cone opsins from mouse stem cells.

    PubMed

    Ueda, Kaori; Onishi, Akishi; Ito, Shin-Ichiro; Nakamura, Makoto; Takahashi, Masayo

    2018-01-22

    Three-dimensional retinal organoids can be differentiated from embryonic stem cells/induced pluripotent stem cells (ES/iPS cells) under defined medium conditions. We modified the serum-free floating culture of embryoid body-like aggregates with quick reaggregation (SFEBq) culture procedure to obtain retinal organoids expressing more rod photoreceptors and S- and M-cone opsins. Retinal organoids differentiated from mouse Nrl-eGFP iPS cells were cultured in various media during photoreceptor development. To promote rod photoreceptor development, organoids were maintained in media containing 9-cis retinoic acid (9cRA). To obtain retinal organoids with M-opsin expression, organoids were cultured in medium with 1% fetal bovine serum (FBS) supplemented with T3, BMP4, and DAPT. Section immunohistochemistry was performed to visualize the expression of photoreceptor markers. In three-dimensional (3D) retinas exposed to 9cRA, rhodopsin was expressed earlier and S-cone opsins were suppressed. We could maintain 3D retinas up to DD 35 in culture media with 1% FBS. The 3D retinas expressed rhodopsin and S- and M-opsins, but most cone photoreceptors expressed either S- or M-opsins. By modifying culture conditions in the SFEBq protocol, we obtained rod-dominated 3D retinas and S- and M-opsin-expressing 3D retinas. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. High-resolution three-dimensional visualization of the rat spinal cord microvasculature by synchrotron radiation micro-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jianzhong; Cao, Yong; Wu, Tianding

    2014-10-15

    Purpose: Understanding the three-dimensional (3D) morphology of the spinal cord microvasculature has been limited by the lack of an effective high-resolution imaging technique. In this study, synchrotron radiation microcomputed tomography (SRµCT), a novel imaging technique based on absorption imaging, was evaluated with regard to the detection of the 3D morphology of the rat spinal cord microvasculature. Methods: Ten Sprague-Dawley rats were used in this ex vivo study. After contrast agent perfusion, their spinal cords were isolated and scanned using conventional x-rays, conventional micro-CT (CµCT), and SRµCT. Results: Based on contrast agent perfusion, the microvasculature of the rat spinal cord was clearly visualized for the first time ex vivo in 3D by means of SRµCT scanning. Compared to conventional imaging techniques, SRµCT achieved higher resolution 3D vascular imaging, with the smallest vessel that could be distinguished approximately 7.4 μm in diameter. Additionally, a 3D pseudocolored image of the spinal cord microvasculature was generated in a single session of SRµCT imaging, which was conducive to detailed observation of the vessel morphology. Conclusions: The results of this study indicated that SRµCT scanning could provide higher resolution images of the vascular network of the spinal cord. This modality also has the potential to serve as a powerful imaging tool for the investigation of morphology changes in the 3D angioarchitecture of the neurovasculature in preclinical research.

  17. Correction techniques for depth errors with stereo three-dimensional graphic displays

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Holden, Anthony; Williams, Steven P.

    1992-01-01

    Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis techniques have proved effective for displaying complex information in a natural way to enhance situational awareness and to improve pilot/vehicle performance. In such displays, the display designer must map the depths in the real world to the depths available with the stereo display system. However, empirical data have shown that the human subject does not perceive the information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information that is embedded in stereo 3-D displays because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on the viewer location. The goal of this research was to provide two correction techniques: the first corrects the original visual-scene-to-DVV mapping based on human perception errors, and the second (based on head-position sensor input data) corrects for errors induced by head movements. Empirical data are presented to validate both correction techniques. A combination of the two correction techniques effectively eliminates the distortions of depth information embedded in stereo 3-D displays.
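
    The scene-to-DVV mapping described above, with a perceptual correction applied on top, can be sketched as follows. This is a minimal illustration, not the paper's method: the linear gain/offset form of the perceptual correction and all function names are assumptions.

```python
def world_to_dvv(z_world, world_near, world_far, dvv_near, dvv_far):
    """Linearly map a real-world depth into the display's depth-viewing volume (DVV)."""
    t = (z_world - world_near) / (world_far - world_near)
    return dvv_near + t * (dvv_far - dvv_near)

def perceptual_correction(z_dvv, gain=1.0, offset=0.0):
    """Adjust the commanded stereo depth so the perceived depth matches the target.

    A linear gain/offset model is a hypothetical stand-in for the empirically
    fitted correction derived from human perception data.
    """
    return gain * z_dvv + offset

# Map the midpoint of a 0-10 m scene into a DVV spanning -1..1:
z = world_to_dvv(5.0, 0.0, 10.0, -1.0, 1.0)  # 0.0, the DVV midpoint
```

A head-movement correction would additionally recompute the mapping from the tracked viewer position before applying the perceptual correction.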

  18. Three-dimensional contrast-enhanced magnetic resonance angiography for anterolateral thigh flap outlining: A retrospective case series of 68 patients.

    PubMed

    Jiang, Chunjing; Lin, Ping; Fu, Xiaoyan; Shu, Jiner; Li, Huimin; Hu, Xiaogang; He, Jianrong; Ding, Mingxing

    2016-08-01

    Flap transfer is increasingly used for repairing limb defects secondary to trauma or tumor, and appropriate preoperative planning plays a critical role. The present study aimed to examine the use of three-dimensional (3D) contrast-enhanced magnetic resonance angiography (CE-MRA) in evaluating the blood supply distribution and perforating branch pattern of anterolateral thigh (ALT) flaps. Bilateral donor lower limbs were scanned in 68 patients (136 limbs) using a Siemens Avanto 1.5 T magnetic resonance imaging scanner with a 3D fast low-angle shot sequence, followed by thin-slab maximum intensity projection (TS-MIP) post-processing. The lateral femoral circumflex artery (LFCA) was visualized in all patients: 101 limbs (101/136, 74.3%) were type I; 20 limbs (20/136, 14.7%) were type II; 3 limbs (3/136, 2.2%) were type III; and 12 limbs (12/136, 8.8%) were type IV. Tertiary branches were identified in 94 limbs (94/136, 69.1%). Donor flaps were outlined according to MRA TS-MIP findings in 4 patients. All flaps survived uneventfully following the transfer. In donor flap outlining, 3D CE-MRA with the TS-MIP technique allowed an accurate, direct visualization of the branching pattern and distribution profile of the LFCA supplying the ALT flap.

  19. Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo

    PubMed Central

    Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan

    2012-01-01

    Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications pointed to the fact that 3D visualizations have potential advantages compared to conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for a rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed for the segmentation of the different structures. This also allows for automatic pullback calibration. Then, according to the segmentation results, different structures are depicted with different colors to visualize the vessel wall, the stent and the guide-wire in detail. Final 3D rendering results are obtained through the use of a commercial 3D DICOM viewer. Manual analysis was used as ground-truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, 3D rendering was compared to angiography, pictures of deployed stents made available by the manufacturers, and conventional 2D imaging, corroborating the visualization results. The computation time for visualizing an entire dataset was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578
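
    The Bland-Altman limits of agreement used above to validate the automatic segmentation against manual analysis can be computed as below; a minimal pure-Python sketch, with function names and sample data chosen for illustration.

```python
def bland_altman_limits(method_a, method_b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    # Sample standard deviation of the paired differences.
    sd = (sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

# Two methods that agree perfectly have zero bias and zero-width limits:
bias, lower, upper = bland_altman_limits([3.1, 4.2, 5.0], [3.1, 4.2, 5.0])
```

In practice the paired values would be, for example, manually and automatically measured lumen areas per frame, and the limits would be plotted against the per-frame means.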

  20. Transforming Clinical Imaging Data for Virtual Reality Learning Objects

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Rosset, Antoine

    2008-01-01

    Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…

  1. 3D-Printing in Congenital Cardiology: From Flatland to Spaceland.

    PubMed

    Deferm, Sébastien; Meyns, Bart; Vlasselaers, Dirk; Budts, Werner

    2016-01-01

    Medical imaging has changed to a great extent over the past few decades. It has been revolutionized by three-dimensional (3D) imaging techniques. Despite much of modern medicine relying on 3D imaging, which can be obtained accurately, we keep on being limited by visualization of the 3D content on two-dimensional flat screens. 3D-printing of graspable models could become a feasible technique to overcome this gap. Therefore, we printed pre- and postoperative 3D-models of a complex congenital heart defect. With this example, we intend to illustrate that these models hold value in preoperative planning, postoperative evaluation of a complex procedure, communication with the patient, and education of trainees. At this moment, 3D printing only leaves a small footprint, but makes already a big impression in the domain of cardiology and cardiovascular surgery. Further studies including more patients and more validated applications are needed to streamline 3D printing in the clinical setting of daily practice.

  2. In Situ Three-Dimensional Reciprocal-Space Mapping of Diffuse Scattering Intensity Distribution and Data Analysis for Precursor Phenomenon in Shape-Memory Alloy

    NASA Astrophysics Data System (ADS)

    Cheng, Tian-Le; Ma, Fengde D.; Zhou, Jie E.; Jennings, Guy; Ren, Yang; Jin, Yongmei M.; Wang, Yu U.

    2012-01-01

    Diffuse scattering contains rich information on various structural disorders, thus providing a useful means to study the nanoscale structural deviations from the average crystal structures determined by Bragg peak analysis. Extraction of maximal information from diffuse scattering requires concerted efforts in high-quality three-dimensional (3D) data measurement, quantitative data analysis and visualization, theoretical interpretation, and computer simulations. Such an endeavor is undertaken to study the correlated dynamic atomic position fluctuations caused by thermal vibrations (phonons) in precursor state of shape-memory alloys. High-quality 3D diffuse scattering intensity data around representative Bragg peaks are collected by using in situ high-energy synchrotron x-ray diffraction and two-dimensional digital x-ray detector (image plate). Computational algorithms and codes are developed to construct the 3D reciprocal-space map of diffuse scattering intensity distribution from the measured data, which are further visualized and quantitatively analyzed to reveal in situ physical behaviors. Diffuse scattering intensity distribution is explicitly formulated in terms of atomic position fluctuations to interpret the experimental observations and identify the most relevant physical mechanisms, which help set up reduced structural models with minimal parameters to be efficiently determined by computer simulations. Such combined procedures are demonstrated by a study of phonon softening phenomenon in precursor state and premartensitic transformation of Ni-Mn-Ga shape-memory alloy.

  3. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization. It is important to use dynamic visualization techniques to reveal the variation process and trend of a geographical phenomenon vividly and comprehensively. Dealing with the challenges of dynamically visualizing both 2D and 3D spatial targets, especially across different spatial data types, requires a high-performance GIS dynamic-object rendering engine. The main approach to improving a rendering engine with vast numbers of dynamic targets relies on key high-performance GIS technologies, including in-memory computing, parallel computing, GPU computing, and high-performance algorithms. In this study, a high-performance GIS dynamic-object rendering engine is designed and implemented to solve this problem using hybrid acceleration techniques. The engine combines GPU computing, OpenGL, and high-performance algorithms with the advantage of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly with vast numbers of dynamic targets. A prototype system was developed based on SuperMap GIS iObjects. Experiments designed for large-scale spatial data visualization showed that the engine achieves high performance: rendering two-dimensional and three-dimensional dynamic objects is about 20 times faster on the GPU than on the CPU.

  4. Three-dimensional reconstruction from serial sections in PC-Windows platform by using 3D_Viewer.

    PubMed

    Xu, Yi-Hua; Lahvis, Garet; Edwards, Harlene; Pitot, Henry C

    2004-11-01

    Three-dimensional (3D) reconstruction from serial sections allows identification of objects of interest in 3D and clarifies the relationship among these objects. 3D_Viewer, developed in our laboratory for this purpose, has four major functions: image alignment, movie frame production, movie viewing, and shift-overlay image generation. Color images captured from serial sections were aligned; then the contours of objects of interest were highlighted in a semi-automatic manner. These 2D images were then automatically stacked at different viewing angles, and their composite images on a projected plane were recorded by an image transform-shift-overlay technique. These composite images are then used for the object-rotation movie display. The design considerations of the program and the procedures used for 3D reconstruction from serial sections are described. This program, with a digital image-capture system, a semi-automatic contour-highlighting method, and an automatic image transform-shift-overlay technique, greatly speeds up the reconstruction process. Since images generated by 3D_Viewer are in a general graphic format, data sharing with others is easy. 3D_Viewer is written in MS Visual Basic 6 and is obtainable from our laboratory on request.
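
    The transform-shift-overlay idea described above, where aligned section images are shifted in proportion to their depth and overlaid to form a projected view, can be sketched roughly as follows; the integer per-section shift and the max-overlay compositing rule are simplifying assumptions for illustration.

```python
def shift_overlay(sections, shift_per_section):
    """Project a stack of aligned 2D sections onto a plane by shifting each
    section horizontally in proportion to its depth, then overlaying the
    shifted images (keeping the brightest pixel where they overlap)."""
    rows = len(sections[0])
    cols = len(sections[0][0]) + shift_per_section * (len(sections) - 1)
    out = [[0] * cols for _ in range(rows)]
    for depth, section in enumerate(sections):
        dx = depth * shift_per_section  # deeper section -> larger shift
        for r in range(rows):
            for c, v in enumerate(section[r]):
                out[r][c + dx] = max(out[r][c + dx], v)
    return out

# Two one-row sections viewed at an oblique angle (shift of 1 pixel per section):
projection = shift_overlay([[[1, 0]], [[0, 2]]], 1)  # [[1, 0, 2]]
```

Rendering the same stack with a range of shift values yields the frames of an object-rotation movie.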

  5. Three-dimensional nanoscale molecular imaging by extreme ultraviolet laser ablation mass spectrometry

    PubMed Central

    Kuznetsov, Ilya; Filevich, Jorge; Dong, Feng; Woolston, Mark; Chao, Weilun; Anderson, Erik H.; Bernstein, Elliot R.; Crick, Dean C.; Rocca, Jorge J.; Menoni, Carmen S.

    2015-01-01

    Analytical probes capable of mapping molecular composition at the nanoscale are of critical importance to materials research, biology and medicine. Mass spectral imaging makes it possible to visualize the spatial organization of multiple molecular components at a sample's surface. However, it is challenging for mass spectral imaging to map molecular composition in three dimensions (3D) with submicron resolution. Here we describe a mass spectral imaging method that exploits the high 3D localization of absorbed extreme ultraviolet laser light and its fundamentally distinct interaction with matter to determine molecular composition from a volume as small as 50 zl in a single laser shot. Molecular imaging with a lateral resolution of 75 nm and a depth resolution of 20 nm is demonstrated. These results open opportunities to visualize chemical composition and chemical changes in 3D at the nanoscale. PMID:25903827

  6. Pole Figure Explorer v. 1.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Benthem, Mark H.

    2016-05-04

    This software is employed for 3D visualization of X-ray diffraction (XRD) data with functionality for slicing, reorienting, isolating and plotting of 2D color contour maps and 3D renderings of large datasets. The program makes use of the multidimensionality of textured XRD data where diffracted intensity is not constant over a given set of angular positions (as dictated by the three defined dimensional angles of phi, chi, and two-theta). Datasets are rendered in 3D with intensity as a scalar represented on a rainbow color scale. A GUI and scrolling tools, along with interactive functions via the mouse, allow for fast manipulation of these large datasets so as to perform detailed analysis of diffraction results with full dimensionality of the diffraction space.

  7. A web-based instruction module for interpretation of craniofacial cone beam CT anatomy.

    PubMed

    Hassan, B A; Jacobs, R; Scarfe, W C; Al-Rawi, W T

    2007-09-01

    To develop a web-based module for learner instruction in the interpretation and recognition of osseous anatomy on craniofacial cone-beam CT (CBCT) images. Volumetric datasets from three CBCT systems were acquired (i-CAT, NewTom 3G and AccuiTomo FPD) for various subjects using equipment-specific scanning protocols. The datasets were processed using multiple software packages to produce two-dimensional (2D) multiplanar reformatted (MPR) images (e.g. sagittal, coronal and axial) and three-dimensional (3D) visual representations (e.g. maximum intensity projection, minimum intensity projection, ray sum, surface and volume rendering). Distinct didactic modules which illustrate the principles of CBCT systems, guided navigation of the volumetric dataset, and anatomic correlation of 3D models and 2D MPR graphics were developed using a hybrid combination of web authoring and image analysis techniques. Interactive web multimedia instruction was facilitated by the use of dynamic highlighting and labelling, and rendered video illustrations, supplemented with didactic textual material. HTML and JavaScript were used extensively to integrate the educational modules. An interactive, multimedia educational tool for visualizing the morphology and interrelationships of osseous craniofacial anatomy, as depicted on CBCT MPR and 3D images, was designed and implemented. The present design of a web-based instruction module may assist radiologists and clinicians in learning how to recognize and interpret the craniofacial anatomy of CBCT-based images more efficiently.
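
    The maximum and minimum intensity projections mentioned above collapse a volumetric dataset into a 2D image by keeping the brightest (or darkest) voxel along each ray; a minimal sketch, here projecting along the slice axis, with function names chosen for illustration.

```python
def max_intensity_projection(volume):
    """Collapse a z-stack of 2D slices by keeping the brightest voxel along z."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(sl[r][c] for sl in volume) for c in range(cols)]
            for r in range(rows)]

def min_intensity_projection(volume):
    """Same projection, but keeping the darkest voxel along z."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[min(sl[r][c] for sl in volume) for c in range(cols)]
            for r in range(rows)]

# A two-slice, 2x2 volume:
vol = [[[1, 2], [3, 4]],
       [[5, 0], [0, 9]]]
mip = max_intensity_projection(vol)  # [[5, 2], [3, 9]]
```

A ray-sum projection would replace the `max`/`min` with a sum (or average) of the voxels along each ray.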

  8. Three dimensional characterization of laser ablation craters using high resolution X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Galmed, A. H.; du Plessis, A.; le Roux, S. G.; Hartnick, E.; Von Bergmann, H.; Maaza, M.

    2018-01-01

    Laboratory X-ray computed tomography is an emerging technology for the 3D characterization and dimensional analysis of many types of materials. In this work we demonstrate the usefulness of this characterization method for the full three-dimensional analysis of laser ablation craters, in the context of a laser-induced breakdown spectroscopy setup. Laser-induced breakdown spectroscopy relies on laser ablation for sampling the material of interest. We demonstrate here qualitatively (in images) and quantitatively (in terms of crater cone angles, depths, diameters and volume) laser ablation crater analysis in 3D for metal (aluminum) and rock (false gold ore). We show the effect of a Gaussian beam profile on the resulting crater geometry, as well as the first visual evidence of undercutting in the rock sample, most likely due to ejection of relatively large grains. The method holds promise for optimization of laser ablation setups, especially for laser-induced breakdown spectroscopy.

  9. Regular three-dimensional presentations improve in the identification of surgical liver anatomy - a randomized study.

    PubMed

    Müller-Stich, Beat P; Löb, Nicole; Wald, Diana; Bruckner, Thomas; Meinzer, Hans-Peter; Kadmon, Martina; Büchler, Markus W; Fischer, Lars

    2013-09-25

    Three-dimensional (3D) presentations enhance the understanding of complex anatomical structures. However, it has been shown that two-dimensional (2D) "key views" of anatomical structures may suffice in order to improve spatial understanding. The impact of real 3D images (3Dr) visible only with 3D glasses has not been examined yet. Contrary to 3Dr, regular 3D images apply techniques such as shadows and different grades of transparency to create the impression of 3D. This randomized study aimed to define the impact of both the addition of key views to CT images (2D+) and the use of 3Dr on the identification of liver anatomy in comparison with regular 3D presentations (3D). A computer-based teaching module (TM) was used. Medical students were randomized to three groups (2D+ or 3Dr or 3D) and asked to answer 11 anatomical questions and 4 evaluative questions. Both 3D groups had animated models of the human liver available to them which could be moved in all directions. 156 medical students (57.7% female) participated in this randomized trial. Students exposed to 3Dr and 3D performed significantly better than those exposed to 2D+ (p < 0.01, ANOVA). There were no significant differences between 3D and 3Dr and no significant gender differences (p > 0.1, t-test). Students randomized to 3D and 3Dr not only had significantly better results, but they also were significantly faster in answering the 11 anatomical questions when compared to students randomized to 2D+ (p < 0.03, ANOVA). Whether or not "key views" were used had no significant impact on the number of correct answers (p > 0.3, t-test). This randomized trial confirms that regular 3D visualization improves the identification of liver anatomy.

  10. Leaving the structural ivory tower, assisted by interactive 3D PDF.

    PubMed

    Kumar, Pravin; Ziegler, Alexander; Grahn, Alexander; Hee, Chee Seng; Ziegler, Andreas

    2010-08-01

    The ability to embed interactive three-dimensional (3D) models into electronic publications in portable document format (PDF) greatly enhances the accessibility of molecular structures. Here, we report advances in this procedure and discuss what is needed to develop this format into a truly useful tool for the structural biology community as well as for readers who are less well trained in molecular visualization. Copyright 2010 Elsevier Ltd. All rights reserved.

  11. [Preliminary construction of three-dimensional visual educational system for clinical dentistry based on world wide web webpage].

    PubMed

    Hu, Jian; Xu, Xiang-yang; Song, En-min; Tan, Hong-bao; Wang, Yi-ning

    2009-09-01

    To establish a new visual educational system of virtual reality for clinical dentistry based on world wide web (WWW) webpages in order to provide more three-dimensional multimedia resources to dental students and an online three-dimensional consulting system for patients. Based on computer graphics and three-dimensional webpage technologies, the software packages 3Dsmax and Webmax were adopted in the system development. In the Windows environment, the architecture of the whole system was established step by step, including three-dimensional model construction, three-dimensional scene setup, transplanting the three-dimensional scene into the webpage, reediting the virtual scene, realization of interactions within the webpage, initial testing, and necessary adjustments. Five cases of three-dimensional interactive webpages for clinical dentistry were completed. The three-dimensional interactive webpages could be accessed through a web browser on a personal computer, and users could interact with them by rotating, panning and zooming the virtual scene. It is technically feasible to implement the visual educational system of virtual reality for clinical dentistry based on WWW webpages. Information related to clinical dentistry can be transmitted properly, visually and interactively through three-dimensional webpages.

  12. Interactive 3-D graphics workstations in stereotaxy: clinical requirements, algorithms, and solutions

    NASA Astrophysics Data System (ADS)

    Ehricke, Hans-Heino; Daiber, Gerhard; Sonntag, Ralf; Strasser, Wolfgang; Lochner, Mathias; Rudi, Lothar S.; Lorenz, Walter J.

    1992-09-01

    In stereotactic treatment planning the spatial relationships between a variety of objects have to be taken into account in order to avoid destruction of vital brain structures and rupture of vasculature. The visualization of these highly complex relations may be supported by 3-D computer graphics methods. In this context the three-dimensional display of the intracranial vascular tree and additional objects, such as neuroanatomy, pathology, stereotactic devices, or isodose surfaces, is of high clinical value. We report an advanced rendering method for a depth-enhanced maximum intensity projection from magnetic resonance angiography (MRA) and a walk-through approach to the analysis of MRA volume data. Furthermore, various methods for a multiple-object 3-D rendering in stereotaxy are discussed. The development of advanced applications in medical imaging can hardly be successful if image acquisition problems are disregarded. We put particular emphasis on the use of conventional MRI and MRA for stereotactic guidance. The problem of MR distortion is discussed and a novel three-dimensional approach to the quantification and correction of the distortion patterns is presented. Our results suggest that the sole use of MR for stereotactic guidance is highly practical. The true three-dimensionality of the acquired datasets opens up new perspectives for stereotactic treatment planning. For the first time it is possible now to integrate all the necessary information into 3-D scenes, thus enabling an interactive 3-D planning.

  13. Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.

    PubMed

    Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong

    2006-04-01

    This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions are restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis was used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring that involves both real and rendered bronchoscope images is conducted.

  14. Bringing macromolecular machinery to life using 3D animation.

    PubMed

    Iwasa, Janet H

    2015-04-01

    Over the past decade, there has been a rapid rise in the use of three-dimensional (3D) animation to depict molecular and cellular processes. Much of the growth in molecular animation has been in the educational arena, but increasingly, 3D animation software is finding its way into research laboratories. In this review, I will discuss a number of ways in which 3D animation software can play a valuable role in visualizing and communicating macromolecular structures and dynamics. I will also consider the challenges of using animation tools within the research sphere. Copyright © 2015. Published by Elsevier Ltd.

  15. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    PubMed Central

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-01-01

    Background Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818

  16. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  17. Influence of eddy current, Maxwell and gradient field corrections on 3D flow visualization of 3D CINE PC-MRI data.

    PubMed

    Lorenz, Ramona; Bock, Jelena; Snyder, Jeff; Korvink, Jan G; Jung, Bernd A; Markl, Michael

    2014-07-01

    The measurement of velocities based on phase contrast MRI can be subject to different phase offset errors which can affect the accuracy of velocity data. The purpose of this study was to determine the impact of these inaccuracies on three-dimensional visualization and to evaluate different correction strategies. Phase contrast MRI was performed on a 3 T system (Siemens Trio) for in vitro (curved/straight tube models; venc: 0.3 m/s) and in vivo (aorta/intracranial vasculature; venc: 1.5/0.4 m/s) data. For comparison of the impact of different magnetic field gradient designs, in vitro data were additionally acquired on a wide bore 1.5 T system (Siemens Espree). Different correction methods were applied to correct for eddy currents, Maxwell terms, and gradient field inhomogeneities. The application of phase offset correction methods led to an improvement in three-dimensional particle trace visualization and particle count. The most pronounced differences were found for in vivo/in vitro data (68%/82% more particle traces) acquired with a low venc (0.4 m/s/0.3 m/s, respectively). In vivo data acquired with high venc (1.5 m/s) showed noticeable but only minor improvement. This study suggests that the correction of phase offset errors can be important for a more reliable visualization of particle traces but is strongly dependent on the velocity sensitivity, object geometry, and gradient coil design. Copyright © 2013 Wiley Periodicals, Inc.
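    To make the idea of a phase offset correction concrete, here is a minimal sketch, not the authors' pipeline: a first-order (planar) phase offset, such as one left by residual eddy currents, is estimated by least-squares fitting a plane to the phase of static tissue and subtracted from the whole velocity-encoded image. The linear-offset model, the static-tissue mask, and all sizes and values below are illustrative assumptions.

    ```python
    import numpy as np

    def correct_eddy_current_offset(phase, static_mask):
        """Fit a first-order (planar) phase offset to static-tissue voxels
        and subtract it from the whole phase image."""
        ny, nx = phase.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        # Design matrix for phi = a*x + b*y + c, fitted on static voxels only
        A = np.column_stack([xx[static_mask], yy[static_mask],
                             np.ones(static_mask.sum())])
        coeffs, *_ = np.linalg.lstsq(A, phase[static_mask], rcond=None)
        offset = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
        return phase - offset

    # Synthetic example: true velocity phase plus a linear eddy-current offset
    ny, nx = 64, 64
    yy, xx = np.mgrid[0:ny, 0:nx]
    offset = 0.002 * xx - 0.001 * yy + 0.05
    flow = np.zeros((ny, nx))
    flow[28:36, 28:36] = 0.5          # small "vessel" with constant velocity phase
    measured = flow + offset
    static = flow == 0                # static-tissue mask (known here by construction)
    corrected = correct_eddy_current_offset(measured, static)
    ```

    After correction, the static background returns to zero phase and the vessel recovers its true value; real data would additionally need Maxwell-term and gradient-field corrections, which are model-based rather than image-fitted.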

  18. Underwater behavior of sperm whales off Kaikoura, New Zealand, as revealed by a three-dimensional hydrophone array.

    PubMed

    Miller, Brian; Dawson, Stephen; Vennell, Ross

    2013-10-01

    Observations are presented of the vocal behavior and three-dimensional (3D) underwater movements of sperm whales measured with a passive acoustic array off the coast of Kaikoura, New Zealand. Visual observations and vocal behaviors of whales were used to divide dive tracks into different phases, and depths and movements of whales are reported for each of these phases. Diving depths and movement information from 75 3D tracks of whales in Kaikoura are compared to one- and two-dimensional tracks of whales studied in other oceans. While diving, whales in Kaikoura had a mean swimming speed of 1.57 m/s and, on average, dived to a depth of 427 m (SD = 117 m), spending most of their time at depths between 300 and 600 m. Creak vocalizations, assumed to be the prey capture phase of echolocation, occurred throughout the water column from sea surface to sea floor, but most occurred at depths of 400-550 m. Three-dimensional tracking revealed several different "foraging" strategies, including active chasing of prey, lining up slow-moving or unsuspecting prey, and foraging on demersal or benthic prey. These movements provide the first 3D descriptions of the underwater behavior of whales at Kaikoura.

  19. Web3DMol: interactive protein structure visualization based on WebGL.

    PubMed

    Shi, Maoxiang; Gao, Juntao; Zhang, Michael Q

    2017-07-03

    A growing number of web-based databases and tools for protein research are being developed. There is now a widespread need for visualization tools to present the three-dimensional (3D) structure of proteins in web browsers. Here, we introduce our 3D modeling program, Web3DMol, a web application focusing on protein structure visualization in modern web browsers. Users submit a PDB identification code or select a PDB archive from their local disk, and Web3DMol will display and allow interactive manipulation of the 3D structure. Featured functions, such as sequence plot, fragment segmentation, measure tool and meta-information display, are offered for users to gain a better understanding of protein structure. Easy-to-use APIs are available for developers to reuse and extend Web3DMol. Web3DMol can be freely accessed at http://web3dmol.duapp.com/, and the source code is distributed under the MIT license. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. Visualization resources for Iowa State University and the Iowa DOT : an automated design model to simulator converter.

    DOT National Transportation Integrated Search

    2012-11-01

    This project developed an automatic conversion software tool that takes as input an Iowa Department of Transportation (DOT) MicroStation three-dimensional (3D) design file and converts it into a form that can be used by the University of Iowas...

  1. Theoretical Analysis of Novel Quasi-3D Microscopy of Cell Deformation

    PubMed Central

    Qiu, Jun; Baik, Andrew D.; Lu, X. Lucas; Hillman, Elizabeth M. C.; Zhuang, Zhuo; Guo, X. Edward

    2012-01-01

    A novel quasi-three-dimensional (quasi-3D) microscopy technique has been developed to enable visualization of a cell under dynamic loading in two orthogonal planes simultaneously. The three-dimensional (3D) dynamics of the mechanical behavior of a cell under fluid flow can be examined at a high temporal resolution. In this study, a numerical model of a fluorescently dyed cell was created in 3D space, and the cell was subjected to uniaxial deformation or unidirectional fluid shear flow via finite element analysis (FEA). Therefore, the intracellular deformation in the simulated cells was exactly prescribed. Two-dimensional fluorescent images simulating the quasi-3D technique were created from the cell and its deformed states in 3D space using a point-spread function (PSF) and a convolution operation. These simulated original and deformed images were processed by a digital image correlation technique to calculate quasi-3D-based intracellular strains. The calculated strains were compared to the prescribed strains, thus providing a theoretical basis for assessing the accuracy of quasi-3D and wide-field microscopy-based intracellular strain measurements against the true 3D strains. The signal-to-noise ratio (SNR) of the simulated quasi-3D images was also modulated using additive Gaussian noise, and a minimum SNR of 12 was needed to recover the prescribed strains using digital image correlation. Our computational study demonstrated that quasi-3D strain measurements closely recovered the true 3D strains in uniform and fluid flow cellular strain states to within 5% strain error. PMID:22707985
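    The image-simulation step described above (blur a known object with a PSF, then add Gaussian noise at a controlled SNR) can be sketched as follows. This is a generic illustration rather than the authors' code: the Gaussian PSF, the definition of SNR as the ratio of signal to noise standard deviations, and all sizes are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simulate_fluorescence_image(obj, psf_sigma, snr, seed=0):
        """Blur a known object with a Gaussian PSF, then add Gaussian noise
        so that (signal std) / (noise std) equals the requested SNR."""
        blurred = gaussian_filter(obj, sigma=psf_sigma)
        noise_sigma = blurred.std() / snr
        rng = np.random.default_rng(seed)
        return blurred, blurred + rng.normal(0.0, noise_sigma, obj.shape)

    # Idealized "cell": a bright square on a dark background
    obj = np.zeros((64, 64))
    obj[24:40, 24:40] = 1.0
    blurred, noisy = simulate_fluorescence_image(obj, psf_sigma=2.0, snr=12)
    measured_snr = blurred.std() / (noisy - blurred).std()
    ```

    A pair of such images (original and deformed) would then be fed to digital image correlation; the abstract reports that an SNR of at least 12 was needed for the correlation to recover the prescribed strains.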

  2. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of medical image corresponding to observer's viewing direction is updated automatically using mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
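    The registration step above builds on the classic iterative closest point (ICP) algorithm. A minimal point-to-point ICP, with brute-force nearest neighbours and a Kabsch/SVD rigid fit, might look like the sketch below; the warm-start and point-cloud-selection refinements described in the abstract are omitted, and the test data are synthetic.

    ```python
    import numpy as np

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cd - R @ cs

    def icp(src, dst, iters=20):
        """Iteratively re-pair points with nearest neighbours and re-fit."""
        cur = src.copy()
        for _ in range(iters):
            # Brute-force nearest-neighbour correspondences (O(N^2))
            idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
            R, t = best_rigid_transform(cur, dst[idx])
            cur = cur @ R.T + t
        return cur

    # Synthetic check: recover a small rigid motion of a random point cloud
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(100, 3))
    theta = 0.1
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    target = pts @ R_true.T + np.array([0.10, -0.05, 0.08])
    aligned = icp(pts, target)
    ```

    Production implementations replace the brute-force pairing with a k-d tree and, as in the system above, seed ICP with a warm start so that it converges from larger initial misalignments.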

  3. Entrainment and high-density three-dimensional mapping in right atrial macroreentry provide critical complementary information: Entrainment may unmask "visual reentry" as passive.

    PubMed

    Pathik, Bhupesh; Lee, Geoffrey; Nalliah, Chrishan; Joseph, Stephen; Morton, Joseph B; Sparks, Paul B; Sanders, Prashanthan; Kistler, Peter M; Kalman, Jonathan M

    2017-10-01

    With the recent advent of high-density (HD) 3-dimensional (3D) mapping, the utility of entrainment is uncertain. However, the limitations of visual representation and interpretation of these high-resolution 3D maps are unclear. The purpose of this study was to determine the strengths and limitations of both HD 3D mapping and entrainment mapping during mapping of right atrial macroreentry. Fifteen patients were studied. The number and type of circuits accounting for ≥90% of the tachycardia cycle length using HD 3D mapping were verified using systematic entrainment mapping. Entrainment sites with an unexpectedly long postpacing interval despite proximity to the active circuit were evaluated. Based on HD 3D mapping, 27 circuits were observed: 12 peritricuspid, 2 upper loop reentry, 10 lower loop reentry, and 3 lateral wall circuits. With entrainment, 17 of the 27 circuits were active: all 12 peritricuspid and 2 upper loop reentry. However, lower loop reentry was confirmed in only 3 of 10, and none of the 3 lateral wall circuits were present. Mean percentage of tachycardia cycle length covered by active circuits was 98% ± 1% vs 97% ± 2% for passive circuits (P = .09). None of the 345 entrainment runs terminated tachycardia or changed tachycardia mechanism. In 8 of 15 patients, 13 examples of unexpectedly long postpacing interval were observed at entrainment sites located distal to localized zones of slow conduction seen on HD 3D mapping. Using HD 3D mapping, "visual reentry" may be due to passive circuitous propagation rather than a critical reentrant circuit. HD 3D mapping provides new insights into regional conduction and helps explain unusual entrainment phenomena. Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  4. A Case Study in Astronomical 3D Printing: The Mysterious η Carinae

    NASA Astrophysics Data System (ADS)

    Madura, Thomas I.

    2017-05-01

    Three-dimensional (3D) printing moves beyond interactive 3D graphics and provides an excellent tool for both visual and tactile learners, since 3D printing can now easily communicate complex geometries and full color information. Some limitations of interactive 3D graphics are also alleviated by 3D printable models, including issues of limited software support, portability, accessibility, and sustainability. We describe the motivations, methods, and results of our work on using 3D printing (1) to visualize and understand the η Car Homunculus nebula and central binary system and (2) for astronomy outreach and education, specifically, with visually impaired students. One new result we present is the ability to 3D print full-color models of η Car’s colliding stellar winds. We also demonstrate how 3D printing has helped us communicate our improved understanding of the detailed structure of η Car’s Homunculus nebula and central binary colliding stellar winds, and their links to each other. Attached to this article are full-color 3D printable files of both a red-blue Homunculus model and the η Car colliding stellar winds at orbital phase 1.045. 3D printing could prove to be vital to how astronomers reach out and share their work with each other, the public, and new audiences.

  5. Study of the structure of 3-D composites based on carbon nanotubes in bovine serum albumin matrix by X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Ignatov, D.; Zhurbina, N.; Gerasimenko, A.

    2017-01-01

    3-D composites are widely used in tissue engineering. A comprehensive analysis by X-ray microtomography was conducted to study the structure of the 3-D composites, consisting of scanning, image reconstruction from shadow projections, two-dimensional and three-dimensional visualization of the reconstructed images, and quantitative analysis of the samples. Experimental samples of composites were formed by laser vaporization of an aqueous dispersion of BSA and single-walled (SWCNTs) or multi-walled (MWCNTs) carbon nanotubes. The samples have a homogeneous structure over the entire volume; the porosities of the 3-D composites based on SWCNTs and MWCNTs were 16.44% and 28.31%, respectively, and the average pore diameters were 45 μm and 93 μm, respectively. 3-D composites based on carbon nanotubes in a bovine serum albumin matrix can be used in tissue engineering of bone and cartilage, providing cell proliferation and blood vessel sprouting.
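    The porosity figures quoted above reduce to voxel counting on a binarized reconstruction: porosity is the pore-voxel fraction of the total volume. A generic sketch on a synthetic random volume (not the authors' data; the 1 = material / 0 = pore convention and the 20% pore probability are assumptions):

    ```python
    import numpy as np

    # Synthetic binarized micro-CT reconstruction: 1 = composite material, 0 = pore
    rng = np.random.default_rng(42)
    volume = (rng.random((64, 64, 64)) > 0.2).astype(np.uint8)

    # Porosity = pore voxels as a percentage of all voxels
    porosity = 100.0 * np.count_nonzero(volume == 0) / volume.size
    print(f"porosity: {porosity:.2f}%")
    ```

    Real microtomography workflows precede this count with reconstruction and a thresholding step, whose choice of threshold directly shifts the reported porosity.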

  6. Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields.

    PubMed

    Stevens, Andrew H; Butkiewicz, Thomas; Ware, Colin

    2017-01-01

    Three-dimensional vector fields are common datasets throughout the sciences. Visualizing these fields is inherently difficult due to issues such as visual clutter and self-occlusion. Cutting planes are often used to overcome these issues by presenting more manageable slices of data. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. This paper presents a quantitative human factors study that evaluates static monoscopic depth and orientation cues in the context of cutting plane glyph designs for exploring and analyzing 3D flow fields. The goal of the study was to ascertain the relative effectiveness of various techniques for portraying the direction of flow through a cutting plane at a given point, and to identify the visual cues and combinations of cues involved, and how they contribute to accurate performance. It was found that increasing the dimensionality of line-based glyphs into tubular structures enhances their ability to convey orientation through shading, and that increasing their diameter intensifies this effect. These tube-based glyphs were also less sensitive to visual clutter issues at higher densities. Adding shadows to lines was also found to increase perception of flow direction. Implications of the experimental results are discussed and extrapolated into a number of guidelines for designing more perceptually effective glyphs for 3D vector field visualizations.

  7. Journey to the centre of the cell: Virtual reality immersion into scientific data.

    PubMed

    Johnston, Angus P R; Rae, James; Ariotti, Nicholas; Bailey, Benjamin; Lilja, Andrew; Webb, Robyn; Ferguson, Charles; Maher, Sheryl; Davis, Thomas P; Webb, Richard I; McGhee, John; Parton, Robert G

    2018-02-01

    Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in 2 dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a "real" cell. Early testing of this immersive environment indicates a significant improvement in students' understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. ePlant and the 3D data display initiative: integrative systems biology on the world wide web.

    PubMed

    Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J

    2011-01-10

    Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed "ePlant" (http://bar.utoronto.ca/eplant) - a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the "3D Data Display Initiative" (http://3ddi.org).

  9. Navigation-aided visualization of lumbosacral nerves for anterior sacroiliac plate fixation: a case report.

    PubMed

    Takao, Masaki; Nishii, Takashi; Sakai, Takashi; Sugano, Nobuhiko

    2014-06-01

    Anterior sacroiliac joint plate fixation for unstable pelvic ring fractures avoids soft tissue problems in the buttocks; however, the lumbosacral nerves lie in close proximity to the sacroiliac joint and may be injured during the procedure. A 49-year-old woman with a type C pelvic ring fracture was treated with an anterior sacroiliac plate using a computed tomography (CT)-three-dimensional (3D)-fluoroscopy matching navigation system, which visualized the lumbosacral nerves as well as the iliac and sacral bones. We used a flat panel detector 3D C-arm, which made it possible to superimpose our preoperative CT-based plan on the intra-operative 3D-fluoroscopic images. No postoperative complications were noted. Intra-operative lumbosacral nerve visualization using computer navigation was useful to recognize the 'at-risk' area for nerve injury during anterior sacroiliac plate fixation. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Head-Up Auditory Displays for Traffic Collision Avoidance System Advisories: A Preliminary Investigation

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.

  11. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  12. Disentangling the intragroup HI in Compact Groups of galaxies by means of X3D visualization

    NASA Astrophysics Data System (ADS)

    Verdes-Montenegro, Lourdes; Vogt, Frederic; Aubery, Claire; Duret, Laetitie; Garrido, Julián; Sánchez, Susana; Yun, Min S.; Borthakur, Sanchayeeta; Hess, Kelley; Cluver, Michelle; Del Olmo, Ascensión; Perea, Jaime

    2017-03-01

    As an extreme kind of environment, Hickson Compact Groups (HCGs) have been shown to be very complex systems. HI-VLA observations revealed an intricate network of HI tails and bridges, tracing pre-processing through extreme tidal interactions. We found HCGs to show a large HI deficiency, supporting an evolutionary sequence in which gas-rich groups transform via tidal interactions and ISM (interstellar medium) stripping into gas-poor systems. We also detected a diffuse HI component in the groups, increasing with evolutionary phase, although with uncertain distribution. The complex net of HI detected with the VLA hence seems as puzzling as the missing component. In this talk we revisit the existing VLA information on the HI distribution and kinematics of HCGs by means of X3D visualization. X3D constitutes a powerful tool to extract the most from HI data cubes and a means of simplifying and easing access to data visualization and publication via three-dimensional (3-D) diagrams.

  13. Discussion on the 3D visualizing of 1:200 000 geological map

    NASA Astrophysics Data System (ADS)

    Wang, Xiaopeng

    2018-01-01

    Using United States National Aeronautics and Space Administration Shuttle Radar Topography Mission (SRTM) terrain data as the digital elevation model (DEM), overlaid with a scanned 1:200 000-scale geological map, and a program written in the C# language using Microsoft Direct3D, the author realized the three-dimensional visualization of the standard-division geological map. Users can inspect the regional geological content from arbitrary angles, with rotation and roaming, and can examine the comprehensive stratigraphic column, map sections, and legend at any moment. This provides an intuitive analysis tool for geological practitioners to perform structural analysis with the assistance of landforms, to plan field exploration routes, etc.

  14. Three-dimensional imaging from a unidirectional hologram: wide-viewing-zone projection type.

    PubMed

    Okoshi, T; Oshima, K

    1976-04-01

    In ordinary holography reconstructing a virtual image, the hologram must be wider than either the visual field or the viewing zone. In this paper, an economical method of recording a wide-viewing-zone wide-visual-field 3-D holographic image is proposed. In this method, many mirrors are used to collect object waves onto a small hologram. In the reconstruction, a real image from the hologram is projected onto a horizontally direction-selective stereoscreen through the same mirrors. In the experiment, satisfactory 3-D images have been observed from a wide viewing zone. The optimum design and information reduction techniques are also discussed.

  15. Documentation and analysis of traumatic injuries in clinical forensic medicine involving structured light three-dimensional surface scanning versus photography.

    PubMed

    Shamata, Awatif; Thompson, Tim

    2018-05-10

    Non-contact three-dimensional (3D) surface scanning has been applied in forensic medicine and has been shown to mitigate shortcomings of traditional documentation methods. The aim of this paper is to assess the efficiency of structured light 3D surface scanning in recording traumatic injuries of live cases in clinical forensic medicine. The work was conducted at the Medico-Legal Centre in Benghazi, Libya. A structured light 3D surface scanner and an ordinary digital camera with a close-up lens were used to record the injuries, producing 3D and two-dimensional (2D) documents of the same traumas. Two different types of comparison were performed. Firstly, the 3D wound documents were compared to the 2D documents by subjective visual assessment. Secondly, 3D wound measurements were compared to conventional measurements to determine whether there was a statistically significant difference between them; for this, the Friedman test was used. The study established that the 3D wound documents had extra features over the 2D documents, and the 3D scanning method was able to overcome the main deficiencies of digital photography. No statistically significant difference was found between the 3D and conventional wound measurements, and Spearman's correlation established a strong, positive correlation between the 3D and conventional measurement methods. Although the 3D surface scanning of injuries in live subjects faced some difficulties, the 3D results were appreciated and the validity of 3D measurements based on structured light 3D scanning was established. Further work will be carried out in forensic pathology to scan open injuries with depth information. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
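    The statistical comparison described above (a Friedman test across repeated measurements of the same wounds, plus a Spearman correlation between two measurement methods) can be sketched with SciPy. The six wounds and all values below are hypothetical, invented purely to show the shape of the analysis:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical wound lengths (mm): each row is one injury measured three ways
    # (columns: conventional ruler, 3D scan, photo-based)
    measurements = np.array([
        [12.1, 12.4, 11.9],
        [25.3, 25.0, 25.6],
        [8.7,  8.9,  8.6],
        [15.2, 15.5, 15.0],
        [30.1, 29.8, 30.4],
        [10.5, 10.3, 10.7],
    ])

    # Friedman test: is there a systematic difference between the three methods?
    stat, p = stats.friedmanchisquare(*measurements.T)

    # Spearman correlation between conventional and 3D measurements
    rho, p_rho = stats.spearmanr(measurements[:, 0], measurements[:, 1])
    print(f"Friedman p={p:.3f}, Spearman rho={rho:.2f}")
    ```

    A non-significant Friedman p together with a high positive rho mirrors the paper's conclusion: no systematic difference between methods, with strongly correlated values.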

  16. AstroBlend: Visualization package for use with Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2015-12-01

    AstroBlend is a visualization package for use in the three dimensional animation and modeling software, Blender. It reads data in via a text file or can use pre-fab isosurface files stored as OBJ or Wavefront files. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.

  17. The Generation of Novel MR Imaging Techniques to Visualize Inflammatory/Degenerative Mechanisms and the Correlation of MR Data with 3D Microscopic Changes

    DTIC Science & Technology

    2012-09-01

    structures that are impossible with current methods. Using techniques to concurrently stain and three-dimensionally analyze many cell types and... new methods allowed us to visualize structures in these damaged samples that were not visible using conventional techniques, allowing us to modify our... Award number: W81XWH-11-1-0705.

  18. Graphics and Flow Visualization of Computer Generated Flow Fields

    NASA Technical Reports Server (NTRS)

    Kathong, M.; Tiwari, S. N.

    1987-01-01

    Flow field variables are visualized using color representations described on surfaces that are interpolated from computational grids and transformed to digital images. Techniques for displaying two- and three-dimensional flow field solutions are addressed. The transformations and the use of an interactive graphics program for CFD flow field solutions, called PLOT3D, which runs on the color graphics IRIS workstation, are described. An overview of the IRIS workstation is also given.

  19. Photogrammetry of the three-dimensional shape and texture of a nanoscale particle using scanning electron microscopy and free software.

    PubMed

    Gontard, Lionel C; Schierholz, Roland; Yu, Shicheng; Cintas, Jesús; Dunin-Borkowski, Rafal E

    2016-10-01

    We apply photogrammetry in a scanning electron microscope (SEM) to study the three-dimensional shape and surface texture of a nanoscale LiTi2(PO4)3 particle. We highlight the fact that the technique can be applied non-invasively in any SEM using free software (freeware) and does not require special sample preparation. Three-dimensional information is obtained in the form of a surface mesh, with the texture of the sample stored as a separate two-dimensional image (referred to as a UV Map). The mesh can be used to measure parameters such as surface area, volume, moment of inertia and center of mass, while the UV map can be used to study the surface texture using conventional image processing techniques. We also illustrate the use of 3D printing to visualize the reconstructed model. Copyright © 2016 Elsevier B.V. All rights reserved.
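    The mesh-derived quantities mentioned above (surface area, volume, center of mass) can be computed directly from the triangle list via the divergence theorem: each outward-oriented face forms a signed tetrahedron with the origin. A self-contained sketch, verified on a unit cube rather than real SEM photogrammetry output:

    ```python
    import numpy as np

    def mesh_properties(vertices, faces):
        """Surface area, enclosed volume and center of mass of a closed,
        consistently outward-oriented triangle mesh (signed tetrahedra
        against the origin, via the divergence theorem)."""
        v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
        area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()
        signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0
        volume = signed.sum()
        # Tetrahedron centroid is (origin + v0 + v1 + v2) / 4
        center = ((v0 + v1 + v2) / 4.0 * signed[:, None]).sum(axis=0) / volume
        return area, volume, center

    # Unit cube with outward-facing triangles as a sanity check
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
    tris = np.array([[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7],
                     [0, 1, 5], [0, 5, 4], [2, 3, 7], [2, 7, 6],
                     [0, 4, 7], [0, 7, 3], [1, 6, 5], [1, 2, 6]])
    area, volume, center = mesh_properties(verts, tris)
    print(round(area, 6), round(volume, 6), np.round(center, 6))  # → 6.0 1.0 [0.5 0.5 0.5]
    ```

    The same function applied to a photogrammetric surface mesh yields the parameters listed in the abstract, provided the mesh is watertight and consistently oriented; from volume and center of mass, moments of inertia follow by the analogous per-tetrahedron sums.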

  20. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azpiroz, J.; Krafft, J.; Cadena, M.

    2006-09-08

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity in characterizing the morphology of the spine in dogs. This technique is considered particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. Three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German shepherd dog's spinal cord were acquired to generate image stacks, which were digitally processed and arranged into a volume image. All images were acquired using standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that are not possible to observe with two-dimensional images. The combination of an imaging modality like CT with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.

  1. Satisfactory rate of post-processing visualization of fetal cerebral axial, sagittal, and coronal planes from three-dimensional volumes acquired in routine second trimester ultrasound practice by sonographers of peripheral centers.

    PubMed

    Rizzo, Giuseppe; Pietrolucci, Maria Elena; Capece, Giuseppe; Cimmino, Ernesto; Colosi, Enrico; Ferrentino, Salvatore; Sica, Carmine; Di Meglio, Aniello; Arduini, Domenico

    2011-08-01

    The aim of this study was to evaluate the feasibility of visualizing central nervous system (CNS) diagnostic planes from three-dimensional (3D) brain volumes obtained in ultrasound facilities with no specific experience in fetal neurosonography. Five sonographers prospectively recorded transabdominal 3D CNS volumes, starting from an axial approach, on 500 consecutive pregnancies at 19-24 weeks of gestation undergoing routine ultrasound examination. Volumes were sent to the referral center (Department of Obstetrics and Gynecology, Università Roma Tor Vergata, Italy), and two independent reviewers with experience in 3D ultrasound assessed their quality in the display of axial, coronal, and sagittal planes. CNS volumes were acquired in 491/500 pregnancies (98.2%). The two reviewers judged the images satisfactory at rates of 95.1% and 97.14% for axial planes, 73.72% and 87.16% for coronal planes, and 78.41% and 94.29% for sagittal planes, respectively. Agreement between the two reviewers, expressed by Cohen's kappa coefficient, was >0.87 for axial planes, >0.89 for coronal planes, and >0.94 for sagittal planes. A maternal body mass index >30 reduced the probability of achieving satisfactory CNS views, while previous maternal lower abdominal surgery did not affect the quality of the reconstructed planes. CNS volumes acquired by 3D ultrasonography in peripheral centers showed a quality high enough to allow a detailed fetal neurosonogram.
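Cohen's kappa, used above for inter-reviewer agreement, corrects observed agreement for the agreement expected by chance; a small illustrative implementation (not the study's analysis code, and the sample ratings are made up):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical judgements
    (e.g. satisfactory = 1, unsatisfactory = 0 per plane)."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = set(r1) | set(r2)
    # Observed agreement: fraction of items with identical ratings.
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal category frequencies.
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

kappa = cohens_kappa([1, 1, 1, 0], [1, 1, 0, 0])   # hypothetical ratings
```

Values above roughly 0.8, like those reported in the abstract, are conventionally read as almost perfect agreement.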

  2. Tumor resection at the pelvis using three-dimensional planning and patient-specific instruments: a case series.

    PubMed

    Jentzsch, Thorsten; Vlachopoulos, Lazaros; Fürnstahl, Philipp; Müller, Daniel A; Fuchs, Bruno

    2016-09-21

    Sarcomas in the pelvis are associated with a relatively high local recurrence rate of around 30%, with inadequate surgical margins being the most important reason. However, obtaining adequate margins is particularly difficult in this anatomically demanding region. Recently, three-dimensional (3-D) planning, printed models, and patient-specific instruments (PSI) with cutting blocks have been introduced to improve precision during surgical tumor resection. This case series illustrates these modern 3-D tools in pelvic tumor surgery. The first consecutive patients with 3-D-planned tumor resection around the pelvis were included in this retrospective study at a University Hospital in 2015. Detailed information about the clinical presentation, imaging techniques, preoperative planning, intraoperative surgical procedures, and postoperative evaluation is provided for each case. The primary outcome was tumor-free resection margins as assessed by a postoperative computed tomography (CT) scan of the specimen. The secondary outcomes were precision of preoperative planning and complications. Four patients with pelvic sarcomas were included in this study. The mean follow-up was 7.8 (range, 6.0-9.0) months. The combined use of preoperative 3-D planning, 3-D-printed models, and PSI for osteotomies led to higher precision (maximal error of 0.4 cm) than conventional 3-D planning with freehand osteotomies (maximal error of 2.8 cm). Tumor-free margins were obtained where measurable (n = 3; margins were not assessable in a patient with curettage). Two insufficiency fractures were noted postoperatively. Three-dimensional planning as well as the intraoperative use of 3-D-printed models and PSI are valuable for complex sarcoma resection at the pelvis. Three-dimensionally printed models of the patient anatomy may aid visualization and precision, and PSI with cutting blocks enable very precise osteotomies for adequate resection margins.

  3. The Relationship Between Human Nucleolar Organizer Regions and Nucleoli, Probed by 3D-ImmunoFISH.

    PubMed

    van Sluis, Marjolein; van Vuuren, Chelly; McStay, Brian

    2016-01-01

    3D-immunoFISH is a valuable technique to compare the localization of DNA sequences and proteins in cells where three-dimensional structure has been preserved. As nucleoli contain a multitude of protein factors dedicated to ribosome biogenesis and form around specific chromosomal loci, 3D-immunoFISH is a particularly relevant technique for their study. In human cells, nucleoli form around transcriptionally active ribosomal gene (rDNA) arrays termed nucleolar organizer regions (NORs) positioned on the p-arms of each of the acrocentric chromosomes. Here, we provide a protocol for fixing and permeabilizing human cells grown on microscope slides such that nucleolar proteins can be visualized using antibodies and NORs visualized by DNA FISH. Antibodies against UBF recognize transcriptionally active rDNA/NORs and NOP52 antibodies provide a convenient way of visualizing the nucleolar volume. We describe a probe designed to visualize rDNA and introduce a probe comprised of NOR distal sequences, which can be used to identify or count individual NORs.

  4. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization, so that 3D models and animations can be created automatically and quickly. Having discussed the relationships between major crime types and the level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  5. Three-Dimensional Visualization with Large Data Sets: A Simulation of Spreading Cortical Depression in Human Brain

    PubMed Central

    Ertürk, Korhan Levent; Şengül, Gökhan

    2012-01-01

    We developed 3D simulation software for human organs/tissues, together with a database to store the related data, a data management system, and a metadata system. This approach provides two benefits: first, the system does not need to keep the patient's/subject's medical images on the system, reducing memory usage; second, it provides 3D simulation and modification options, giving clinicians the tools needed for visualization and modification operations. The developed system was tested in a case study in which a 3D human brain model was created and simulated from 2D MRI images of a human brain, and we extended the 3D model to include the spreading cortical depression (SCD) wave front, an electrical phenomenon that is believed to cause migraine. PMID:23258956

  6. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments

    PubMed Central

    Dittrich, Sandra; Noesselt, Tömme

    2018-01-01

    Predicting motion is essential for many everyday life activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white noise (congruent or non-congruent to visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during detection task. In contrast, for a more complex extrapolation task group mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired for the other one. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. 
Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate the ability of audiovisual motion prediction and need to be considered in future research. PMID:29618999

  7. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments.

    PubMed

    Dittrich, Sandra; Noesselt, Tömme

    2018-01-01

    Predicting motion is essential for many everyday life activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white noise (congruent or non-congruent to visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during detection task. In contrast, for a more complex extrapolation task group mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired for the other one. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. 
Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate the ability of audiovisual motion prediction and need to be considered in future research.

  8. Explore the virtual side of earth science

    USGS Publications Warehouse

    ,

    1998-01-01

    Scientists have always struggled to find an appropriate technology that could represent three-dimensional (3-D) data, facilitate dynamic analysis, and encourage on-the-fly interactivity. In the recent past, scientific visualization has increased the scientist's ability to visualize information, but it has not provided the interactive environment necessary for rapidly changing the model or for viewing the model in ways not predetermined by the visualization specialist. Virtual Reality Modeling Language (VRML 2.0) is a new environment for visualizing 3-D information spaces and is accessible through the Internet with current browser technologies. Researchers from the U.S. Geological Survey (USGS) are using VRML as a scientific visualization tool to help convey complex scientific concepts to various audiences. Kevin W. Laurent, computer scientist, and Maura J. Hogan, technical information specialist, have created a collection of VRML models available through the Internet at Virtual Earth Science (virtual.er.usgs.gov).

  9. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. Causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, and fast motion of objects. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and the comfort zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the camera and background are static; relative motion must be considered for other camera conditions, with different factor coefficients and weights determined for each. A novel visual fatigue prediction model is presented: the visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and a total visual fatigue score is produced by the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
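The regression step described above can be sketched as ordinary least squares via the normal equations; the factor columns and ratings below are hypothetical placeholders, not values from the paper:

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations X^T X b = X^T y.
    Each row of X holds per-shot factor scores with a leading 1 for
    the intercept; y holds subjective fatigue ratings."""
    n, p = len(X), len(X[0])
    # Build the normal equations.
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * p
    for i in reversed(range(p)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, p))) / A[i][i]
    return coeffs

# Hypothetical data: columns = [intercept, parallax factor, motion factor].
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1]]
y = [1.0, 3.0, 4.0, 6.0]
coeffs = fit_linear(X, y)
```

The fitted coefficients play the role of the per-factor weights that the model combines into a total fatigue score.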

  10. A Fast 3-Dimensional Magnetic Resonance Imaging Reconstruction for Surgical Planning of Uterine Myomectomy

    PubMed Central

    2017-01-01

    Background Uterine myoma is the most common benign gynecologic tumor in reproductive-aged women. During myomectomy for women who want to preserve fertility, it is advisable to detect and remove all myomas to decrease the risk of additional surgery. However, finding myomas during surgery is often challenging, especially for deep-seated myomas. Therefore, three-dimensional (3D) preoperative localization of myomas can be helpful for surgical planning. However, the previously reported manual 3D segmentation method takes too much time and effort for clinical use. The objective of this study was to propose a new method of rapid 3D visualization of uterine myomas using a uterine template. Methods Magnetic resonance images were listed according to the slice spacing on each plane of the multiplanar reconstruction, and images determined to contain myomas were selected by simply scrolling the mouse. From the selected images, a 3D grid with a slice-spacing interval was constructed, filled on each plane, and finally registered to a uterine template. Results The locations of multiple myomas in the uterus were visualized in 3D, and the proposed method is over 95% faster than the existing manual-segmentation method. Not only the size and location of the myomas, but also the shortest distance between the uterine surface and the myomas, can be calculated. This technique also lets the surgeon know the number of total, removed, and remaining myomas on the 3D image. Conclusion The proposed 3D reconstruction method with a uterine template enables faster 3D visualization of myomas. PMID:29215821
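Once per-slice myoma regions are marked, a size estimate follows from voxel counting on the stacked 3D grid; a minimal sketch (the function name and the toy masks are illustrative, not from the paper):

```python
def myoma_volume_ml(masks, pixel_mm, slice_mm):
    """Approximate lesion volume from per-slice binary masks, the
    in-plane pixel size, and the slice spacing: count marked voxels
    and multiply by the volume of one voxel."""
    voxel_mm3 = pixel_mm * pixel_mm * slice_mm
    voxels = sum(cell for mask in masks for row in mask for cell in row)
    return voxels * voxel_mm3 / 1000.0   # mm^3 -> mL

# Two toy slices, each with a 2x2 marked region; 1 mm pixels, 5 mm slices.
vol = myoma_volume_ml([[[1, 1], [1, 1]], [[1, 1], [1, 1]]],
                      pixel_mm=1.0, slice_mm=5.0)
```

The same voxel grid, once registered to a template, also supports distance measurements such as the lesion-to-surface distance mentioned in the abstract.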

  11. Diagnosis of the prosthetic heart valve pannus formation with real-time three-dimensional transoesophageal echocardiography.

    PubMed

    Ozkan, Mehmet; Gündüz, Sabahattin; Yildiz, Mustafa; Duran, Nilüfer Eksi

    2010-05-01

    Prosthetic heart valve obstruction (PHVO) caused by pannus formation is an uncommon but serious complication. Although two-dimensional transesophageal echocardiography (2D-TEE) is the method of choice in the evaluation of PHVO, visualization of pannus is almost impossible with 2D-TEE. Because demonstrating the precise aetiology of PHVO is essential for guiding therapy (thrombolysis for valve thrombosis or surgery for pannus formation), more sophisticated imaging techniques are needed in patients with suspected pannus formation. We present real-time 3D-TEE imaging in a patient with mechanical mitral PHVO, clearly demonstrating pannus overgrowth.

  12. Additive manufacturing of three-dimensional (3D) microfluidic-based microelectromechanical systems (MEMS) for acoustofluidic applications.

    PubMed

    Cesewski, Ellen; Haring, Alexander P; Tong, Yuxin; Singh, Manjot; Thakur, Rajan; Laheri, Sahil; Read, Kaitlin A; Powell, Michael D; Oestreich, Kenneth J; Johnson, Blake N

    2018-06-13

    Three-dimensional (3D) printing now enables the fabrication of 3D structural electronics and microfluidics, whereas conventional subtractive manufacturing processes for microelectromechanical systems (MEMS) largely limit device structures to two dimensions and require post-processing steps for interfacing with microfluidics. Thus, the objective of this work is to create an additive manufacturing approach for the fabrication of 3D microfluidic-based MEMS devices that enables 3D configurations of electromechanical systems and simultaneous integration of microfluidics. Here, we demonstrate the ability to fabricate microfluidic-based acoustofluidic devices that contain orthogonal out-of-plane piezoelectric sensors and actuators using additive manufacturing. The devices were fabricated using a microextrusion 3D printing system that contained integrated pick-and-place functionality. Additively assembled materials and components included 3D printed epoxy, polydimethylsiloxane (PDMS), silver nanoparticles, and eutectic gallium-indium, as well as robotically embedded piezoelectric chips (lead zirconate titanate (PZT)). Electrical impedance spectroscopy and finite element modeling studies showed the embedded PZT chips exhibited multiple resonant modes of varying mode shape over the 0-20 MHz frequency range. Flow visualization studies using neutrally buoyant particles (diameter = 0.8-70 μm) confirmed the 3D printed devices generated bulk acoustic waves (BAWs) capable of size-selective manipulation, trapping, and separation of suspended particles in droplets and microchannels. Flow visualization studies in a continuous flow format showed suspended particles could be moved toward or away from the walls of microfluidic channels based on selective actuation of in-plane or out-of-plane PZT chips. This work suggests additive manufacturing potentially provides new opportunities for the design and fabrication of acoustofluidic and microfluidic devices.
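The bulk acoustic waves mentioned above are typically driven at a channel resonance, where an integer number of half wavelengths spans the channel width (f = n·c/2w, a textbook acoustofluidics relation, not a formula quoted in the abstract; the water sound speed and channel width below are illustrative):

```python
def bulk_acoustic_resonance_hz(sound_speed, channel_width, mode=1):
    """Standing-wave (BAW) resonance of a fluid-filled channel: an
    integer number of half wavelengths must fit across the width."""
    return mode * sound_speed / (2.0 * channel_width)

# Water (~1480 m/s) in a 370 um wide channel -> ~2 MHz fundamental.
f = bulk_acoustic_resonance_hz(1480.0, 370e-6)
```

At this fundamental mode a single pressure node forms at the channel centerline, which is what enables the particle focusing and separation the abstract reports.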

  13. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    NASA Astrophysics Data System (ADS)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits, being cost effective and easy to disseminate, anatomy is inherently three-dimensional, and when 2D visualizations illustrate complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential-frame stereo projection, viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. The system presents medical imaging data in three dimensions and allows direct physical interaction and manipulation by the viewer, which should provide numerous benefits over traditional 2D display and interaction modalities; in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.

  14. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie viewing) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score >15) made up 54.8% of the sample after the 3D movie compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable showed significant effects of exposure to the 3D movie, history of car sickness, and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Conclusions Seeing 3D movies can increase ratings of nausea, oculomotor, and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies including examination of clinical signs in viewers are needed to provide conclusive evidence on the effects of 3D vision on spectators. PMID:23418530

  15. Are there side effects to watching 3D movies? A prospective crossover observational study on visually induced motion sickness.

    PubMed

    Solimini, Angelo G

    2013-01-01

    The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie viewing) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score >15) made up 54.8% of the sample after the 3D movie compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable showed significant effects of exposure to the 3D movie, history of car sickness, and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Seeing 3D movies can increase ratings of nausea, oculomotor, and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies including examination of clinical signs in viewers are needed to provide conclusive evidence on the effects of 3D vision on spectators.
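The headline figures in a study like this reduce to simple arithmetic on SSQ totals; a hedged sketch with made-up scores (only the threshold of 15 follows the abstract):

```python
def sickness_summary(pre, post, threshold=15.0):
    """Proportion of viewers above the SSQ cut-off after viewing, plus
    mean post-viewing symptom intensity as a multiple of baseline."""
    rate = sum(s > threshold for s in post) / len(post)
    fold = (sum(post) / len(post)) / (sum(pre) / len(pre))
    return rate, fold

# Hypothetical pre- and post-movie SSQ totals for four viewers.
rate, fold = sickness_summary([5, 5, 10, 20], [20, 40, 10, 10])
```

With real data, `rate` corresponds to figures such as the 54.8% vs. 14.1% sickness rates, and `fold` to the 8.8x vs. 2x intensity increases reported above.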

  16. Three-dimensional model of the skull and the cranial bones reconstructed from CT scans designed for rapid prototyping process.

    PubMed

    Skrzat, Janusz; Spulber, Alexandru; Walocha, Jerzy

    This paper presents the effects of building mesh models of the human skull and the cranial bones from a series of CT scans. With the aid of computer software, 3D reconstructions of the whole skull and segmented cranial bones were performed and visualized by surface-rendering techniques. The article briefly discusses clinical and educational applications of 3D cranial models created using stereolithographic reproduction.

  17. Visualization tool for three-dimensional plasma velocity distributions (ISEE_3D) as a plug-in for SPEDAS

    NASA Astrophysics Data System (ADS)

    Keika, Kunihiro; Miyoshi, Yoshizumi; Machida, Shinobu; Ieda, Akimasa; Seki, Kanako; Hori, Tomoaki; Miyashita, Yukinaga; Shoji, Masafumi; Shinohara, Iku; Angelopoulos, Vassilis; Lewis, Jim W.; Flores, Aaron

    2017-12-01

    This paper introduces ISEE_3D, an interactive visualization tool for three-dimensional plasma velocity distribution functions, developed by the Institute for Space-Earth Environmental Research, Nagoya University, Japan. The tool provides a variety of methods to visualize the distribution function of space plasma: scatter, volume, and isosurface modes. The tool also has a wide range of functions, such as displaying magnetic field vectors and two-dimensional slices of distributions, to facilitate extensive analysis. A coordinate transformation to magnetic field coordinates is also implemented in the tool. The source code of the tool is written in Interactive Data Language (IDL), a data analysis language widely used in space physics and solar physics. The current version of the tool can be used with data files of the plasma distribution function from the Geotail satellite mission, which are publicly accessible through the Data Archives and Transmission System of the Institute of Space and Astronautical Science (ISAS)/Japan Aerospace Exploration Agency (JAXA). The tool is also available in the Space Physics Environment Data Analysis Software (SPEDAS) to visualize plasma data from the Magnetospheric Multiscale and the Time History of Events and Macroscale Interactions during Substorms missions. The tool is planned to be applied to data from other missions, such as Arase (ERG) and the Van Allen Probes, after replacing or adding data-loading plug-ins. This visualization tool helps scientists better understand the dynamics of space plasma, particularly in regions where the magnetohydrodynamic approximation is not valid, such as the Earth's inner magnetosphere, magnetopause, bow shock, and plasma sheet.
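The magnetic-field coordinate transformation such a tool implements can be sketched (in Python rather than IDL) by building a right-handed, field-aligned orthonormal triad and projecting velocity vectors onto it; the reference-vector choice below is one common convention, not necessarily ISEE_3D's:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def unit(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def to_field_aligned(vel, b):
    """Express a velocity vector in magnetic-field-aligned coordinates:
    two perpendicular components followed by the parallel component."""
    e_par = unit(b)
    # Any reference direction not parallel to B serves to seed the triad.
    ref = (1.0, 0.0, 0.0) if abs(e_par[0]) < 0.9 else (0.0, 1.0, 0.0)
    e_p1 = unit(cross(ref, e_par))
    e_p2 = cross(e_par, e_p1)          # already unit length
    return tuple(sum(vi * ei for vi, ei in zip(vel, e))
                 for e in (e_p1, e_p2, e_par))

v_fac = to_field_aligned((1.0, 2.0, 3.0), (0.0, 0.0, 2.0))
```

Applying this rotation to every velocity-space sample lets pitch-angle structure be read directly off the transformed distribution.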

  18. Pseudohaptic interaction with knot diagrams

    NASA Astrophysics Data System (ADS)

    Weng, Jianguang; Zhang, Hui

    2012-07-01

    To make progress in understanding knot theory, we need to interact with the projected representations of mathematical knots, which are continuous in three dimensions (3-D) but significantly interrupted in projective images. One way to achieve this is to design an interactive system that lets us sketch two-dimensional (2-D) knot diagrams with a collision-sensing controller and explore their underlying smooth structures through continuous motion. Recent advances in interaction techniques allow progress in this direction: pseudohaptics, which simulates haptic effects using purely visual feedback, can be used to develop such an interactive system. We outline one such pseudohaptic knot diagram interface. Our interface derives from the familiar pencil-and-paper process of drawing 2-D knot diagrams and provides haptic-like sensations to facilitate the creation and exploration of knot diagrams. A centerpiece of the interaction model simulates a physically reactive mouse cursor, which is exploited to resolve the apparent conflict between the continuous structure of the actual smooth knot and the visual discontinuities in the knot diagram representation. Another value in exploiting pseudohaptics is that an acceleration (or deceleration) of the mouse cursor (or surface locator) can be used to indicate the slope of the curve (or surface) whose projective image is being explored. By exploiting these additional visual cues, we proceed to a full-featured extension: a pseudohaptic four-dimensional (4-D) visualization system that simulates continuous navigation on 4-D objects and allows us to sense bumps and holes in the fourth dimension. Preliminary tests of the software show that the main features of the interface overcome some expected perceptual limitations in our interaction with 2-D knot diagrams of 3-D knots and 3-D projective images of 4-D mathematical objects.

  19. Quantitative 3D electromagnetic field determination of 1D nanostructures from single projection

    DOE PAGES

    Phatak, C.; Knoop, L. de; Houdellier, F.; ...

    2016-05-01

    One-dimensional (1D) nanostructures have been regarded as the most promising building blocks for nanoelectronics and nanocomposite material systems as well as for alternative energy applications. Although they result in confinement of a material, their properties and interactions with other nanostructures are still very much three-dimensional (3D) in nature. In this work, we present a novel method for quantitative determination of the 3D electromagnetic fields in and around 1D nanostructures using a single electron wave phase image, thereby eliminating the cumbersome acquisition of tomographic data. Using symmetry arguments, we have reconstructed the 3D magnetic field of a nickel nanowire as well as the 3D electric field around a carbon nanotube field emitter, from one single projection. The accuracy of quantitative values determined here is shown to be a better fit to the physics at play than the value obtained by conventional analysis. Moreover the 3D reconstructions can then directly be visualized and used in the design of functional 3D architectures built using 1D nanostructures.
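
    The quantitative step this kind of analysis rests on is the standard phase-gradient relation of electron holography: the gradient of the magnetic contribution to the electron phase is proportional to the in-plane induction integrated along the beam. A minimal numerical sketch, with a synthetic phase profile (the constants are real physical constants; the data are not from the paper):

```python
# d(phi)/dx = (e/hbar) * B_y * t  =>  B_y * t = (hbar/e) * d(phi)/dx
# Recover the integrated induction from a 1-D phase profile by central
# differences.  The linear phase ramp below is synthetic.

H_BAR = 1.054571817e-34   # J*s
E_CHG = 1.602176634e-19   # C

def integrated_induction(phase, dx):
    """B_y * t (in T*m) from a 1-D magnetic phase profile (rad)."""
    return [(phase[i + 1] - phase[i - 1]) / (2 * dx) * H_BAR / E_CHG
            for i in range(1, len(phase) - 1)]

# Slope 1.52e7 rad/m corresponds to B*t of roughly 1e-8 T*m
# (e.g. ~1 T integrated over ~10 nm of material).
dx = 1e-9
phase = [1.52e7 * i * dx for i in range(5)]
bt = integrated_induction(phase, dx)
```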

  1. Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Theis, C.; Buchegger, K. H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.

    2006-06-01

    The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.
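
    The Constructive Solid Geometry scheme mentioned above can be sketched compactly with implicit solids: a solid is a function whose sign says inside/outside, and Boolean operations combine solids with min/max. This is a generic illustration of the CSG idea, not SimpleGeo's or FLUKA's implementation.

```python
# Minimal CSG on implicit (signed) solids: value <= 0 means "inside".
# union = min, intersection = max, difference = max(a, -b).

def sphere(cx, cy, cz, r):
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 - r * r

def union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def intersect(a, b):
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

def subtract(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A spherical shell: big sphere minus a smaller concentric sphere.
shell = subtract(sphere(0, 0, 0, 2.0), sphere(0, 0, 0, 1.0))

inside_shell = shell(1.5, 0, 0) <= 0    # between the radii
inside_cavity = shell(0.5, 0, 0) <= 0   # carved-out core
```

Textual geometry languages for transport codes describe exactly such Boolean trees of primitive bodies; an interactive modeler lets the user build and inspect the tree visually instead.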

  2. Analogous Three-Dimensional Constructive Interference in Steady State Sequences Enhance the Utility of Three-Dimensional Time of Flight Magnetic Resonance Angiography in Delineating Lenticulostriate Arteries in Insular Gliomas: Evidence from a Prospective Clinicoradiologic Analysis of 48 Patients.

    PubMed

    Rao, Arun S; Thakar, Sumit; Sai Kiran, Narayanam Anantha; Aryan, Saritha; Mohan, Dilip; Hegde, Alangar S

    2018-01-01

    Three-dimensional (3D) time of flight (TOF) imaging is the current gold standard for noninvasive, preoperative localization of lenticulostriate arteries (LSAs) in insular gliomas; however, the utility of this modality depends on tumor intensity. Over a 3-year period, 48 consecutive patients with insular gliomas were prospectively evaluated. Location of LSAs and their relationship with the tumor were determined using a combination of contrast-enhanced coronal 3D TOF magnetic resonance angiography and coronal 3D constructive interference in steady state (CISS) sequences. These findings were analyzed with respect to extent of tumor resection and early postoperative motor outcome. Tumor was clearly visualized in 29 (60.4%) patients with T1-hypointense tumors using 3D TOF and in all patients using CISS sequences. Using combined 3D TOF and CISS, LSA-tumor interface was well seen in 47 patients, including all patients with T1-heterointense or T1-isointense tumors. Extent of resection was higher in the LSA-pushed group compared with the LSA-encased group. In the LSA-encased group, 6 (12.5%) patients developed postoperative hemiparesis; 2 (4.2%) cases were attributed to LSA injury. Contrast-enhanced 3D TOF can delineate LSAs in almost all insular gliomas but is limited in identifying the LSA-tumor interface. This limitation can be overcome by addition of analogous CISS sequences that delineate the LSA-tumor interface regardless of tumor intensity. Combined 3D TOF and 3D CISS is a useful tool for surgical planning and safer resections of insular tumors and may have added surgical relevance when included as an intraoperative adjunct. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. 3D-MSCT imaging of bullet trajectory in 3D crime scene reconstruction: two case reports.

    PubMed

    Colard, T; Delannoy, Y; Bresson, F; Marechal, C; Raul, J S; Hedouin, V

    2013-11-01

    Postmortem investigations are increasingly assisted by three-dimensional multi-slice computed tomography (3D-MSCT) and have become more available to forensic pathologists over the past 20 years. In cases of ballistic wounds, 3D-MSCT can provide an accurate description of the bullet location, bone fractures and, more interestingly, a clear visual of the intracorporeal trajectory (bullet track). These forensic medical examinations can be combined with tridimensional bullet trajectory reconstructions created by forensic ballistic experts. These case reports present the implementation of tridimensional methods and the results of 3D crime scene reconstruction in two cases. The authors highlight the value of collaborations between police forensic experts and forensic medicine institutes through the incorporation of 3D-MSCT data in a crime scene reconstruction, which is of great interest in forensic science as a clear visual communication tool between experts and the court. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    Rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. To achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. This paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities for interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on this foundation of realistic terrain visualization in virtual environments.

  6. Factors Influencing Undergraduate Students' Acceptance of a Haptic Interface for Learning Gross Anatomy

    ERIC Educational Resources Information Center

    Yeom, Soonja; Choi-Lundberg, Derek L.; Fluck, Andrew Edward; Sale, Arthur

    2017-01-01

    Purpose: This study aims to evaluate factors influencing undergraduate students' acceptance of a computer-aided learning resource using the Phantom Omni haptic stylus to enable rotation, touch and kinaesthetic feedback and display of names of three-dimensional (3D) human anatomical structures on a visual display. Design/methodology/approach: The…

  7. Spatial Cognition Support for Exploring the Design Mechanics of Building Structures

    ERIC Educational Resources Information Center

    Rudy, Margit; Hauck, Richard

    2008-01-01

    A web-based tool for visualizing the simulated structural behavior of building models was developed to support the teaching of structural design to architecture and engineering students by activating their spatial cognition capabilities. The main didactic issues involved establishing a consistent and complete three-dimensional vocabulary (3D)…

  8. Visualization of Metal Ion Buffering via Three-Dimensional Topographic Surfaces (Topos) of Complexometric Titrations

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul

    2017-01-01

    "Complexation TOPOS" is a free software package to generate 3-D topographic surfaces ("topos") for metal-ligand complexometric titrations in aqueous media. It constructs surfaces by plotting computed equilibrium parameters above a composition grid with "volume of ligand added" as the x-axis and overall system dilution…

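    A "topo" surface of this kind can be tabulated by solving the 1:1 complexation equilibrium M + L ⇌ ML at each grid point. A hedged sketch under assumed values (the conditional constant, concentrations, and grid below are illustrative, not taken from the Complexation TOPOS package):

```python
import math

# Free-metal pM over a grid of titrant volume (x) and overall dilution (y)
# for M + L <=> ML with conditional formation constant Kf.  Mass balance
# gives the quadratic  Kf*m^2 + (1 + Kf*(C_L - C_M))*m - C_M = 0  for the
# free metal concentration m; we take its positive root.

def free_metal(c_m, c_l, kf):
    b = 1.0 + kf * (c_l - c_m)
    return (-b + math.sqrt(b * b + 4.0 * kf * c_m)) / (2.0 * kf)

def pM_surface(c_m0=1e-3, c_l_titrant=1e-3, v0=50.0, kf=1e10,
               volumes=range(0, 101, 10), dilutions=(1.0, 10.0, 100.0)):
    surface = []
    for d in dilutions:                  # overall system dilution (y axis)
        row = []
        for v in volumes:                # volume of ligand added (x axis)
            vt = v0 + v
            c_m = c_m0 * v0 / vt / d     # totals corrected for volume
            c_l = c_l_titrant * v / vt / d
            row.append(-math.log10(free_metal(c_m, c_l, kf)))
        surface.append(row)
    return surface

surf = pM_surface()
# pM rises sharply near the equivalence point (v = 50 mL here); plotting
# surf as a height field gives the titration "topo".
```
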
  9. Current State-of-the-Art 3D Tissue Models and Their Compatibility with Live Cell Imaging.

    PubMed

    Bardsley, Katie; Deegan, Anthony J; El Haj, Alicia; Yang, Ying

    2017-01-01

    Mammalian cells grow within a complex three-dimensional (3D) microenvironment where multiple cells are organized and surrounded by extracellular matrix (ECM). The quantity and types of ECM components, alongside cell-to-cell and cell-to-matrix interactions, dictate cellular differentiation, proliferation, and function in vivo. To mimic natural cellular activities, various 3D tissue culture models have been established to replace conventional two-dimensional (2D) culture environments. Characterizing and visualizing cellular activities within possibly bulky 3D tissue models presents considerable challenges due to their increased thickness and consequent light scattering. In this chapter, state-of-the-art methodologies used to establish 3D tissue models are discussed, first with a focus on both scaffold-free and scaffold-based 3D tissue model formation. Following on, multiple 3D live-cell imaging systems, mainly optical imaging modalities, are introduced. Their advantages and disadvantages are discussed, with the aim of stimulating more research in this highly demanding research area.

  10. Dissection of C. elegans behavioral genetics in 3-D environments

    PubMed Central

    Kwon, Namseop; Hwang, Ara B.; You, Young-Jai; Lee, Seung-Jae V.; Je, Jung Ho

    2015-01-01

    The nematode Caenorhabditis elegans is a widely used model for genetic dissection of animal behaviors. Despite extensive technical advances in imaging methods, it remains challenging to visualize and quantify C. elegans behaviors in three-dimensional (3-D) natural environments. Here we developed an innovative 3-D imaging method that enables quantification of C. elegans behavior in 3-D environments. Furthermore, for the first time, we characterized 3-D-specific behavioral phenotypes of mutant worms that have defects in head movement or mechanosensation. This approach allowed us to reveal previously unknown functions of genes in behavioral regulation. We expect that our 3-D imaging method will facilitate new investigations into the genetic basis of animal behaviors in natural 3-D environments. PMID:25955271

  11. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as magnetic resonance imaging (MRI), confocal microscopy, electron microscopy (EM), or selective plane illumination microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation, and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  12. Three-dimensional evidence network plot system: covariate imbalances and effects in network meta-analysis explored using a new software tool.

    PubMed

    Batson, Sarah; Score, Robert; Sutton, Alex J

    2017-06-01

    The aim of the study was to develop the three-dimensional (3D) evidence network plot system, a novel web-based interactive 3D tool to facilitate the visualization and exploration of covariate distributions and imbalances across evidence networks for network meta-analysis (NMA). We developed the 3D evidence network plot system within an AngularJS environment using a third-party JavaScript library (Three.js) to create the 3D element of the application. Data used to enable the creation of the 3D element for a particular topic are inputted via a Microsoft Excel template spreadsheet that has been specifically formatted to hold these data. We display and discuss the findings of applying the tool to two NMA examples considering multiple covariates. These two examples have been previously identified as having potentially important covariate effects and allow us to document the various features of the tool while illustrating how it can be used. The 3D evidence network plot system provides an immediate, intuitive, and accessible way to assess the similarity and differences between the values of covariates for individual studies within and between each treatment contrast in an evidence network. In this way, differences between the studies, which may invalidate the usual assumptions of an NMA, can be identified for further scrutiny. Hence, the tool facilitates NMA feasibility/validity assessments and aids in the interpretation of NMA results. The 3D evidence network plot system is the first tool designed specifically to visualize covariate distributions and imbalances across evidence networks in 3D. This will be of primary interest to systematic review and meta-analysis researchers and, more generally, those assessing the validity and robustness of an NMA to inform reimbursement decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
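
    The underlying quantity such a plot lets reviewers eyeball is simply how a covariate is distributed across the studies of each treatment contrast. A small illustrative sketch (hypothetical data, not the AngularJS/Three.js tool itself):

```python
# Summarise a study-level covariate per unordered treatment contrast of an
# NMA evidence network: an imbalance across contrasts is exactly what the
# 3D plot is designed to expose visually.

from collections import defaultdict

def covariate_by_contrast(studies):
    """studies: list of (treatment_a, treatment_b, covariate_value).
    Returns {contrast: (mean, min, max)} keyed by unordered contrast."""
    groups = defaultdict(list)
    for a, b, x in studies:
        groups[tuple(sorted((a, b)))].append(x)
    return {k: (sum(v) / len(v), min(v), max(v)) for k, v in groups.items()}

# Hypothetical mean patient age per study:
studies = [
    ("A", "B", 55.0), ("B", "A", 61.0),
    ("A", "C", 70.0), ("A", "C", 72.0),
]
summary = covariate_by_contrast(studies)
# A-B studies centre near 58, A-C near 71: a visible covariate imbalance
# between contrasts, which could confound the indirect comparison.
```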

  13. U.S. Geological Survey: A synopsis of Three-dimensional Modeling

    USGS Publications Warehouse

    Jacobsen, Linda J.; Glynn, Pierre D.; Phelps, Geoff A.; Orndorff, Randall C.; Bawden, Gerald W.; Grauch, V.J.S.

    2011-01-01

    The U.S. Geological Survey (USGS) is a multidisciplinary agency that provides assessments of natural resources (geological, hydrological, biological), the disturbances that affect those resources, and the disturbances that affect the built environment, natural landscapes, and human society. Until now, USGS map products have been generated and distributed primarily as 2-D maps, occasionally providing cross sections or overlays, but rarely allowing users to characterize and understand 3-D systems, how they change over time (4-D), and how they interact. And yet, technological advances in monitoring natural resources and the environment, the ever-increasing diversity of information needed for holistic assessments, and the intrinsic 3-D/4-D nature of the information obtained increase our need to generate, verify, analyze, interpret, confirm, store, and distribute scientific information and products using 3-D/4-D visualization, analysis, modeling tools, and information frameworks. Today, USGS scientists use 3-D/4-D tools to (1) visualize and interpret geological information, (2) verify the data, and (3) verify their interpretations and models. 3-D/4-D visualization can be a powerful quality control tool in the analysis of large, multidimensional data sets. USGS scientists use 3-D/4-D technology for 3-D surface (i.e., 2.5-D) visualization as well as for 3-D volumetric analyses. Examples of geological mapping in 3-D include characterization of the subsurface for resource assessments, such as aquifer characterization in the central United States, and for input into process models, such as seismic hazards in the western United States.

  14. Evaluation of three-dimensional computed tomography processing for deep inferior epigastric perforator flap breast reconstruction.

    PubMed

    Teoh, Raymond; Johnson, Raleigh F; Nishino, Thomas K; Ethridge, Richard T

    2007-01-01

    The deep inferior epigastric perforator flap procedure has become a popular alternative for women who require breast reconstruction. One of the difficulties with this procedure is identifying perforator arteries large enough to ensure that the harvested tissue is well vascularized. Current techniques involve imaging the perforator arteries with computed tomography (CT) to produce a grid mapping the locations of the perforator arteries relative to the umbilicus. The aim of this study was to compare the time it takes to produce a map of the perforators using either two-dimensional (2D) or three-dimensional (3D) CT, and to determine whether there is a benefit to using a 3D model. Patient CT abdomen and pelvis scans were acquired from a GE 64-slice scanner. CT image processing was performed with the GE 3D Advantage Workstation v4.2 software. Maps of the perforators were generated both as 2D and 3D representations. Perforators within a region 5 cm rostral and 7 cm caudal to the umbilicus were measured, and the times to perform these measurements using both 2D and 3D images were recorded with a stopwatch. Although the 3D method took longer than the 2D method (mean [+/- SD] time 1:51+/-0:35 min versus 1:08+/-0:16 min per perforator artery, respectively), producing a 3D image provides much more information than the 2D images alone. Additionally, an actual-sized 3D image can be printed out, removing the need to make measurements and produce a grid. Although it took less time to create a grid of the perforators using 2D axial CT scans, the 3D reconstruction of the abdomen allows plastic surgeons to better visualize the patient's anatomy and has definite clinical utility.

  15. Applicability of three-dimensional imaging techniques in fetal medicine

    PubMed Central

    Werner Júnior, Heron; dos Santos, Jorge Lopes; Belmonte, Simone; Ribeiro, Gerson; Daltro, Pedro; Gasparetto, Emerson Leandro; Marchiori, Edson

    2016-01-01

    Objective To generate physical models of fetuses from images obtained with three-dimensional ultrasound (3D-US), magnetic resonance imaging (MRI), and, occasionally, computed tomography (CT), in order to guide additive manufacturing technology. Materials and Methods We used 3D-US images of 31 pregnant women, including 5 who were carrying twins. If abnormalities were detected by 3D-US, both MRI and in some cases CT scans were then immediately performed. The images were then exported to a workstation in DICOM format. A single observer performed slice-by-slice manual segmentation using a digital high resolution screen. Virtual 3D models were obtained from software that converts medical images into numerical models. Those models were then generated in physical form through the use of additive manufacturing techniques. Results Physical models based upon 3D-US, MRI, and CT images were successfully generated. The postnatal appearance of either the aborted fetus or the neonate closely resembled the physical models, particularly in cases of malformations. Conclusion The combined use of 3D-US, MRI, and CT could help improve our understanding of fetal anatomy. These three screening modalities can be used for educational purposes and as tools to enable parents to visualize their unborn baby. The images can be segmented and then applied, separately or jointly, in order to construct virtual and physical 3D models. PMID:27818540

  16. Application of an object-oriented programming paradigm in three-dimensional computer modeling of mechanically active gastrointestinal tissues.

    PubMed

    Rashev, P Z; Mintchev, M P; Bowes, K L

    2000-09-01

    The aim of this study was to develop a novel three-dimensional (3-D) object-oriented modeling approach incorporating knowledge of the anatomy, electrophysiology, and mechanics of externally stimulated excitable gastrointestinal (GI) tissues and emphasizing the "stimulus-response" principle of extracting the modeling parameters. The modeling method used clusters of class hierarchies representing GI tissues from three perspectives: 1) anatomical; 2) electrophysiological; and 3) mechanical. We elaborated on the first four phases of the object-oriented system development life-cycle: 1) analysis; 2) design; 3) implementation; and 4) testing. Generalized cylinders were used for the implementation of 3-D tissue objects modeling the cecum, the descending colon, and the colonic circular smooth muscle tissue. The model was tested using external neural electrical tissue excitation of the descending colon with virtual implanted electrodes and the stimulating current density distributions over the modeled surfaces were calculated. Finally, the tissue deformations invoked by electrical stimulation were estimated and represented by a mesh-surface visualization technique.

  17. Visualizing 3D data obtained from microscopy on the Internet.

    PubMed

    Pittet, J J; Henn, C; Engel, A; Heymann, J B

    1999-01-01

    The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. The traditional presentation of static two-dimensional images of real-world objects on the limited medium of paper can now be shown interactively in three dimensions. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching cube approach, allowing interactive isosurfacing. A second node does three-dimensional (3D) texture-based volume-rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library). IFL is a file format abstraction layer defining communication between a program and a data file. These technologies are developed in support of the BioImage project, aiming to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.
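
    The core numerical step of the marching-cube approach named above is easy to show in isolation: wherever the scalar field crosses the isovalue along a grid edge, the isosurface vertex is placed by linear interpolation between the edge endpoints. A minimal single-edge sketch (not the authors' VRML node):

```python
# Isosurface vertex placement on one grid edge, as used per-edge inside
# marching cubes.  Assumes the field values v0, v1 straddle the isovalue.

def edge_vertex(p0, p1, v0, v1, iso):
    """Linearly interpolated crossing point on the edge p0 -> p1, given
    scalar field values v0, v1 at the endpoints."""
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Field value 0.2 at one corner and 1.0 at the next: isovalue 0.5 lands
# the vertex 37.5% of the way along the edge, i.e. near (0.375, 0, 0).
v = edge_vertex((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.2, 1.0, 0.5)
```

The full algorithm repeats this for every cube edge flagged by the cube's corner sign pattern, then emits triangles from a lookup table; a real-time VRML node wraps exactly that loop.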

  18. Current automated 3D cell detection methods are not a suitable replacement for manual stereologic cell counting

    PubMed Central

    Schmitz, Christoph; Eastwood, Brian S.; Tappan, Susan J.; Glaser, Jack R.; Peterson, Daniel A.; Hof, Patrick R.

    2014-01-01

    Stereologic cell counting has had a major impact on the field of neuroscience. A major bottleneck in stereologic cell counting is that the user must manually decide whether or not each cell is counted according to three-dimensional (3D) stereologic counting rules by visual inspection within hundreds of microscopic fields-of-view per investigated brain or brain region. Reliance on visual inspection forces stereologic cell counting to be very labor-intensive and time-consuming, and is the main reason why biased, non-stereologic two-dimensional (2D) “cell counting” approaches have remained in widespread use. We present an evaluation of the performance of modern automated cell detection and segmentation algorithms as a potential alternative to the manual approach in stereologic cell counting. The image data used in this study were 3D microscopic images of thick brain tissue sections prepared with a variety of commonly used nuclear and cytoplasmic stains. The evaluation compared the numbers and locations of cells identified unambiguously and counted exhaustively by an expert observer with those found by three automated 3D cell detection algorithms: nuclei segmentation from the FARSIGHT toolkit, nuclei segmentation by 3D multiple level set methods, and the 3D object counter plug-in for ImageJ. Of these methods, FARSIGHT performed best, with true-positive detection rates between 38 and 99% and false-positive rates from 3.6 to 82%. The results demonstrate that the current automated methods suffer from lower detection rates and higher false-positive rates than are acceptable for obtaining valid estimates of cell numbers. Thus, at present, stereologic cell counting with manual decision for object inclusion according to unbiased stereologic counting rules remains the only adequate method for unbiased cell quantification in histologic tissue sections. PMID:24847213
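
    The evaluation logic described above — comparing automated detections against expert-marked cell locations — reduces to matching detections to ground truth within a distance tolerance and reporting true-positive and false-positive rates. A hedged sketch with hypothetical coordinates (not FARSIGHT or the study's data):

```python
import math

# Greedily match each detection to the nearest unmatched ground-truth cell
# within `tol`; matched detections are true positives, the rest false
# positives.  Greedy matching is an illustrative simplification.

def match_detections(truth, detected, tol=2.0):
    unmatched = list(truth)
    tp = 0
    for d in detected:
        hit = next((t for t in unmatched if math.dist(d, t) <= tol), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    fp = len(detected) - tp
    return tp / len(truth), fp / len(detected)

truth = [(0, 0, 0), (10, 10, 10), (20, 5, 3)]
detected = [(0.5, 0, 0), (10, 11, 10), (40, 40, 40)]
tp_rate, fp_rate = match_detections(truth, detected)
# Two of three true cells found; one of three detections is spurious.
```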

  19. Integrating 3D Visualization and GIS in Planning Education

    ERIC Educational Resources Information Center

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  20. 3D printing of intracranial artery stenosis based on the source images of magnetic resonance angiograph.

    PubMed

    Xu, Wei-Hai; Liu, Jia; Li, Ming-Li; Sun, Zhao-Yong; Chen, Jie; Wu, Jian-Huang

    2014-08-01

    Three-dimensional (3D) printing techniques for brain diseases have not been widely studied. We attempted to 'print' the segments of intracranial arteries based on magnetic resonance imaging. Three-dimensional magnetic resonance angiography (MRA) was performed on two patients with middle cerebral artery (MCA) stenosis. Using scale-adaptive vascular modeling, 3D vascular models were constructed from the MRA source images. The magnified (ten times) regions of interest (ROI) of the stenotic segments were selected and fabricated by a 3D printer with a resolution of 30 µm. A survey of 8 clinicians was performed to evaluate the accuracy of 3D printing results as compared with MRA findings (4 grades, grade 1: consistent with MRA and provides additional visual information; grade 2: consistent with MRA; grade 3: not consistent with MRA; grade 4: not consistent with MRA and provides probably misleading information). If a 3D-printed vessel segment matched the MRA findings (grade 1 or 2), the printing was defined as successful. Seven responders assigned grade 1 to the 3D printing results, while one assigned grade 4. Therefore, 87.5% of the clinicians considered the 3D printing successful. Our pilot study confirms the feasibility of using 3D printing techniques in the research field of intracranial artery diseases. Further investigations are warranted to optimize this technique and translate it into clinical practice.

  1. Three-dimensional entertainment as a novel cause of takotsubo cardiomyopathy.

    PubMed

    Taylor, Montoya; Amin, Anish; Bush, Charles

    2011-11-01

    Takotsubo cardiomyopathy (TC) is an uncommon entity. It is known to occur in the setting of extreme catecholamine release and results in left ventricular dysfunction without evidence of angiographically definable coronary artery disease. There have been no published reports of TC occurring with visual stimuli, specifically 3-dimensional (3D) entertainment. We describe a 55-year-old woman who presented to her primary care physician's office with extreme palpitations, nausea, vomiting, and malaise <48 hours after watching a 3D action movie at her local theater. Her electrocardiogram demonstrated ST elevations in aVL and V1, prolonged QTc interval, and T-wave inversions in leads I, II, aVL, and V2-V6. Coronary angiography revealed angiographically normal vessels, elevated left ventricular filling pressures, and decreased ejection fraction with a pattern of apical ballooning. The presumed final diagnosis was TC, likely due to visual-auditory-triggered catecholamine release causing impaired coronary microcirculation. © 2011 Wiley Periodicals, Inc.

  2. Digital relief generation from 3D models

    NASA Astrophysics Data System (ADS)

    Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian

    2016-09-01

It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions, with different poses, and from combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under appropriate scaling factors.
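The enhance-then-compress pipeline described above can be sketched on a plain height field. This is a minimal illustration, not the authors' implementation: `smooth`, `unsharp_mask`, and `compress` are hypothetical names, and the log-style compression merely stands in for whatever nonlinear variable scaling the paper actually uses.

```python
import numpy as np

def smooth(height, passes=1):
    """Average smoothing: each sample becomes the mean of itself and
    its 4 neighbours (edges clamped)."""
    h = height.astype(float)
    for _ in range(passes):
        padded = np.pad(h, 1, mode="edge")
        h = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
             + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
    return h

def unsharp_mask(height, strength=0.6, passes=2):
    """Unsharp masking: add back the high-frequency detail
    (original minus smoothed) scaled up by `strength`."""
    base = smooth(height, passes)
    return base + (1.0 + strength) * (height - base)

def compress(height, alpha=5.0):
    """Nonlinear variable scaling: compress the height range toward a
    bas-relief while preserving small detail (log-style attenuation)."""
    return np.log1p(alpha * height) / np.log1p(alpha)

# toy height field with one sharp feature
field = np.zeros((8, 8))
field[3:5, 3:5] = 1.0
enhanced = unsharp_mask(field)
relief = compress(np.clip(enhanced, 0.0, None) / max(enhanced.max(), 1e-9))
```

The strength of the detail boost and the compression factor `alpha` together control where the result lands on the bas-relief/high-relief spectrum.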

  3. In Vitro Validation of Real-Time Three-Dimensional Color Doppler Echocardiography for Direct Measurement of Proximal Isovelocity Surface Area in Mitral Regurgitation

    PubMed Central

    Little, Stephen H.; Igo, Stephen R.; Pirat, Bahar; McCulloch, Marti; Hartley, Craig J.; Nosé, Yukihiko; Zoghbi, William A.

    2012-01-01

The 2-dimensional (2D) color Doppler (2D-CD) proximal isovelocity surface area (PISA) method assumes a hemispheric flow convergence zone to estimate transvalvular flow. Recently developed 3-dimensional CD (3D-CD) can directly visualize PISA shape and surface area without geometric assumptions. To validate a novel method to directly measure PISA using real-time 3D-CD echocardiography, a circulatory loop with an ultrasound imaging chamber was created to model mitral regurgitation (MR). Thirty-two different regurgitant flow conditions were tested using symmetric and asymmetric flow orifices. Three-dimensional PISA was reconstructed from a hand-held real-time 3D-CD data set. Regurgitant volume was derived using both 2D-CD and 3D-CD PISA methods, and each was compared against a flowmeter standard. The circulatory loop achieved regurgitant volume within the clinical range of MR (11 to 84 ml). Three-dimensional PISA geometry reflected the 2D geometry of the regurgitant orifice. Correlation between the 2D-PISA method regurgitant volume and actual regurgitant volume was significant (r2 = 0.47, p <0.001). Mean 2D-PISA regurgitant volume underestimate was 19.1 ± 25 ml (2 SDs). For the 3D-PISA method, correlation with actual regurgitant volume was significant (r2 = 0.92, p <0.001), with a mean regurgitant volume underestimate of 2.7 ± 10 ml (2 SDs). The 3D-PISA method showed less regurgitant volume underestimation for all orifice shapes and regurgitant volumes tested. In conclusion, in an in vitro model of MR, 3D-CD was used to directly measure PISA without geometric assumption. Compared with conventional 2D-PISA, regurgitant volume was more accurate when derived from 3D-PISA across symmetric and asymmetric orifices within a broad range of hemodynamic flow conditions. PMID:17493476
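The hemispheric assumption that the abstract says 2D-PISA relies on reduces to a few standard formulas. A sketch with illustrative numbers (not study data) shows how regurgitant volume is derived, and why a directly measured 3D surface area could replace the hemispheric 2πr² term:

```python
import math

def pisa_flow_rate(radius_cm, aliasing_velocity_cm_s):
    """Hemispheric 2D-PISA assumption: flow converges through a
    hemisphere of area 2*pi*r^2 moving at the aliasing velocity.
    A 3D-PISA method would substitute the measured surface area."""
    return 2.0 * math.pi * radius_cm ** 2 * aliasing_velocity_cm_s  # ml/s

def effective_orifice_area(flow_ml_s, peak_velocity_cm_s):
    """EROA = peak flow rate / peak regurgitant jet velocity."""
    return flow_ml_s / peak_velocity_cm_s  # cm^2

def regurgitant_volume(eroa_cm2, vti_cm):
    """Regurgitant volume = EROA * velocity-time integral of the jet."""
    return eroa_cm2 * vti_cm  # ml

# illustrative numbers (hypothetical, not taken from the study)
q = pisa_flow_rate(radius_cm=1.0, aliasing_velocity_cm_s=40.0)
eroa = effective_orifice_area(q, peak_velocity_cm_s=500.0)
rv = regurgitant_volume(eroa, vti_cm=120.0)
```

With these inputs the derived regurgitant volume lands around 60 ml, inside the 11 to 84 ml clinical range the loop reproduced.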

  4. Three-Dimensional Tactical Display and Method for Visualizing Data with a Probability of Uncertainty

    DTIC Science & Technology

    2009-08-03

replacing the more complex and less intuitive displays presently provided in such contexts as commercial aircraft, marine vehicles, and air traffic...free space-virtual reality, 3-D image display system which is enabled by using a unique form of Aerogel as the primary display media. A preferred...generates and displays a real 3-D image in the Aerogel matrix. [0014] U.S. Patent No. 6,285,317, issued September 4, 2001, to Ong, discloses a

  5. Three-Dimensional Tactical Display and Method for Visualizing Data with a Probability of Uncertainty

    DTIC Science & Technology

    2009-08-03

replacing the more complex and less intuitive displays presently provided in such contexts as commercial aircraft, marine vehicles, and air traffic...space-virtual reality, 3-D image display system which is enabled by using a unique form of Aerogel as the primary display media. A preferred...and displays a real 3-D image in the Aerogel matrix. [0014] U.S. Patent No. 6,285,317, issued September 4, 2001, to Ong, discloses a navigation

  6. An Agent Based Collaborative Simplification of 3D Mesh Model

    NASA Astrophysics Data System (ADS)

    Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro

Large-volume mesh models pose challenges for fast rendering and transmission over the Internet, and the mesh models obtained with three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the Mobile-C development platform. Communication among distributed agents includes capturing images of the visualized mesh model, annotating captured images, and instant messaging. Remote, collaborative simplification can thus be conducted efficiently over the Internet.

  7. Subjective and objective evaluation of visual fatigue on viewing 3D display continuously

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang

    2015-03-01

In recent years, three-dimensional (3D) displays have become more and more popular in many fields. Although they can provide a better viewing experience, they cause extra problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue. However, little research combines subjective indicators with objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo content on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA), and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate their visual fatigue whenever it changes, without interrupting the viewing. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with a previous study. The results show that subjective visual fatigue and PERCLOS increase with time and are greater in a continuous process than in a discrete one, and that BF increases with time during the continuous viewing process. Besides, the visual fatigue also induces significant changes in VRT, CFF, and PMA.
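PERCLOS and blink frequency are both simple statistics over a per-frame eyelid-closure signal. A minimal sketch, assuming a normalized closure signal and the common 80%-closed threshold (the study's exact definitions and windows may differ):

```python
import numpy as np

def perclos(closure_fraction, threshold=0.8):
    """PERCLOS: percentage of frames in which the eye is at least
    `threshold` (e.g. 80%) closed."""
    closure_fraction = np.asarray(closure_fraction, dtype=float)
    return float(np.mean(closure_fraction >= threshold) * 100.0)

def blink_frequency(closed, fps):
    """Count closed-eye onsets (0 -> 1 transitions) per minute."""
    closed = np.asarray(closed, dtype=int)
    onsets = int(np.sum((closed[1:] == 1) & (closed[:-1] == 0)))
    duration_min = len(closed) / fps / 60.0
    return onsets / duration_min

# toy per-frame eyelid-closure signal sampled at 30 fps (two blinks)
sig = np.array([0.1] * 50 + [0.9] * 10 + [0.1] * 50 + [0.95] * 10 + [0.1] * 60)
p = perclos(sig)                       # percent of frames >= 80% closed
closed = (sig >= 0.8).astype(int)
bf = blink_frequency(closed, fps=30)   # blinks per minute
```
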

  8. StructMap: Elastic Distance Analysis of Electron Microscopy Maps for Studying Conformational Changes.

    PubMed

    Sanchez Sorzano, Carlos Oscar; Alvarez-Cabrera, Ana Lucia; Kazemi, Mohsen; Carazo, Jose María; Jonić, Slavica

    2016-04-26

    Single-particle electron microscopy (EM) has been shown to be very powerful for studying structures and associated conformational changes of macromolecular complexes. In the context of analyzing conformational changes of complexes, distinct EM density maps obtained by image analysis and three-dimensional (3D) reconstruction are usually analyzed in 3D for interpretation of structural differences. However, graphic visualization of these differences based on a quantitative analysis of elastic transformations (deformations) among density maps has not been done yet due to a lack of appropriate methods. Here, we present an approach that allows such visualization. This approach is based on statistical analysis of distances among elastically aligned pairs of EM maps (one map is deformed to fit the other map), and results in visualizing EM maps as points in a lower-dimensional distance space. The distances among points in the new space can be analyzed in terms of clusters or trajectories of points related to potential conformational changes. The results of the method are shown with synthetic and experimental EM maps at different resolutions. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
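Turning pairwise distances among maps into points in a lower-dimensional space is the kind of analysis classical multidimensional scaling performs. This is a generic sketch under the assumption that a symmetric matrix of elastic distances is already available; StructMap's actual embedding may differ:

```python
import numpy as np

def classical_mds(dist, n_components=2):
    """Embed items as points so that Euclidean distances approximate
    the given pairwise distances (classical MDS / PCoA)."""
    d = np.asarray(dist, dtype=float)
    n = d.shape[0]
    # double-centre the squared distance matrix
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_components]
    lam = np.sqrt(np.clip(vals[order], 0.0, None))
    return vecs[:, order] * lam

# toy "elastic distances" among four maps: two tight conformational clusters
dist = np.array([[0.0, 1.0, 8.0, 8.2],
                 [1.0, 0.0, 8.1, 8.3],
                 [8.0, 8.1, 0.0, 0.9],
                 [8.2, 8.3, 0.9, 0.0]])
coords = classical_mds(dist, n_components=2)
```

In the embedded space, clusters or trajectories of points can then be read off as candidate conformational states or transition paths.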

  9. Three-dimensional visualization of the microvasculature of bile duct ligation-induced liver fibrosis in rats by x-ray phase-contrast imaging computed tomography

    NASA Astrophysics Data System (ADS)

    Xuan, Ruijiao; Zhao, Xinyan; Hu, Doudou; Jian, Jianbo; Wang, Tailing; Hu, Chunhong

    2015-07-01

X-ray phase-contrast imaging (PCI) can substantially enhance contrast, and is particularly useful in differentiating biological soft tissues with small density differences. Combined with computed tomography (CT), PCI-CT enables the acquisition of accurate microstructures inside biological samples. In this study, liver microvasculature was visualized without contrast agents in vitro with PCI-CT, using liver fibrosis samples induced by bile duct ligation (BDL) in rats. Histological section examination confirmed the correspondence of the CT images with the microvascular morphology of the samples. By means of PCI-CT and a three-dimensional (3D) visualization technique, 3D microvascular structures in samples from different stages of liver fibrosis were clearly revealed. Different types of blood vessels, including portal veins and hepatic veins, in addition to ductular proliferation and bile ducts, could be distinguished with good sensitivity, excellent specificity, and excellent accuracy. The study showed that PCI-CT could assess the morphological changes in liver microvasculature that result from fibrosis and allow characterization of the anatomical and pathological features of the microvasculature. With further development of the PCI-CT technique, it may become a novel noninvasive imaging technique for the auxiliary analysis of liver fibrosis.

  10. Distributed augmented reality with 3-D lung dynamics--a planning tool concept.

    PubMed

    Hamza-Lup, Felix G; Santhanam, Anand P; Imielińska, Celina; Meeks, Sanford L; Rolland, Jannick P

    2007-01-01

Augmented reality (AR) systems add visual information to the world by using advanced display techniques. Advances in miniaturization and reduced hardware costs make some of these systems feasible for applications in a wide range of fields. We present a potential component of the cyber infrastructure for the operating room of the future: a distributed AR-based software-hardware system that allows real-time visualization of three-dimensional (3-D) lung dynamics superimposed directly on the patient's body. Several emergency events (e.g., closed and tension pneumothorax) and surgical procedures related to the lung (e.g., lung transplantation, lung volume reduction surgery, surgical treatment of lung infections, lung cancer surgery) could benefit from the proposed prototype.

  11. MODFLOW-2000, the U.S. Geological Survey modular ground-water model : user guide to the LMT6 package, the linkage with MT3DMS for multi-species mass transport modeling

    USGS Publications Warehouse

    Zheng, Chunmiao; Hill, Mary Catherine; Hsieh, Paul A.

    2001-01-01

MODFLOW-2000, the newest version of MODFLOW, is a computer program that numerically solves the three-dimensional ground-water flow equation for a porous medium using a finite-difference method. MT3DMS, the successor to MT3D, is a computer program for modeling multi-species solute transport in three-dimensional ground-water systems using multiple solution techniques, including the finite-difference method, the method of characteristics (MOC), and the total-variation-diminishing (TVD) method. This report documents a new version of the Link-MT3DMS Package, which enables MODFLOW-2000 to produce the information needed by MT3DMS, and also discusses new visualization software for MT3DMS. Unlike the Link-MT3D Packages that coordinated previous versions of MODFLOW and MT3D, the new Link-MT3DMS Package requires an input file that, among other things, provides enhanced support for additional MODFLOW sink/source packages and allows list-directed (free) format for the flow-transport link file produced by the flow model. The report contains four parts: (a) documentation of the Link-MT3DMS Package Version 6 for MODFLOW-2000; (b) discussion of several issues related to simulation setup and input data preparation for running MT3DMS with MODFLOW-2000; (c) description of two test example problems, with comparison to results obtained using another MODFLOW-based transport program; and (d) an overview of post-simulation visualization and animation using the U.S. Geological Survey's Model Viewer.

  12. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured-light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally more accurate, image-based methods are low-cost and can easily be used by non-professional users. One factor affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided as commercial software, open-source software, and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors, and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been conducted to identify suitable software and algorithms for achieving an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to identify an appropriate combination of sensor and software to produce a complete model with the highest accuracy.
To do this, the software packages used in previous studies were compared and the most popular ones in each category were selected (Arc 3D, Visual SfM, Sure, Agisoft). Four small objects with distinct geometric properties and particular complexities were chosen, and accurate reference models serving as ground truth were created using an ATOS Compact Scan 2M 3D scanner. Images were taken with a Fujifilm Real 3D stereo camera, an Apple iPhone 5, and a Nikon D3200 professional camera, and three-dimensional models of the objects were obtained with each software package. Finally, a comprehensive comparison of the detailed results on the data set showed that the best combination of software and sensor for generating three-dimensional models depends directly on the object shape as well as the expected accuracy of the final model. Generally, better quantitative and qualitative results were obtained with the Nikon D3200 professional camera, while the Fujifilm Real 3D stereo camera and the Apple iPhone 5 were second and third, respectively. On the other hand, the three software packages Visual SfM, Sure, and Agisoft competed closely to achieve the most accurate and complete models, and the best package differed according to the geometric properties of the object.
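Accuracy comparisons of this kind typically reduce to cloud-to-cloud distances between a reconstructed model and the scanner reference. A brute-force nearest-neighbor RMSE sketch on synthetic data (this is a generic illustration, not the paper's evaluation code):

```python
import numpy as np

def cloud_to_cloud_rmse(reconstructed, ground_truth):
    """For each reconstructed point, find the distance to the nearest
    ground-truth point (brute force) and report the RMSE of those
    distances as a single accuracy figure."""
    diff = reconstructed[:, None, :] - ground_truth[None, :, :]
    nearest = np.sqrt((diff ** 2).sum(-1)).min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))

rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 1.0, size=(300, 3))               # reference scan
model = truth + rng.normal(0.0, 0.002, size=truth.shape)   # reconstructed model
rmse = cloud_to_cloud_rmse(model, truth)
```

For large clouds a k-d tree replaces the O(n²) brute-force search, but the metric is the same.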

  13. A method for brain 3D surface reconstruction from MR images

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin

    2014-09-01

Because encephalic tissues are highly irregular, three-dimensional (3D) modeling of the brain always involves complicated computation. In this paper, we explore an efficient method for brain surface reconstruction from magnetic resonance (MR) images of the head, which is helpful for surgical planning and tumor localization. A heuristic algorithm is proposed for surface triangle mesh generation with preserved features, in which the diagonal length is used as the heuristic information to optimize triangle shape. The experimental results show that our approach not only reduces the computational complexity but also completes 3D visualization with good quality.
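The shorter-diagonal heuristic mentioned above can be sketched as greedy stitching between two successive contour polylines: at each step, advance along whichever candidate diagonal is shorter, which tends to produce well-shaped triangles. The function names and the greedy formulation are illustrative assumptions, not the paper's algorithm:

```python
import math

def stitch_contours(lower, upper):
    """Triangulate the band between two contour polylines (lists of
    3D points) by always advancing along the shorter diagonal."""
    i = j = 0
    tris = []
    while i < len(lower) - 1 or j < len(upper) - 1:
        adv_lower = i < len(lower) - 1
        adv_upper = j < len(upper) - 1
        if adv_lower and adv_upper:
            # compare the two candidate diagonals and keep the shorter
            if math.dist(lower[i + 1], upper[j]) <= math.dist(lower[i], upper[j + 1]):
                adv_upper = False
            else:
                adv_lower = False
        if adv_lower:
            tris.append((lower[i], lower[i + 1], upper[j]))
            i += 1
        else:
            tris.append((lower[i], upper[j + 1], upper[j]))
            j += 1
    return tris

# two parallel contour slices, offset by half a step
low = [(float(x), 0.0, 0.0) for x in range(5)]
up = [(x + 0.5, 0.0, 1.0) for x in range(5)]
triangles = stitch_contours(low, up)
```

Stitching n and m points always yields (n - 1) + (m - 1) triangles; the heuristic only changes which diagonals are used.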

  14. Gamma/x-ray linear pushbroom stereo for 3D cargo inspection

    NASA Astrophysics Data System (ADS)

    Zhu, Zhigang; Hu, Yu-Chi

    2006-05-01

For evaluating the contents of trucks, containers, cargo, and passenger vehicles with a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements can provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or X-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo container can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast, automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, 3D measurement, and visualization of a 3D cargo container and the objects inside are presented.
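Under a simplified linear pushbroom model, height falls out of the displacement of a point between two scans taken at different tilt angles. This is an idealized sketch of that geometry; the paper's calibrated sensor model is more involved:

```python
import math

def height_from_pushbroom_pair(x1, x2, theta1_deg, theta2_deg):
    """Simplified linear pushbroom stereo: a point at height h above
    the reference plane appears displaced by h*tan(theta) along the
    scan travel direction, so two scans at different tilt angles give
    h = (x1 - x2) / (tan(theta1) - tan(theta2))."""
    t1 = math.tan(math.radians(theta1_deg))
    t2 = math.tan(math.radians(theta2_deg))
    return (x1 - x2) / (t1 - t2)

# a point 1.5 m above the plane, imaged by scans tilted +10 and -10 degrees
h_true, x0 = 1.5, 4.0
x1 = x0 + h_true * math.tan(math.radians(10))
x2 = x0 + h_true * math.tan(math.radians(-10))
h = height_from_pushbroom_pair(x1, x2, 10, -10)
```

The stereo matching step's job is precisely to supply the correspondence (x1, x2) for every point of interest.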

  15. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as architectural design and construction, industrial design, aeronautics, scientific research, entertainment, media advertisement, and military applications. However, most technologies provide 3D display in front of screens that are parallel with the walls, which decreases the sense of immersion. To obtain correct multi-view stereo ground images, the cameras' photosensitive surfaces should be parallel to the common focal plane, and the cameras' optical axes should be offset toward the center of the common focal plane in both the vertical and horizontal directions. It is common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system, and such virtual cameras can simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the viewer's eye position in the real world. When the observer stands inside the circumcircle of the 3D ground display, offset-perspective-projection virtual cameras are used. If the observer stands outside the circumcircle, offset-perspective-projection virtual cameras and orthogonal-projection virtual cameras are adopted. This paper mainly discusses the parameter settings of the virtual cameras: the near-clip-plane parameters are the main point of the first method, while the rotation angle of the virtual cameras is the main point of the second. To validate the results, Direct3D and OpenGL were used to render scenes from different viewpoints and generate stereoscopic images. A realistic visualization system for 3D models, viewed horizontally, was constructed and demonstrated, providing high-immersion 3D visualization. The displayed 3D scenes were compared with real objects in the real world.
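The "offset perspective projection" described above corresponds to an asymmetric viewing frustum computed from the viewer's eye position relative to the display surface. A sketch in glFrustum-style parameters; the screen-centred coordinate convention used here is an assumption:

```python
def offaxis_frustum(eye, screen_half_w, screen_half_h, near, far):
    """Asymmetric (offset) perspective frustum for a viewer at
    eye = (ex, ey, ez), looking at a screen rectangle centred on the
    origin in the z = 0 plane. Returns glFrustum-style
    (left, right, bottom, top, near, far): the screen edges are
    projected onto the near plane by similar triangles."""
    ex, ey, ez = eye
    scale = near / ez                       # near plane / screen distance
    left = (-screen_half_w - ex) * scale
    right = (screen_half_w - ex) * scale
    bottom = (-screen_half_h - ey) * scale
    top = (screen_half_h - ey) * scale
    return left, right, bottom, top, near, far

# centred viewer -> symmetric frustum; off-centre viewer -> asymmetric
sym = offaxis_frustum((0.0, 0.0, 2.0), 1.0, 0.75, 0.1, 100.0)
off = offaxis_frustum((0.5, 0.0, 2.0), 1.0, 0.75, 0.1, 100.0)
```

Tracking the viewer's eye and recomputing this frustum every frame is what keeps a ground-based (horizontal) display perspectively correct.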

  16. Three-dimensional landing zone ladar

    NASA Astrophysics Data System (ADS)

    Savage, James; Goodrich, Shawn; Burns, H. N.

    2016-05-01

Three-Dimensional Landing Zone (3D-LZ) refers to a series of Air Force Research Laboratory (AFRL) programs to develop high-resolution imaging ladar to address helicopter approach and landing in degraded visual environments, with emphasis on brownout; cable warning and obstacle avoidance; and controlled flight into terrain. Initial efforts adapted ladar systems built for munition seekers, and their success led to the 3D-LZ Joint Capability Technology Demonstration (JCTD), a 27-month program to develop and demonstrate a ladar subsystem that could be housed with the AN/AAQ-29 FLIR turret flown on US Air Force Combat Search and Rescue (CSAR) HH-60G Pave Hawk helicopters. Following the JCTD flight demonstration, further development focused on reducing size, weight, and power while continuing to refine the real-time geo-referencing, dust rejection, obstacle and cable avoidance, and Helicopter Terrain Awareness and Warning (HTAWS) capability demonstrated under the JCTD. This paper summarizes significant ladar technology development milestones to date, the individual ladar technologies within 3D-LZ, and results of the flight testing.

  17. Hydrodynamic characteristics of the two-phase flow field at gas-evolving electrodes: numerical and experimental studies

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Lin; Sun, Ze; Lu, Gui-Min; Yu, Jian-Guo

    2018-05-01

The gas-evolving vertical electrode system is a typical electrochemical industrial reactor. Gas bubbles released from the surfaces of the anode affect the electrolyte flow pattern and even the cell performance. In the current work, the hydrodynamics induced by air bubbles in a cold model was investigated experimentally and numerically. Particle image velocimetry and volumetric three-component velocimetry techniques were applied to experimentally visualize the hydrodynamic characteristics and flow fields in a two-dimensional (2D) plane and a three-dimensional (3D) space, respectively. Measurements were performed at different gas rates. Furthermore, the corresponding mathematical model was developed under identical conditions for qualitative and quantitative analyses, and the experimental measurements were compared with the numerical results. The study of the time-averaged flow field, the three velocity components, the instantaneous velocity, and the turbulent intensity indicates that the numerical model qualitatively reproduces the liquid motion. The 3D model predictions capture the flow behaviour more accurately than the 2D model in this study.
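The time-averaged flow field and turbulent intensity compared above are straightforward statistics over a stack of instantaneous PIV velocity fields. A minimal sketch on synthetic data; definitions of turbulence intensity vary, and rms-of-fluctuations over mean speed is assumed here:

```python
import numpy as np

def time_average_and_turbulence(u_series):
    """From a stack of instantaneous velocity fields (t, ny, nx),
    compute the time-averaged field and a turbulence-intensity field
    (rms of fluctuations / characteristic mean speed)."""
    u_mean = u_series.mean(axis=0)
    fluct = u_series - u_mean
    u_rms = np.sqrt((fluct ** 2).mean(axis=0))
    ti = u_rms / max(float(np.abs(u_mean).mean()), 1e-12)
    return u_mean, ti

rng = np.random.default_rng(1)
base = np.full((8, 8), 0.2)                        # 0.2 m/s mean rising flow
frames = base + 0.02 * rng.standard_normal((500, 8, 8))
u_mean, ti = time_average_and_turbulence(frames)
```

The same reduction applies per velocity component for volumetric three-component data.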

  18. Hydrodynamic characteristics of the two-phase flow field at gas-evolving electrodes: numerical and experimental studies.

    PubMed

    Liu, Cheng-Lin; Sun, Ze; Lu, Gui-Min; Yu, Jian-Guo

    2018-05-01

The gas-evolving vertical electrode system is a typical electrochemical industrial reactor. Gas bubbles released from the surfaces of the anode affect the electrolyte flow pattern and even the cell performance. In the current work, the hydrodynamics induced by air bubbles in a cold model was investigated experimentally and numerically. Particle image velocimetry and volumetric three-component velocimetry techniques were applied to experimentally visualize the hydrodynamic characteristics and flow fields in a two-dimensional (2D) plane and a three-dimensional (3D) space, respectively. Measurements were performed at different gas rates. Furthermore, the corresponding mathematical model was developed under identical conditions for qualitative and quantitative analyses, and the experimental measurements were compared with the numerical results. The study of the time-averaged flow field, the three velocity components, the instantaneous velocity, and the turbulent intensity indicates that the numerical model qualitatively reproduces the liquid motion. The 3D model predictions capture the flow behaviour more accurately than the 2D model in this study.

  19. Hydrodynamic characteristics of the two-phase flow field at gas-evolving electrodes: numerical and experimental studies

    PubMed Central

    Lu, Gui-Min; Yu, Jian-Guo

    2018-01-01

The gas-evolving vertical electrode system is a typical electrochemical industrial reactor. Gas bubbles released from the surfaces of the anode affect the electrolyte flow pattern and even the cell performance. In the current work, the hydrodynamics induced by air bubbles in a cold model was investigated experimentally and numerically. Particle image velocimetry and volumetric three-component velocimetry techniques were applied to experimentally visualize the hydrodynamic characteristics and flow fields in a two-dimensional (2D) plane and a three-dimensional (3D) space, respectively. Measurements were performed at different gas rates. Furthermore, the corresponding mathematical model was developed under identical conditions for qualitative and quantitative analyses, and the experimental measurements were compared with the numerical results. The study of the time-averaged flow field, the three velocity components, the instantaneous velocity, and the turbulent intensity indicates that the numerical model qualitatively reproduces the liquid motion. The 3D model predictions capture the flow behaviour more accurately than the 2D model in this study. PMID:29892347

  20. 3D MALDI Mass Spectrometry Imaging of a Single Cell: Spatial Mapping of Lipids in the Embryonic Development of Zebrafish

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dueñas, Maria Emilia; Essner, Jeffrey J.; Lee, Young Jin

The zebrafish (Danio rerio) has been widely used as a model vertebrate system to study lipid metabolism, the roles of lipids in diseases, and lipid dynamics in embryonic development. Here, we applied high-spatial-resolution matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI) to map and visualize the three-dimensional spatial distribution of the phospholipid classes phosphatidylcholine (PC), phosphatidylethanolamine (PE), and phosphatidylinositol (PI) in newly fertilized individual zebrafish embryos. This is the first time MALDI-MSI has been applied for three-dimensional chemical imaging of a single cell. PC molecular species are present inside the yolk in addition to the blastodisc, while PE and PI species are mostly absent in the yolk. Two-dimensional MSI was also applied to embryos at different cell stages (1-, 2-, 4-, 8-, and 16-cell stage) to investigate the localization changes of some lipids at various developmental stages. Lastly, four different normalization approaches were compared to find reliable relative quantification in 2D and 3D MALDI-MSI data sets.
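One common normalization approach in mass spectrometry imaging is total-ion-current (TIC) scaling; the abstract does not state which four approaches the authors compared, so treat this as a generic sketch rather than their method:

```python
import numpy as np

def tic_normalize(spectra):
    """Total-ion-current normalization: scale each pixel's spectrum so
    its intensities sum to 1, removing pixel-to-pixel differences in
    overall signal level."""
    spectra = np.asarray(spectra, dtype=float)
    tic = spectra.sum(axis=1, keepdims=True)
    tic[tic == 0] = 1.0           # leave empty pixels untouched
    return spectra / tic

# three pixels x four m/z channels, with different overall signal levels
raw = np.array([[2.0, 4.0, 2.0, 2.0],
                [20.0, 40.0, 20.0, 20.0],
                [0.0, 0.0, 0.0, 0.0]])
norm = tic_normalize(raw)
```

After TIC normalization, pixels with the same relative lipid composition become directly comparable regardless of absolute ion yield.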

  1. 3D MALDI Mass Spectrometry Imaging of a Single Cell: Spatial Mapping of Lipids in the Embryonic Development of Zebrafish

    DOE PAGES

    Dueñas, Maria Emilia; Essner, Jeffrey J.; Lee, Young Jin

    2017-11-02

The zebrafish (Danio rerio) has been widely used as a model vertebrate system to study lipid metabolism, the roles of lipids in diseases, and lipid dynamics in embryonic development. Here, we applied high-spatial-resolution matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI) to map and visualize the three-dimensional spatial distribution of the phospholipid classes phosphatidylcholine (PC), phosphatidylethanolamine (PE), and phosphatidylinositol (PI) in newly fertilized individual zebrafish embryos. This is the first time MALDI-MSI has been applied for three-dimensional chemical imaging of a single cell. PC molecular species are present inside the yolk in addition to the blastodisc, while PE and PI species are mostly absent in the yolk. Two-dimensional MSI was also applied to embryos at different cell stages (1-, 2-, 4-, 8-, and 16-cell stage) to investigate the localization changes of some lipids at various developmental stages. Lastly, four different normalization approaches were compared to find reliable relative quantification in 2D and 3D MALDI-MSI data sets.

  2. Preoperative evaluation of venous systems with 3-dimensional contrast-enhanced magnetic resonance venography in brain tumors: comparison with time-of-flight magnetic resonance venography and digital subtraction angiography.

    PubMed

    Lee, Jong-Myung; Jung, Shin; Moon, Kyung-Sub; Seo, Jeong-Jin; Kim, In-Young; Jung, Tae-Young; Lee, Jung-Kil; Kang, Sam-Suk

    2005-08-01

Recent developments in magnetic resonance (MR) technology now enable the use of MR venography, providing 3-dimensional (3D) images of intracranial venous structures. The purpose of this study was to assess the usefulness of 3D contrast-enhanced MR venography (CE MRV) in the evaluation of the intracranial venous system for surgical planning of brain tumors. Forty patients underwent 3D CE MRV; 25 patients also underwent 2-dimensional (2D) time-of-flight (TOF) MR venography in axial and sagittal planes, and 10 patients underwent digital subtraction angiography. We determined the number of visualized sinuses and cortical veins. The degree of visualization of the intracranial venous system on 3D CE MRV was compared with that of 2D TOF MR venography, with digital subtraction angiography as the standard. We also assessed the value of 3D CE MRV in the preoperative investigation of sinus occlusion and localization of cortical draining veins. Superficial cortical veins and the dural sinuses were better visualized on 3D CE MRV than on 2D TOF MR venography. Both MR venographic techniques visualized the superior sagittal sinus, lateral sinus, sigmoid sinus, straight sinus, and internal cerebral vein, and provided more detailed information by showing obstructed sinuses in brain tumors. Only 3D CE MRV showed superficial cortical draining veins. However, it was difficult to accurately evaluate the presence of cortical collateral venous drainage. Although we do not yet advocate MR venography to replace conventional angiography as the imaging standard for brain tumors, 3D CE MRV can be regarded as a valuable diagnostic method, particularly in evaluating the status of major sinuses and localizing cortical draining veins.

  3. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a kinematic model of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems that require coordination of vision and robotic motion.
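The depth-scaled correlation matching described in this record can be sketched in a few lines of NumPy. Everything below (the nearest-neighbour rescaling, the exhaustive search, the synthetic image) is an illustrative reconstruction of the general technique, not the flight software:

```python
import numpy as np

def rescale_template(template, scale):
    """Resize a 2D template by a depth ratio using nearest-neighbour sampling."""
    h, w = template.shape
    nh, nw = max(1, int(round(h * scale))), max(1, int(round(w * scale)))
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return template[np.ix_(rows, cols)]

def ncc_match(image, template):
    """Exhaustive zero-mean normalized cross-correlation over the search window;
    returns the top-left corner of the best match and its score."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            score = float((p * t).sum() / denom) if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# Synthetic example: the target appears twice as large in the new image
# because the rover has halved its distance to it.
rng = np.random.default_rng(0)
old_template = rng.random((8, 8))
scale = 2.0                                   # old depth / new depth
scaled_target = rescale_template(old_template, scale)
new_image = rng.random((48, 48))
new_image[20:20 + scaled_target.shape[0],
          12:12 + scaled_target.shape[1]] = scaled_target
pos, score = ncc_match(new_image, rescale_template(old_template, scale))
```

Matching with the depth-scaled template recovers the planted location; the scaling step is what keeps the correlation meaningful once the depth to the target has changed.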

  4. Visualization of unsteady computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Haimes, Robert

    1994-11-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamics (CFD) results is presented. Supercomputers, massively parallel processors (MPPs), and clusters of workstations acting as a single MPP (by working concurrently on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced-instruction-set computers (RISC) is a recent development made possible by the low cost and high performance that workstation vendors provide. With the proper software, such a cluster can act as a multiple-instruction/multiple-data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.

  5. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1994-01-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamics (CFD) results is presented. Supercomputers, massively parallel processors (MPPs), and clusters of workstations acting as a single MPP (by working concurrently on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced-instruction-set computers (RISC) is a recent development made possible by the low cost and high performance that workstation vendors provide. With the proper software, such a cluster can act as a multiple-instruction/multiple-data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.

  6. Three-dimensional analysis of the early development of the dentition

    PubMed Central

    Peterkova, R; Hovorakova, M; Peterka, M; Lesot, H

    2014-01-01

    Tooth development has attracted the attention of researchers since the 19th century. It became obvious even then that morphogenesis could not fully be appreciated from two-dimensional histological sections. Therefore, methods of three-dimensional (3D) reconstruction were employed to visualize the surface morphology of developing structures and to help appreciate the complexity of early tooth morphogenesis. The present review surveys the data provided by computer-aided 3D analyses to update classical knowledge of early odontogenesis in the laboratory mouse and in humans. 3D reconstructions have demonstrated that odontogenesis in the early stages is a complex process which also includes the development of rudimentary odontogenic structures with different fates. Their developmental, evolutionary, and pathological aspects are discussed. The combination of in situ hybridization and 3D reconstruction has demonstrated the temporo-spatial dynamics of the signalling centres that reflect the transient existence of rudimentary tooth primordia at loci where teeth were present in ancestors. The rudiments can rescue their suppressed development and revitalize; their subsequent autonomous development can then give rise to oral pathologies. This shows that the tooth-forming potential in mammals can be greater than that observed in their functional dentitions. From this perspective, the mouse rudimentary tooth primordia represent a natural model for testing the possibilities of tooth regeneration. PMID:24495023

  7. Synchrotron X-ray computed laminography of the three-dimensional anatomy of tomato leaves.

    PubMed

    Verboven, Pieter; Herremans, Els; Helfen, Lukas; Ho, Quang T; Abera, Metadel; Baumbach, Tilo; Wevers, Martine; Nicolaï, Bart M

    2015-01-01

    Synchrotron radiation computed laminography (SR-CL) is presented as an imaging method for analyzing the three-dimensional (3D) anatomy of leaves. The SR-CL method was used to provide 3D images of 1-mm² samples of intact leaves at a pixel resolution of 750 nm. The method allowed visualization and quantitative analysis of palisade and spongy mesophyll cells, and showed local venation patterns, aspects of xylem vascular structure and stomata. The method failed to image subcellular organelles such as chloroplasts. We constructed 3D computer models of leaves that can provide a basis for calculating gas exchange, light penetration and water and solute transport. The leaf anatomy of two different tomato genotypes grown in saturating light conditions was compared by 3D analysis. Differences were found in calculated values of tissue porosity, cell number density, cell area to volume ratio and cell volume and cell shape distributions of palisade and spongy cell layers. In contrast, the exposed cell area to leaf area ratio in mesophyll, a descriptor that correlates to the maximum rate of photosynthesis in saturated light conditions, was no different between spongy and palisade cells or between genotypes. The use of 3D image processing avoids many of the limitations of anatomical analysis with two-dimensional sections. © 2014 The Authors The Plant Journal © 2014 John Wiley & Sons Ltd.

  8. Visuo-Vestibular Interactions

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Session TA3 includes short reports covering: (1) Vestibulo-Oculomotor Interaction in Long-Term Microgravity; (2) Effects of Weightlessness on the Spatial Orientation of Visually Induced Eye Movements; (3) Adaptive Modification of the Three-Dimensional Vestibulo-Ocular Reflex during Prolonged Microgravity; (4) The Dynamic Change of Brain Potential Related to Selective Attention to Visual Signals from Left and Right Visual Fields; (5) Locomotor Errors Caused by Vestibular Suppression; and (6) A Novel, Image-Based Technique for Three-Dimensional Eye Measurement.

  9. Two-dimensional and three-dimensional dynamic imaging of live biofilms in a microchannel by time-of-flight secondary ion mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, Xin; Marshall, Matthew J.; Xiong, Yijia

    2015-05-01

    A vacuum-compatible microfluidic reactor, SALVI (System for Analysis at the Liquid Vacuum Interface), was employed for in situ chemical imaging of live biofilms using time-of-flight secondary ion mass spectrometry (ToF-SIMS). Depth profiling by sputtering material in sequential layers yielded spatial chemical mapping of the live biofilm. The 2D images were reconstructed into the first 3D images of a hydrated biofilm, elucidating its spatial and chemical heterogeneity. 2D image principal component analysis (PCA) was conducted among biofilms at different locations in the microchannel. Our approach directly visualized spatial and chemical heterogeneity within the living biofilm by dynamic liquid ToF-SIMS.
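The per-image PCA step mentioned here reduces each 2D map to a point in a low-dimensional score space. A generic sketch via SVD follows; the random stack below merely stands in for SIMS ion maps and is purely illustrative:

```python
import numpy as np

def image_pca(images, n_components=2):
    """PCA of a stack of same-sized 2D images via SVD.
    images: (n_samples, h, w) array. Returns (scores, components)."""
    n, h, w = images.shape
    X = images.reshape(n, h * w).astype(float)
    Xc = X - X.mean(axis=0)                           # mean-centre each pixel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # per-image PC scores
    components = Vt[:n_components]                    # principal "eigen-images"
    return scores, components

rng = np.random.default_rng(1)
stack = rng.random((10, 16, 16))     # 10 synthetic 16x16 "ion maps"
scores, comps = image_pca(stack, n_components=3)
```

Each row of `scores` summarizes one image; plotting the first two columns against each other is the usual way to compare, say, biofilm regions at different channel locations.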

  10. A compact structured light based otoscope for three dimensional imaging of the tympanic membrane

    NASA Astrophysics Data System (ADS)

    Das, Anshuman J.; Estrada, Julio C.; Ge, Zhifei; Dolcetti, Sara; Chen, Deborah; Raskar, Ramesh

    2015-02-01

    Three dimensional (3D) imaging of the tympanic membrane (TM) has been carried out using a traditional otoscope equipped with a high-definition webcam, a portable projector, and a telecentric optical system. The device allows us to project fringe patterns on the TM; the magnified image is processed using phase-shifting algorithms to arrive at a 3D description of the TM. Obtaining a 3D image of the TM can aid in the diagnosis of ear infections such as otitis media with effusion, which is essentially fluid build-up in the middle ear. The high resolution of this device makes it possible to examine a computer-generated 3D profile for abnormalities in the shape of the eardrum. This adds an additional dimension to the image obtainable from a traditional otoscope by allowing visualization of the TM from different perspectives. In this paper, we present the design and construction of this device and details of the image processing for recovering the 3D profile of the subject under test. The design of the otoscope is similar to that of the traditional device, making it ergonomically compatible and easy to adopt in clinical practice.
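The phase-shifting step can be illustrated with the standard four-step algorithm on a synthetic 1D profile. This is a generic sketch, not necessarily the exact variant used in the device:

```python
import numpy as np

# True phase of a synthetic surface, kept inside (-pi, pi) so the
# recovered wrapped phase can be compared directly (no unwrapping needed).
x = np.linspace(-3.0, 3.0, 200)
phi_true = x

# Four fringe images with phase shifts of 0, pi/2, pi, 3*pi/2:
#   I_k = A + B * cos(phi + k*pi/2)
A, B = 0.5, 0.4
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# Standard four-step recovery: phi = atan2(I4 - I2, I1 - I3)
# (here 0-indexed: I[3] - I[1] = 2B*sin(phi), I[0] - I[2] = 2B*cos(phi))
phi_rec = np.arctan2(I[3] - I[1], I[0] - I[2])
```

In a real system the recovered phase is wrapped to (-pi, pi] and must be unwrapped and calibrated before it becomes a height profile of the membrane.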

  11. Medical three-dimensional printing opens up new opportunities in cardiology and cardiac surgery.

    PubMed

    Bartel, Thomas; Rivard, Andrew; Jimenez, Alejandro; Mestres, Carlos A; Müller, Silvana

    2018-04-14

    Advanced percutaneous and surgical procedures in structural and congenital heart disease require precise pre-procedural planning and continuous quality control. Although current imaging modalities and post-processing software assist with peri-procedural guidance, their capabilities for spatial conceptualization remain limited in two- and three-dimensional representations. In contrast, 3D printing offers not only improved visualization for procedural planning, but also substantial information on the accuracy of surgical reconstruction and device implantation. Peri-procedural 3D printing has the potential to set standards of quality assurance and individualized healthcare in cardiovascular medicine and surgery. Nowadays, a variety of clinical applications are available that show how accurate 3D computer reformatting and physical 3D printouts of native anatomy, embedded pathology, and implants are, and how they may assist in the development of innovative therapies. Accurate imaging of pathology, including the target region for intervention, its anatomic features, and its spatial relation to the surrounding structures, is critical for selecting the optimal approach and evaluating procedural results. This review describes clinical applications of 3D printing, outlines current limitations, and highlights future implications for quality control, advanced medical education, and training.

  12. Visual discomfort while watching stereoscopic three-dimensional movies at the cinema.

    PubMed

    Zeri, Fabrizio; Livi, Stefano

    2015-05-01

    This study investigates discomfort symptoms while watching stereoscopic three-dimensional (S3D) movies in the 'real' condition of a cinema. In particular, it had two main objectives: to evaluate the presence and nature of visual discomfort while watching S3D movies, and to compare visual symptoms during S3D and 2D viewing. Cinema spectators of S3D or 2D films were interviewed by questionnaire at the theatre exit of different multiplex cinemas immediately after viewing a movie. A total of 854 subjects were interviewed (mean age 23.7 ± 10.9 years; range 8-81 years; 392 females and 462 males). Five hundred and ninety-nine of them viewed different S3D movies; 255 subjects viewed a 2D version of a film that 251 subjects from the S3D group had seen in S3D, giving a between-subjects design for that comparison. Exploratory factor analysis revealed two factors underlying the symptoms: an External Symptoms Factor (ESF), with a mean ± S.D. symptom score of 1.51 ± 0.58, comprising eye burning, eye ache, eye strain, eye irritation and tearing; and an Internal Symptoms Factor (ISF), with a mean ± S.D. symptom score of 1.38 ± 0.51, comprising blur, double vision, headache, dizziness and nausea. ISF and ESF were significantly correlated (Spearman r = 0.55; p = 0.001), but external symptoms were significantly higher than internal ones (Wilcoxon signed-ranks test; p = 0.001). The age of participants did not significantly affect symptoms. However, females had higher scores than males for both ESF and ISF, and myopes had higher ISF scores than hyperopes. Newly released movies produced lower ESF scores than older movies, while the seat position of spectators had minimal effect. Symptoms while viewing S3D movies were significantly and negatively correlated with the duration of wearing S3D glasses. Kruskal-Wallis results showed that symptoms were significantly greater for S3D than for 2D movies, both for ISF (p = 0.001) and for ESF (p = 0.001). In short, the analysis of the symptoms experienced by S3D movie spectators, based on retrospective visual comfort assessments, showed a higher level of external symptoms (eye burning, eye ache, tearing, etc.) than of the internal, more perceptual ones (blurred vision, double vision, headache, etc.). Furthermore, spectators of S3D movies reported statistically higher symptoms than 2D spectators. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  13. Simultaneous reconstruction of 3D refractive index, temperature, and intensity distribution of combustion flame by double computed tomography technologies based on spatial phase-shifting method

    NASA Astrophysics Data System (ADS)

    Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei

    2017-06-01

    In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifting technology are combined to reconstruct, simultaneously, the various physical parameter distributions of a propane flame. Two cameras, triggered in internal trigger mode, capture the projection information of the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution in the moiré fringes. Using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. Finally, the 3D temperature distribution of the flame is obtained from the refractive-index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed from the radiation projections of the emission tomography. Thus, the structure and edge information of the propane flame are well visualized.
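The final step, converting the reconstructed refractive index to temperature via the Gladstone-Dale relation combined with the ideal-gas law, is compact enough to show directly. The constants below are approximate textbook values for air, not the paper's calibration:

```python
# Gladstone-Dale relation: n - 1 = K * rho. With the ideal-gas law
# rho = P * M / (R * T), temperature follows as T = P * M * K / (R * (n - 1)).

K = 2.26e-4   # Gladstone-Dale constant for air, m^3/kg (approximate, visible light)
P = 101325.0  # ambient pressure, Pa
M = 0.02896   # molar mass of air, kg/mol
R = 8.314     # universal gas constant, J/(mol*K)

def temperature_from_index(n):
    """Temperature (K) from a reconstructed refractive index, ideal gas assumed."""
    return P * M * K / (R * (n - 1.0))

def index_from_temperature(T):
    """Inverse relation, handy as a round-trip sanity check."""
    return 1.0 + P * M * K / (R * T)

n_hot = index_from_temperature(1800.0)   # index of gas at a flame-like 1800 K
T_back = temperature_from_index(n_hot)   # recovers 1800 K
```

Note that hotter (less dense) gas has a refractive index closer to 1, so small errors in the reconstructed index translate into large temperature errors at flame temperatures.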

  14. Program Package for 3d PIC Model of Plasma Fiber

    NASA Astrophysics Data System (ADS)

    Kulhánek, Petr; Břeň, David

    2007-08-01

    A fully three-dimensional Particle-in-Cell (PIC) model of a plasma fiber has been developed. The code is written in Fortran 95 (Compaq Visual Fortran implementation) under the Microsoft Visual Studio user interface. Five particle solvers and two field solvers are included in the model; the solvers have relativistic and non-relativistic variants. The model can handle both periodic and non-periodic boundary conditions. The mechanism of surface turbulence generation in the plasma fiber was successfully simulated with the PIC program package.

  15. Automatic delineation and 3D visualization of the human ventricular system using probabilistic neural networks

    NASA Astrophysics Data System (ADS)

    Hatfield, Fraser N.; Dehmeshki, Jamshid

    1998-09-01

    Neurosurgery is an extremely specialized area of medical practice, requiring many years of training. It has been suggested that virtual reality models of the complex structures within the brain may aid in the training of neurosurgeons as well as playing an important role in the preparation for surgery. This paper focuses on the application of a probabilistic neural network to the automatic segmentation of the ventricles from magnetic resonance images of the brain, and their three dimensional visualization.

  16. Evolution of stereoscopic imaging in surgery and recent advances

    PubMed Central

    Schwab, Katie; Smith, Ralph; Brown, Vanessa; Whyte, Martin; Jourdan, Iain

    2017-01-01

    In the late 1980s the first laparoscopic cholecystectomies were performed, prompting a sudden rise in technological innovations as the benefits and feasibility of minimal access surgery became recognised. Monocular laparoscopes provided only two-dimensional (2D) viewing with reduced depth perception, which contributed to an extended learning curve. Attention turned to producing a usable three-dimensional (3D) endoscopic view for surgeons, utilising different technologies for image capture and image projection. These evolving visual systems have been assessed in various research environments with conflicting outcomes of success and usability, and no overall consensus on their benefit. This review article aims to explain the different types of technologies, summarise the published literature evaluating 3D vs 2D laparoscopy, explain the conflicting outcomes, and discuss the current consensus view. PMID:28874957

  17. Augmented Reality in Scientific Publications-Taking the Visualization of 3D Structures to the Next Level.

    PubMed

    Wolle, Patrik; Müller, Matthias P; Rauh, Daniel

    2018-03-16

    The examination of three-dimensional structural models in scientific publications allows the reader to validate or invalidate conclusions drawn by the authors. However, whether due to a (temporary) lack of access to proper visualization software or a lack of proficiency, this information is not necessarily available to every reader. As the digital revolution progresses rapidly, technologies have become widely available that overcome these limitations and offer everyone the opportunity to appreciate models not only in 2D, but also in 3D. Additionally, mobile devices such as smartphones and tablets allow access to this information almost anywhere, at any time. Since access to such information has only recently become standard practice, we want to outline straightforward ways to incorporate 3D models in augmented reality into scientific publications, books, posters, and presentations, and suggest that this should become general practice.

  18. Three-dimensional Talairach-Tournoux brain atlas

    NASA Astrophysics Data System (ADS)

    Fang, Anthony; Nowinski, Wieslaw L.; Nguyen, Bonnie T.; Bryan, R. Nick

    1995-04-01

    The Talairach-Tournoux Stereotaxic Atlas of the human brain is a frequently consulted resource in stereotaxic neurosurgery and computer-based neuroradiology. Its primary application lies in the 2-D analysis and interpretation of neurological images. However, for the analysis and visualization of shapes and forms, accurate mensuration of volumes, or 3-D model matching, a 3-D representation of the atlas is essential. This paper proposes and describes, along with its difficulties, a 3-D geometric extension of the atlas. We introduce a `zero-potential' surface smoothing technique, along with a space-dependent convolution kernel and space-dependent normalization. The mesh-based atlas structures are hierarchically organized and anatomically conform to the original atlas. Structures and their constituents can be independently selected and manipulated in real time within an integrated system. The extended atlas may be navigated by itself, or interactively registered with patient data using the proportional grid system (piecewise linear) transformation. Visualization of the geometric atlas along with patient data gives a remarkable visual `feel' of the biological structures, not usually perceivable to the untrained eye in conventional 2-D atlas-to-image analysis.

  19. ART 3.5D: an algorithm to label arteries and veins from three-dimensional angiography.

    PubMed

    Barra, Beatrice; De Momi, Elena; Ferrigno, Giancarlo; Pero, Guglielmo; Cardinale, Francesco; Baselli, Giuseppe

    2016-10-01

    Preoperative three-dimensional (3-D) visualization of brain vasculature by digital subtraction angiography from computerized tomography (CT) in neurosurgery is gaining more and more importance, since vessels are the primary landmarks both for organs at risk and for navigation. Surgical embolization of cerebral aneurysms and arteriovenous malformations, epilepsy surgery, and stereoelectroencephalography are a few examples. Contrast-enhanced cone-beam computed tomography (CE-CBCT) represents a powerful facility, since it is capable of acquiring images in the operating room shortly before surgery. However, standard 3-D reconstructions do not provide a direct distinction between arteries and veins, which is of utmost importance and has so far been left to the surgeon's inference. Pioneering attempts with true four-dimensional (4-D) CT perfusion scans have been described, though at the expense of longer acquisition protocols, higher dosages, and substantial resolution losses. Hence, space is open for approaches that attempt to recover the contrast dynamics from standard CE-CBCT, on the basis of anomalies overlooked in the standard 3-D approach. This paper presents the algebraic reconstruction technique (ART) 3.5D, a method that overcomes the clinical limitations of 4-D CT using standard 3-D CE-CBCT scans. The strategy works on the 3-D angiography, previously segmented in the standard way, and reprocesses the dynamics hidden in the raw data to recover an approximate dynamics in each segmented voxel. Next, a classification algorithm labels the angiographic voxels as artery or vein. Numerical simulations were performed on a digital phantom of a simplified 3-D vasculature with contrast transit. CE-CBCT projections were simulated and used for testing ART 3.5D. We achieved up to 90% classification accuracy in simulations, proving the feasibility of the presented approach for recovering dynamic information for artery and vein segmentation.

  20. Enhancing Learning Using 3D Printing: An Alternative to Traditional Student Project Methods

    ERIC Educational Resources Information Center

    McGahern, Patricia; Bosch, Frances; Poli, DorothyBelle

    2015-01-01

    Student engagement during the development of a three-dimensional visual aid or teaching model can vary for a number of reasons. Some students report that they are not "creative" or "good at art," often as an excuse to justify less professional outcomes. Student engagement can be low when using traditional methods to produce a…

  1. Effect of the retinal size of a peripheral cue on attentional orienting in two- and three-dimensional worlds.

    PubMed

    Jiang, Yizhou; Li, Sijie; Li, You; Zeng, Hang; Chen, Qi

    2016-07-01

    It has been documented that due to limited attentional resources, the size of the attentional focus is inversely correlated with processing efficiency. Moreover, by adopting a variety of two-dimensional size illusions induced by pictorial depth cues (e.g., the Ponzo illusion), previous studies have revealed that the perceived, rather than the retinal, size of an object determines its detection. It remains unclear, however, whether and how the retinal versus perceived size of a cue influences the process of attentional orienting to subsequent targets, and whether the corresponding influencing processes differ between two-dimensional (2-D) and three-dimensional (3-D) space. In the present study, we incorporated the dot probe paradigm with either a 2-D Ponzo illusion, induced by pictorial depth cues, or a virtual 3-D world in which the Ponzo illusion turned into visual reality. By varying the retinal size of the cue while keeping its perceived size constant (Exp. 1), we found that a cue with smaller retinal size significantly facilitated attentional orienting as compared to a cue with larger retinal size, and that the effects were comparable between 2-D and 3-D displays. Furthermore, when the pictorial background was removed and the cue display was positioned in either the farther or the closer depth plane (Exp. 2), or when both the depth and the background were removed (Exp. 3), the retinal size, rather than the depth, of the cue still affected attentional orienting. Taken together, our results suggest that the retinal size of a cue plays the crucial role in the visuospatial orienting of attention in both 2-D and 3-D space.

  2. Smooth 2D manifold extraction from 3D image stack

    PubMed Central

    Shihavuddin, Asm; Basu, Sreetama; Rexhepaj, Elton; Delestro, Felipe; Menezes, Nikita; Sigoillot, Séverine M; Del Nery, Elaine; Selimi, Fekrije; Spassky, Nathalie; Genovesio, Auguste

    2017-01-01

    Three-dimensional fluorescence microscopy followed by image processing is routinely used to study biological objects at various scales, such as cells and tissue. However, maximum intensity projection, the most broadly used rendering tool, extracts a discontinuous layer of voxels, inadvertently creating significant artifacts and possibly misleading interpretation. Here we propose smooth manifold extraction, an algorithm that produces a continuous focused 2D extraction from a 3D volume, hence preserving local spatial relationships. We demonstrate the usefulness of our approach by applying it to various biological applications using confocal and wide-field microscopy 3D image stacks. We provide a parameter-free ImageJ/Fiji plugin that allows 2D visualization and interpretation of 3D image stacks with maximum accuracy. PMID:28561033
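The maximum intensity projection baseline this record contrasts with is a one-liner in NumPy, and the discontinuous depth selection it implies is easy to see. The crude height-map smoothing at the end only hints at the idea of a continuous manifold; it is not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
Z, H, W = 12, 32, 32
stack = rng.random((Z, H, W)) * 0.1                  # dim background noise

# Plant a bright, gently tilted "sheet" of signal through the volume.
zz = (4 + (np.arange(W) / W) * 4).astype(int)        # depth varies with x
for x in range(W):
    stack[zz[x], :, x] = 1.0

# Maximum intensity projection and the (discontinuous) depth it selects.
mip = stack.max(axis=0)                              # 2D projection, shape (H, W)
depth = stack.argmax(axis=0)                         # voxel layer chosen per pixel

# Naively smoothing that depth map, then resampling the volume along it,
# sketches the idea of extracting a continuous 2D manifold instead.
kernel = np.ones(5) / 5.0
smooth_depth = np.apply_along_axis(
    lambda r: np.convolve(r, kernel, mode="same"), 1, depth.astype(float))
layer = np.rint(smooth_depth).astype(int).clip(0, Z - 1)
smooth_extract = np.take_along_axis(stack, layer[None], axis=0)[0]
```

The jumps in `depth` are exactly where an MIP can splice together voxels from unrelated structures; a continuous extraction keeps neighbouring pixels on nearby layers.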

  3. 3D geospatial visualizations: Animation and motion effects on spatial objects

    NASA Astrophysics Data System (ADS)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (the virtual globe) and an impressive navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). Going one step further, WebGL frameworks (e.g. Cesium.js, three.js) allow animation and motion effects to be applied to 3D models. However, major GIS-based functionalities combined with these visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) or motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this end, we developed, and made available to the research community, an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths, and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.

  4. A confusing world: what to call histology of three-dimensional tumour margins?

    PubMed

    Moehrle, M; Breuninger, H; Röcken, M

    2007-05-01

    Complete three-dimensional histology of excised skin tumour margins has a long tradition and, unfortunately, a multitude of names as well. Mohs, who introduced it, called it 'microscopically controlled surgery'. Others have described it as 'micrographic surgery', 'Mohs' micrographic surgery', or simply 'Mohs' surgery'. Semantic confusion became truly rampant when variant forms, each useful in its own way for detecting subclinical outgrowths of malignant skin tumours, were later introduced under such names as histographic surgery, systematic histologic control of the tumour bed, histological control of excised tissue margins, the square procedure, the perimeter technique, etc. All of these methods are basically identical in concept. All involve complete, three-dimensional histological visualization and evaluation of excision margins. Their common goal is to detect unseen tumour outgrowths. For greater clarity, the authors of this paper recommend general adoption of '3D histology' as a collective designation for all the above methods. As an added advantage, 3D histology can also be used in other medical disciplines to confirm true R0 resection of, for example, breast cancer or intestinal cancer.

  5. Cryo-electron microscopy and cryo-electron tomography of nanoparticles.

    PubMed

    Stewart, Phoebe L

    2017-03-01

    Cryo-transmission electron microscopy (cryo-TEM or cryo-EM) and cryo-electron tomography (cryo-ET) offer robust and powerful ways to visualize nanoparticles. These techniques involve imaging of the sample in a frozen-hydrated state, allowing visualization of nanoparticles essentially as they exist in solution. Cryo-TEM grid preparation can be performed with the sample in aqueous solvents or in various organic and ionic solvents. Two-dimensional (2D) cryo-TEM provides a direct way to visualize the polydispersity within a nanoparticle preparation. Fourier transforms of cryo-TEM images can confirm the structural periodicity within a sample. While measurement of specimen parameters can be performed with 2D TEM images, determination of a three-dimensional (3D) structure often facilitates more spatially accurate quantization. 3D structures can be determined in one of two ways. If the nanoparticle has a homogeneous structure, then 2D projection images of different particles can be averaged using a computational process referred to as single particle reconstruction. Alternatively, if the nanoparticle has a heterogeneous structure, then a structure can be generated by cryo-ET. This involves collecting a tilt-series of 2D projection images for a defined region of the grid, which can be used to generate a 3D tomogram. Occasionally it is advantageous to calculate both a single particle reconstruction, to reveal the regular portions of a nanoparticle structure, and a cryo-electron tomogram, to reveal the irregular features. A sampling of 2D cryo-TEM images and 3D structures are presented for protein based, DNA based, lipid based, and polymer based nanoparticles. WIREs Nanomed Nanobiotechnol 2017, 9:e1417. doi: 10.1002/wnan.1417 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  6. Volcanic Gas Emissions Mapping Using a Mass Spectrometer System

    NASA Technical Reports Server (NTRS)

    Griffin, Timothy P.; Diaz, J. Andres

    2008-01-01

    The visualization of hazardous gaseous emissions at volcanoes using in-situ mass spectrometry (MS) is a key step towards a better comprehension of the geophysical phenomena surrounding eruptive activity. In-situ gas data, consisting of helium, carbon dioxide, sulfur dioxide, and other gas species, were acquired with an MS system. MS and global positioning system (GPS) data were plotted on ground imagery, topography, and remote sensing data collected by a host of instruments during the second Costa Rica Airborne Research and Technology Applications (CARTA) mission. This combination of gas and imaging data allowed three-dimensional (3-D) visualization of the volcanic plume and the mapping of gas concentration at several volcanic structures and urban areas. This combined set of data has proved a better tool for assessing hazardous conditions through visualization and modeling of possible scenarios of volcanic activity. The MS system is used for in-situ measurement of three-dimensional gas concentrations at different volcanic locations from three different transportation platforms: aircraft, automobile, and hand-carried. A demonstration of urban contamination mapping is also presented as another possible use for the MS system.

  7. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    PubMed Central

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity. PMID:23366954
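
The workflow the abstract describes (dimensionality reduction followed by navigable 2D projections) can be sketched in a few lines of numpy. This is a hypothetical stand-in, not DataHigh itself (which is a Matlab GUI): PCA reduces fake population activity to an 8-D latent space, and one random orthonormal plane gives a single 2D view of the kind the GUI lets the user sweep through.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "population activity": 50 trials x 80 neurons, driven by 8 latents.
latents = rng.normal(size=(50, 8))
mixing = rng.normal(size=(8, 80))
activity = latents @ mixing + 0.1 * rng.normal(size=(50, 80))

# Dimensionality reduction via PCA (SVD of the centered data).
centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:8].T   # 50 trials in the 8-D latent space

# One 2D view: project onto a random orthonormal plane in the 8-D space,
# analogous to a single projection the GUI would display.
plane, _ = np.linalg.qr(rng.normal(size=(8, 2)))
view2d = reduced @ plane        # 50 points, ready to plot

print(reduced.shape, view2d.shape)
```

Sweeping `plane` continuously through the space of orthonormal 2-frames is what produces the "continuum of different 2D projections" the abstract mentions.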

  8. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    PubMed

    Cowley, Benjamin R; Kaufman, Matthew T; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity.

  9. Ultrahigh-Speed Optical Coherence Tomography for Three-Dimensional and En Face Imaging of the Retina and Optic Nerve Head

    PubMed Central

    Srinivasan, Vivek J.; Adler, Desmond C.; Chen, Yueli; Gorczynska, Iwona; Huber, Robert; Duker, Jay S.; Schuman, Joel S.; Fujimoto, James G.

    2009-01-01

    Purpose To demonstrate ultrahigh-speed optical coherence tomography (OCT) imaging of the retina and optic nerve head at 249,000 axial scans per second and a wavelength of 1060 nm. To investigate methods for visualization of the retina, choroid, and optic nerve using high-density sampling enabled by improved imaging speed. Methods A swept-source OCT retinal imaging system operating at a speed of 249,000 axial scans per second was developed. Imaging of the retina, choroid, and optic nerve was performed. Display methods such as speckle reduction, slicing along arbitrary planes, en face visualization of reflectance from specific retinal layers, and image compounding were investigated. Results High-definition and three-dimensional (3D) imaging of the normal retina and optic nerve head were performed. Increased light penetration at 1060 nm enabled improved visualization of the choroid, lamina cribrosa, and sclera. OCT fundus images and 3D visualizations were generated with higher pixel density and fewer motion artifacts than standard spectral/Fourier domain OCT. En face images enabled visualization of the porous structure of the lamina cribrosa, nerve fiber layer, choroid, photoreceptors, RPE, and capillaries of the inner retina. Conclusions Ultrahigh-speed OCT imaging of the retina and optic nerve head at 249,000 axial scans per second is possible. The improvement of ∼5 to 10× in imaging speed over commercial spectral/Fourier domain OCT technology enables higher density raster scan protocols and improved performance of en face visualization methods. The combination of the longer wavelength and ultrahigh imaging speed enables excellent visualization of the choroid, sclera, and lamina cribrosa. PMID:18658089

  10. Clinical validation of coronal and sagittal spinal curve measurements based on three-dimensional vertebra vector parameters.

    PubMed

    Somoskeöy, Szabolcs; Tunyogi-Csapó, Miklós; Bogyó, Csaba; Illés, Tamás

    2012-10-01

    For many decades, visualization and evaluation of three-dimensional (3D) spinal deformities have only been possible by two-dimensional (2D) radiodiagnostic methods, and as a result, characterization and classification were based on 2D terminologies. Recent developments in medical digital imaging and 3D visualization techniques including surface 3D reconstructions opened a chance for a long-sought change in this field. Supported by a 3D Terminology on Spinal Deformities of the Scoliosis Research Society, an approach for 3D measurements and a new 3D classification of scoliosis yielded several compelling concepts on 3D visualization and new proposals for 3D classification in recent years. More recently, a new proposal for visualization and complete 3D evaluation of the spine by 3D vertebra vectors has been introduced by our workgroup, a concept, based on EOS 2D/3D, a groundbreaking new ultralow radiation dose integrated orthopedic imaging device with sterEOS 3D spine reconstruction software. Comparison of accuracy, correlation of measurement values, intraobserver and interrater reliability of methods by conventional manual 2D and vertebra vector-based 3D measurements in a routine clinical setting. Retrospective, nonrandomized study of diagnostic X-ray images created as part of a routine clinical protocol of eligible patients examined at our clinic during a 30-month period between July 2007 and December 2009. In total, 201 individuals (170 females, 31 males; mean age, 19.88 years) including 10 healthy athletes with normal spine and patients with adolescent idiopathic scoliosis (175 cases), adult degenerative scoliosis (11 cases), and Scheuermann hyperkyphosis (5 cases). Overall range of coronal curves was between 2.4 and 117.5°. 
Analysis of accuracy and reliability of measurements was carried out on a group of all patients and in subgroups based on coronal plane deviation: 0 to 10° (Group 1; n=36), 10 to 25° (Group 2; n=25), 25 to 50° (Group 3; n=69), 50 to 75° (Group 4; n=49), and above 75° (Group 5; n=22). All study subjects were examined by EOS 2D imaging, resulting in anteroposterior (AP) and lateral (LAT) full spine, orthogonal digital X-ray images, in standing position. Conventional coronal and sagittal curvature measurements including sagittal L5 vertebra wedges were determined by 3 experienced examiners, using traditional Cobb methods on EOS 2D AP and LAT images. Vertebra vector-based measurements were performed as published earlier, based on computer-assisted calculations of corresponding spinal curvature. Vertebra vectors were generated by dedicated software from sterEOS 3D spine models reconstructed from EOS 2D images by the same three examiners. Manual measurements were performed by each examiner, thrice for sterEOS 3D reconstructions and twice for vertebra vector-based measurements. Means comparison t test, Pearson bivariate correlation analysis, and reliability analysis by intraclass correlation coefficients for intraobserver reproducibility and interrater reliability were performed using SPSS v16.0 software. In comparison with manual 2D methods, only small and nonsignificant differences were detectable in vertebra vector-based curvature data for coronal curves and thoracic kyphosis, whereas the observed difference in L1-L5 lordosis values was shown to be strongly related to the magnitude of the corresponding L5 wedge. Intraobserver reliability was excellent for both methods, and interrater reproducibility was consistently higher for vertebra vector-based methods, which were also found to be unaffected by the magnitude of coronal curves or sagittal plane deviations. 
Vertebra vector-based angulation measurements could fully substitute conventional manual 2D measurements, with similar accuracy and higher intraobserver reliability and interrater reproducibility. Vertebra vectors represent a truly 3D solution for clear and comprehensible 3D visualization of spinal deformities while preserving crucial parametric information for vertebral size, 3D position, orientation, and rotation. The concept of vertebra vectors may serve as a starting point to a valid and clinically useful alternative for a new 3D classification of scoliosis. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Comparison of three-dimensional vs. conventional radiotherapy in saving optic tract in paranasal sinus tumors.

    PubMed

    Kamian, S; Kazemian, A; Esfahani, M; Mohammadi, E; Aghili, M

    2010-01-01

    To assess the possibility of delivering a homogeneous irradiation with respect to the maximal tolerated dose to the optic pathway for paranasal sinus (PNS) tumors. Treatment planning with conformal three-dimensional (3D) and conventional two-dimensional (2D) techniques was done on CT scans of 20 patients who had early or advanced PNS tumors. Four cases had been previously irradiated. Dose-volume histograms (DVH) for the planning target volume (PTV) and the visual pathway including globes, chiasma and optic nerves were compared between the two treatment plans. The area under curve (AUC) in the DVH of the globes on the same side and contralateral side of tumor involvement was significantly higher in 2D planning (p <0.05), which caused a higher integral dose to both globes. Also, the AUC in the DVH of the chiasma was higher in 2D treatment planning (p=0.002). The integral dose to the contralateral optic nerve was significantly lower with 3D planning (p=0.007), but there was no significant difference for the optic nerve on the same side of tumor involvement (p >0.05). The difference in AUC in the DVH of the PTV was not significant (201.1 ± 16.23 mm³ in 2D planning vs. 201.15 ± 15.09 mm³ in 3D planning). The volume of the PTV which received 90% of the prescribed dose was 96.9 ± 4.41 cm³ in 2D planning and 97.2 ± 2.61 cm³ in 3D planning (p >0.05). 3D conformal radiotherapy (RT) for PNS tumors enables the delivery of radiation to the tumor with respect to critical organs, with lower toxicity to the optic pathway.

  12. Human red blood cell recognition enhancement with three-dimensional morphological features obtained by digital holographic imaging

    NASA Astrophysics Data System (ADS)

    Jaferzadeh, Keyvan; Moon, Inkyu

    2016-12-01

    The classification of erythrocytes plays an important role in the field of hematological diagnosis, specifically blood disorders. Since the biconcave shape of the red blood cell (RBC) is altered during the different stages of hematological disorders, we believe that three-dimensional (3-D) morphological features of the erythrocyte provide better classification results than conventional two-dimensional (2-D) features. Therefore, we introduce a set of 3-D features related to the morphological and chemical properties of the RBC profile and evaluate the discrimination power of these features against 2-D features with a neural network classifier. The 3-D features include erythrocyte surface area, volume, average cell thickness, sphericity index, sphericity coefficient and functionality factor, MCH and MCHSD, and two newly introduced features extracted from the ring section of the RBC at the single-cell level. In contrast, the 2-D features are RBC projected surface area, perimeter, radius, elongation, and projected surface area to perimeter ratio. All features are obtained from images visualized by off-axis digital holographic microscopy with a numerical reconstruction algorithm, and four categories of RBCs are considered: biconcave (doughnut shape), flat-disc, stomatocyte, and echinospherocyte. Our experimental results demonstrate that the 3-D features can be more useful in RBC classification than the 2-D features. Finally, we choose the best feature set from the 2-D and 3-D features by a sequential forward feature selection technique, which yields better discrimination results. We believe that the final feature set evaluated with a neural network classification strategy can improve RBC classification accuracy.
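
Sequential forward feature selection, mentioned at the end of the abstract, is straightforward to sketch: greedily add the feature that most improves a classification criterion. The criterion below is a simple nearest-centroid accuracy rather than the paper's neural network classifier, and the toy data are invented:

```python
import numpy as np

def nearest_centroid_accuracy(X, y):
    """Training accuracy of a nearest-centroid rule; a simple stand-in
    for the neural-network criterion used in the paper."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return (classes[d.argmin(axis=1)] == y).mean()

def sequential_forward_selection(X, y, k):
    """Greedily grow a feature set of size k, one best feature at a time."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = [nearest_centroid_accuracy(X[:, selected + [j]], y)
                  for j in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 0 separates the two classes, features 1-3 are noise.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 40)
X = rng.normal(size=(80, 4))
X[:, 0] += 3 * y
print(sequential_forward_selection(X, y, 2))
```

On this toy data the informative feature 0 is picked first; with real 2-D/3-D RBC features the same loop would rank them by discrimination power.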

  13. Visualization of stereoscopic anatomic models of the paranasal sinuses and cervical vertebrae from the surgical and procedural perspective.

    PubMed

    Chen, Jian; Smith, Andrew D; Khan, Majid A; Sinning, Allan R; Conway, Marianne L; Cui, Dongmei

    2017-11-01

    Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including the facial skull, nasal cavity, septum, turbinates, paranasal sinuses, optic nerve, pituitary gland, carotid artery, cervical vertebrae, atlanto-axial joint, cervical spinal cord, cervical nerve roots, and vertebral artery that can be used to teach clinical trainees (students, residents, and fellows) approaches for trans-sphenoidal pituitary surgery and cervical spine injection procedures. Volume rendering, surface rendering, and a new rendering technique, semi-auto-combined, were applied in the study. These models enable visualization, manipulation, and interaction on a computer and can be presented in a stereoscopic 3D virtual environment, which makes users feel as if they are inside the model. Anat Sci Educ 10: 598-606. © 2017 American Association of Anatomists.

  14. Making data matter: Voxel printing for the digital fabrication of data across scales and domains.

    PubMed

    Bader, Christoph; Kolb, Dominik; Weaver, James C; Sharma, Sunanda; Hosny, Ahmed; Costa, João; Oxman, Neri

    2018-05-01

    We present a multimaterial voxel-printing method that enables the physical visualization of data sets commonly associated with scientific imaging. Leveraging voxel-based control of multimaterial three-dimensional (3D) printing, our method enables additive manufacturing of discontinuous data types such as point cloud data, curve and graph data, image-based data, and volumetric data. By converting data sets into dithered material deposition descriptions, through modifications to rasterization processes, we demonstrate that data sets frequently visualized on screen can be converted into physical, materially heterogeneous objects. Our approach alleviates the need to postprocess data sets to boundary representations, preventing alteration of data and loss of information in the produced physicalizations. Therefore, it bridges the gap between digital information representation and physical material composition. We evaluate the visual characteristics and features of our method, assess its relevance and applicability in the production of physical visualizations, and detail the conversion of data sets for multimaterial 3D printing. We conclude with exemplary 3D-printed data sets produced by our method pointing toward potential applications across scales, disciplines, and problem domains.
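
The conversion of continuous data into "dithered material deposition descriptions" can be illustrated with classic Floyd-Steinberg error diffusion on a single 2D slice, choosing between two materials per voxel so that local material density tracks the data value. This is a generic sketch of dithering, not the authors' rasterization pipeline:

```python
import numpy as np

def floyd_steinberg(slice2d):
    """Dither a [0, 1] grayscale slice to a binary material mask by error
    diffusion, preserving local density without hard-threshold banding."""
    img = slice2d.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1 if img[y, x] >= 0.5 else 0
            err = img[y, x] - out[y, x]
            # Push the quantization error onto unvisited neighbors.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A uniform 30% density field should dither to roughly 30% "material A" voxels.
mask = floyd_steinberg(np.full((64, 64), 0.3))
print(round(float(mask.mean()), 2))  # close to 0.3
```

Repeating this per slice (or per voxel layer, per material channel) yields the kind of dithered deposition description a multimaterial printer can consume directly.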

  15. AstroVis: Visualizing astronomical data cubes

    NASA Astrophysics Data System (ADS)

    Finniss, Stephen; Tyler, Robin; Questiaux, Jacques

    2016-08-01

    AstroVis enables rapid visualization of large data files on platforms supporting the OpenGL rendering library. Radio astronomical observations are typically three dimensional and stored as data cubes. AstroVis implements a scalable approach to accessing these files using three components: a File Access Component (FAC) that reduces the impact of reading time, which speeds up access to the data; the Image Processing Component (IPC), which breaks up the data cube into smaller pieces that can be processed locally and gives a representation of the whole file; and Data Visualization, which implements an Overview + Detail approach to reduce the dimensions of the data being worked with and the amount of memory required to store it. The result is a 3D display paired with a 2D detail display that contains a small subsection of the original file in full resolution without reducing the data in any way.
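
The Overview + Detail idea can be sketched independently of AstroVis: a block-averaged low-resolution overview of the full cube, plus a full-resolution subcube for the detail display. The function names and sizes here are illustrative assumptions, not AstroVis API:

```python
import numpy as np

def overview(cube, factor):
    """Block-averaged low-resolution overview of a 3D data cube."""
    z, y, x = (s - s % factor for s in cube.shape)  # trim to a multiple
    c = cube[:z, :y, :x]
    return c.reshape(z // factor, factor,
                     y // factor, factor,
                     x // factor, factor).mean(axis=(1, 3, 5))

def detail(cube, corner, size):
    """Full-resolution subcube for the detail display."""
    z0, y0, x0 = corner
    return cube[z0:z0 + size, y0:y0 + size, x0:x0 + size]

cube = np.random.default_rng(2).normal(size=(64, 64, 64))
print(overview(cube, 4).shape)            # (16, 16, 16)
print(detail(cube, (8, 8, 8), 16).shape)  # (16, 16, 16)
```

The overview costs 1/64 of the memory at factor 4, while the detail view keeps the selected region untouched, matching the "without reducing the data in any way" claim.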

  16. Insights into the three-dimensional Lagrangian geometry of the Antarctic polar vortex

    NASA Astrophysics Data System (ADS)

    Curbelo, Jezabel; José García-Garrido, Víctor; Mechoso, Carlos Roberto; Mancho, Ana Maria; Wiggins, Stephen; Niang, Coumba

    2017-07-01

    In this paper we study the three-dimensional (3-D) Lagrangian structures in the stratospheric polar vortex (SPV) above Antarctica. We analyse and visualize these structures using Lagrangian descriptor function M. The procedure for calculation with reanalysis data is explained. Benchmarks are computed and analysed that allow us to compare 2-D and 3-D aspects of Lagrangian transport. Dynamical systems concepts appropriate to 3-D, such as normally hyperbolic invariant curves, are discussed and applied. In order to illustrate our approach we select an interval of time in which the SPV is relatively undisturbed (August 1979) and an interval of rapid SPV changes (October 1979). Our results provide new insights into the Lagrangian structure of the vertical extension of the stratospheric polar vortex and its evolution. Our results also show complex Lagrangian patterns indicative of strong mixing processes in the upper troposphere and lower stratosphere. Finally, during the transition to summer in the late spring, we illustrate the vertical structure of two counterrotating vortices, one the polar and the other an emerging one, and the invariant separatrix that divides them.
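
The arc-length Lagrangian descriptor M used in such studies integrates the speed along a trajectory forward and backward in time from each initial condition. Below is a minimal sketch with a steady solid-body rotation field and Euler integration; the reanalysis-data machinery of the paper is omitted and all names are illustrative:

```python
import numpy as np

def lagrangian_descriptor(x0, velocity, tau=2.0, dt=0.01):
    """Arc-length Lagrangian descriptor M: trajectory length through x0,
    integrated forward and backward over [-tau, tau] (explicit Euler)."""
    total = 0.0
    for sign in (+1.0, -1.0):
        x = np.array(x0, dtype=float)
        for _ in range(int(tau / dt)):
            v = velocity(x)
            total += np.linalg.norm(v) * dt
            x += sign * v * dt
    return total

# Solid-body rotation: speed grows with radius, so M does too.
rotation = lambda x: np.array([-x[1], x[0]])
print(lagrangian_descriptor([1.0, 0.0], rotation))  # ≈ 4 (speed ≈ 1 over total time 4)
print(lagrangian_descriptor([2.0, 0.0], rotation))  # ≈ 8
```

Plotting M over a grid of initial conditions reveals the invariant manifolds as sharp ridges or discontinuities, which is how the Lagrangian structures in the abstract are visualized.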

  17. Real-time three-dimensional color Doppler echocardiography for characterizing the spatial velocity distribution and quantifying the peak flow rate in the left ventricular outflow tract

    NASA Technical Reports Server (NTRS)

    Tsujino, H.; Jones, M.; Shiota, T.; Qin, J. X.; Greenberg, N. L.; Cardon, L. A.; Morehead, A. J.; Zetts, A. D.; Travaglini, A.; Bauer, F.

    2001-01-01

    Quantification of flow with pulsed-wave Doppler assumes a "flat" velocity profile in the left ventricular outflow tract (LVOT), an assumption refuted by observation. Recent development of real-time, three-dimensional (3-D) color Doppler allows one to obtain an entire cross-sectional velocity distribution of the LVOT, which is not possible using conventional 2-D echo. In an animal experiment, cross-sectional color Doppler images of the LVOT at peak systole were derived and digitally transferred to a computer to visualize and quantify spatial velocity distributions and peak flow rates. Markedly skewed profiles, with higher velocities toward the septum, were consistently observed. Reference peak flow rates by electromagnetic flow meter correlated well with 3-D peak flow rates (r = 0.94), but with an anticipated underestimation. Real-time 3-D color Doppler echocardiography was capable of determining cross-sectional velocity distributions and peak flow rates, demonstrating the utility of this new method for better understanding and quantifying blood flow phenomena.

  18. A Review on Real-Time 3D Ultrasound Imaging Technology

    PubMed Central

    Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area and hence is necessary in intraoperative ultrasound examinations. Many publications have described real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review on how to design an interactive system with appropriate processing algorithms remains missing, resulting in the lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067

  19. A Review on Real-Time 3D Ultrasound Imaging Technology.

    PubMed

    Huang, Qinghua; Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area and hence is necessary in intraoperative ultrasound examinations. Many publications have described real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review on how to design an interactive system with appropriate processing algorithms remains missing, resulting in the lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail.

  20. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The performed reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data that can enhance the estimation of sex from osteological material.

  1. Radio Frequency Ablation Registration, Segmentation, and Fusion Tool

    PubMed Central

    McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.

    2008-01-01

    The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716
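
The sphere-packing step in RFA planning can be illustrated as a voxel coverage check: given candidate probe positions, what fraction of the tumor volume falls inside the simulated spherical burns? This is a toy sketch, unrelated to the actual RFAST/MIPAV API, with invented geometry:

```python
import numpy as np

def coverage(tumor_mask, probe_centers, burn_radius):
    """Fraction of tumor voxels covered by simulated spherical burns,
    a toy version of the sphere-packing step in RFA planning."""
    zz, yy, xx = np.indices(tumor_mask.shape)
    covered = np.zeros_like(tumor_mask, dtype=bool)
    for cz, cy, cx in probe_centers:
        covered |= ((zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
                    <= burn_radius ** 2)
    return (covered & tumor_mask).sum() / tumor_mask.sum()

# Spherical "tumor" of radius 6 centered in a 32^3 volume.
zz, yy, xx = np.indices((32, 32, 32))
tumor = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 36
print(coverage(tumor, [(16, 16, 16)], 8))  # → 1.0 (single centered burn covers it)
```

A planner would search over probe positions to maximize this coverage while minimizing the number of burns and overlap with critical structures.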

  2. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because the light from real 3D objects needs different travel times to our left and right eye. As a consequence, we see slightly different images with our eyes, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screen, cinema, etc. are well known, e.g. the two-colour anaglyph technique, shutter glasses, polarization filters, and head-mounted displays. We discuss advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and energy and helicity distribution. In the advent of STEREO we test the method with data from SOHO, which provides us different viewpoints through the solar rotation. This restricts the analysis to structures which remain stationary for several days; real STEREO data will not be affected by these limitations, however.

  3. LSSGalPy: Interactive Visualization of the Large-scale Environment Around Galaxies

    NASA Astrophysics Data System (ADS)

    Argudo-Fernández, M.; Duarte Puertas, S.; Ruiz, J. E.; Sabater, J.; Verley, S.; Bergond, G.

    2017-05-01

    New tools are needed to handle the growth of data in astrophysics delivered by recent and upcoming surveys. We aim to build open-source, light, flexible, and interactive software designed to visualize extensive three-dimensional (3D) tabular data. Entirely written in the Python language, we have developed interactive tools to browse and visualize the positions of galaxies in the universe and their positions with respect to its large-scale structures (LSS). Motivated by a previous study, we created two codes using Mollweide projection and wedge diagram visualizations, where survey galaxies can be overplotted on the LSS of the universe. These are interactive representations where the visualizations can be controlled by widgets. We have released these open-source codes, which have been designed to be easily re-used and customized by the scientific community to fulfill their needs. The codes are adaptable to other kinds of 3D tabular data and are robust enough to handle several million objects.
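
The Mollweide projection underlying one of the two visualizations maps longitude/latitude onto an equal-area ellipse by solving a small transcendental equation for an auxiliary angle. A minimal numpy sketch of the standard projection formulas (not LSSGalPy code):

```python
import numpy as np

def mollweide(lon, lat, tol=1e-9):
    """Project longitude/latitude (radians) to Mollweide map coordinates,
    solving 2*theta + sin(2*theta) = pi*sin(lat) by Newton iteration."""
    theta = lat  # good initial guess
    for _ in range(100):
        f = 2 * theta + np.sin(2 * theta) - np.pi * np.sin(lat)
        if abs(f) < tol:
            break
        theta -= f / (2 + 2 * np.cos(2 * theta))
    x = (2 * np.sqrt(2) / np.pi) * lon * np.cos(theta)
    y = np.sqrt(2) * np.sin(theta)
    return x, y

xc, yc = mollweide(0.0, 0.0)        # map center
xp, yp = mollweide(0.0, np.pi / 2)  # north pole
print(float(xc), float(yc))  # → 0.0 0.0
print(float(yp))             # → 1.4142135623730951 (sqrt(2))
```

Applying this per galaxy (with right ascension/declination as lon/lat) produces the scatter of survey positions that the interactive widgets then filter and redraw.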

  4. Neural correlates of visuospatial consciousness in 3D default space: insights from contralateral neglect syndrome.

    PubMed

    Jerath, Ravinder; Crawford, Molly W

    2014-08-01

    One of the most compelling questions still unanswered in neuroscience is how consciousness arises. In this article, we examine visual processing, the parietal lobe, and contralateral neglect syndrome as a window into consciousness and how the brain functions as the mind and we introduce a mechanism for the processing of visual information and its role in consciousness. We propose that consciousness arises from integration of information from throughout the body and brain by the thalamus and that the thalamus reimages visual and other sensory information from throughout the cortex in a default three-dimensional space in the mind. We further suggest that the thalamus generates a dynamic default three-dimensional space by integrating processed information from corticothalamic feedback loops, creating an infrastructure that may form the basis of our consciousness. Further experimental evidence is needed to examine and support this hypothesis, the role of the thalamus, and to further elucidate the mechanism of consciousness. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    PubMed Central

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837
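
The red-cyan anaglyph technique reviewed in this and the neighboring records can be sketched directly: the left view feeds the red channel and the right view feeds green and blue, so filter glasses deliver a different image to each eye. A toy sketch with flat test images (real use would pass actual left/right renderings of the scene):

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine left/right grayscale views into an RGB red-cyan anaglyph:
    left drives the red channel, right drives green and blue."""
    return np.stack([left, right, right], axis=-1)

left = np.zeros((4, 4), dtype=np.uint8)          # dark left view
right = np.full((4, 4), 255, dtype=np.uint8)     # bright right view
ana = red_cyan_anaglyph(left, right)
print(ana.shape)  # → (4, 4, 3)
```

The horizontal offset between the two views encodes depth; color rivalry is the main drawback of this method compared with shutter or polarized systems.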

  6. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes.

    PubMed

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-10-22

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  7. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    USGS Publications Warehouse

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual cues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and more realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  8. Basic as well as detailed neurosonograms can be performed by offline analysis of three-dimensional fetal brain volumes.

    PubMed

    Bornstein, E; Monteagudo, A; Santos, R; Strock, I; Tsymbal, T; Lenchner, E; Timor-Tritsch, I E

    2010-07-01

    To evaluate the feasibility and processing time of offline analysis of three-dimensional (3D) brain volumes to perform a basic as well as a detailed, targeted fetal neurosonogram. 3D fetal brain volumes were obtained in 103 consecutive healthy fetuses that underwent routine anatomical survey at 20-23 postmenstrual weeks. Transabdominal gray-scale and power Doppler volumes of the fetal brain were acquired by one of three experienced sonographers (an average of seven volumes per fetus). Acquisition was first attempted in the sagittal and coronal planes. When the fetal position did not enable easy and rapid access to these planes, axial acquisition at the level of the biparietal diameter was performed. Offline analysis of each volume was performed by two of the authors in a blinded manner. A systematic technique of 'volume manipulation' was used to identify a list of 25 brain dimensions/structures comprising a complete basic evaluation, intracranial biometry and a detailed targeted fetal neurosonogram. The feasibility and reproducibility of obtaining diagnostic-quality images of the different structures was evaluated, and processing times were recorded, by the two examiners. Diagnostic-quality visualization was feasible for all of the 25 structures, with an excellent visualization rate (85-100%) reported in 18 structures, a good visualization rate (69-97%) reported in five structures and a low visualization rate (38-54%) reported in two structures, by the two examiners. An average of 4.3 and 5.4 volumes were used to complete the examination by the two examiners, with a mean processing time of 7.2 and 8.8 minutes, respectively. The overall agreement rate for diagnostic visualization of the different brain structures between the two examiners was 89.9%, with a kappa coefficient of 0.5 (P < 0.001). In experienced hands, offline analysis of 3D brain volumes is a reproducible modality that can identify all structures necessary to complete both a basic and a detailed second-trimester fetal neurosonogram. Copyright 2010 ISUOG. Published by John Wiley & Sons, Ltd.

  9. A Meta-Analysis of the Educational Effectiveness of Three-Dimensional Visualization Technologies in Teaching Anatomy

    ERIC Educational Resources Information Center

    Yammine, Kaissar; Violato, Claudio

    2015-01-01

    Many medical graduates are deficient in anatomy knowledge and perhaps below the standards for safe medical practice. Three-dimensional visualization technology (3DVT) has been advanced as a promising tool to enhance anatomy knowledge. The purpose of this review is to conduct a meta-analysis of the effectiveness of 3DVT in teaching and learning…

  10. Language-driven anticipatory eye movements in virtual reality.

    PubMed

    Eichert, Nicole; Peeters, David; Hagoort, Peter

    2018-06-01

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

  11. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    PubMed

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.

  12. 2D Echocardiographic Evaluation of Right Ventricular Function Correlates With 3D Volumetric Models in Cardiac Surgery Patients.

    PubMed

    Magunia, Harry; Schmid, Eckhard; Hilberath, Jan N; Häberle, Leo; Grasshoff, Christian; Schlensak, Christian; Rosenberger, Peter; Nowak-Machen, Martina

    2017-04-01

    The early diagnosis and treatment of right ventricular (RV) dysfunction are of critical importance in cardiac surgery patients and impact clinical outcome. Two-dimensional (2D) transesophageal echocardiography (TEE) can be used to evaluate RV function using surrogate parameters, owing to the complex RV geometry. The aim of this study was to evaluate whether the commonly used visual evaluation of RV function and size using 2D TEE correlated with calculated three-dimensional (3D) volumetric models of RV function. A retrospective, single-center study at a university hospital. Seventy complete datasets were studied, each consisting of 2D 4-chamber view loops (2-3 beats) and the corresponding 4-chamber view 3D full-volume loop of the right ventricle. RV function and RV size on the 2D loops then were assessed retrospectively and purely qualitatively by 4 clinician echocardiographers certified in perioperative TEE, each working individually. Corresponding 3D volumetric models calculating RV ejection fraction and RV end-diastolic volumes then were established and compared with the 2D assessments. 2D assessment of RV function correlated with 3D volumetric calculations (Spearman's rho -0.5; p < 0.0001). No correlation could be established between 2D estimates of RV size and actual 3D volumetric end-diastolic volumes (Spearman's rho 0.15; p = 0.25). The 2D assessment of right ventricular function based on visual estimation, as frequently used in clinical practice, appeared to be a reliable method of RV functional evaluation. However, 2D assessment of RV size seemed unreliable and should be used with caution. Copyright © 2017 Elsevier Inc. All rights reserved.
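    The correlation statistic reported above is Spearman's rank correlation (rho). A minimal dependency-free sketch, with average ranks for ties; the example visual grades and ejection fractions below are hypothetical, not study data:

```python
def rank(xs):
    """Ranks starting at 1; tied values receive the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the run of tied values
        avg = (i + j) / 2 + 1           # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical: visual RV grade (1 = normal .. 4 = severely reduced)
# against measured 3D ejection fraction (%); worse grade, lower EF.
grades = [1, 2, 2, 3, 4]
ef = [55, 48, 50, 35, 25]
rho = spearman_rho(grades, ef)  # negative, as in the study
```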

  13. A survey of visually induced symptoms and associated factors in spectators of three dimensional stereoscopic movies

    PubMed Central

    2012-01-01

    Background The increasing popularity of commercial movies showing three-dimensional (3D) computer-generated images has raised concern about image safety and possible side effects on population health. This study aims to (1) quantify the occurrence of visually induced symptoms suffered by spectators during and after viewing a commercial 3D movie and (2) assess individual and environmental factors associated with those symptoms. Methods A cross-sectional survey was carried out using a paper-based, self-administered questionnaire. The questionnaire includes individual and movie characteristics and selected visually induced symptoms (tired eyes, double vision, headache, dizziness, nausea and palpitations). Symptoms were queried at 3 different times: during, right after and 2 hours after the movie. Results We collected 953 questionnaires. In our sample, 539 (60.4%) individuals reported 1 or more symptoms during the movie, 392 (43.2%) right after and 139 (15.3%) at 2 hours after the movie. The most frequently reported symptoms were tired eyes (during the movie by 34.8%, right after by 24.0%, after 2 hours by 5.7% of individuals) and headache (during the movie by 13.7%, right after by 16.8%, after 2 hours by 8.3% of individuals). An individual history of frequent headache was associated with tired eyes (OR = 1.34, 95%CI = 1.01-1.79), double vision (OR = 1.96; 95%CI = 1.13-3.41) and headache (OR = 2.09; 95%CI = 1.41-3.10) during the movie, and with headache after the movie (OR = 1.64; 95%CI = 1.16-2.32). Individual susceptibility to car sickness, dizziness, anxiety level, movie show time and viewing a 3D animated movie were also associated with several other symptoms. Conclusions The high occurrence of visually induced symptoms resulting from this survey suggests the need to raise public awareness of the possible discomfort that susceptible individuals may suffer during and after viewing 3D movies. PMID:22974235
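    The association measures quoted above are odds ratios with 95% confidence intervals. As a sketch of how such a figure is derived, an OR and its Wald interval can be computed from a 2x2 exposure-by-symptom table; the counts below are hypothetical, not taken from the survey:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a, b = symptom yes/no among exposed;
    c, d = symptom yes/no among unexposed."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical: 40/100 spectators with a history of frequent headache
# reported tired eyes, versus 30/120 without such a history.
or_, lo, hi = odds_ratio_ci(40, 60, 30, 90)  # OR = 2.0
```

An interval that excludes 1.0, as in the abstract's significant results, indicates an association beyond chance at the 5% level.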

  14. Echocardiographic anatomy of the mitral valve: a critical appraisal of 2-dimensional imaging protocols with a 3-dimensional perspective.

    PubMed

    Mahmood, Feroze; Hess, Philip E; Matyal, Robina; Mackensen, G Burkhard; Wang, Angela; Qazi, Aisha; Panzica, Peter J; Lerner, Adam B; Maslow, Andrew

    2012-10-01

    To highlight the limitations of traditional 2-dimensional (2D) echocardiographic mitral valve (MV) examination methodologies, which do not account for patient-specific transesophageal echocardiographic (TEE) probe adjustments made during an actual clinical perioperative TEE examination. Institutional quality-improvement project. Tertiary care hospital. Attending anesthesiologists certified by the National Board of Echocardiography. Using the technique of multiplanar reformatting with 3-dimensional (3D) data, ambiguous 2D images of the MV were generated, which resembled standard midesophageal 2D views. Based on the 3D image, the MV scallops visualized in each 2D image were recognized exactly by the position of the scan plane. Twenty-three such 2D MV images were created in a presentation from the 3D datasets. Anesthesia staff members (n = 13) were invited to view the presentation based on the 2D images only and asked to identify the MV scallops. Their responses were scored as correct or incorrect based on the 3D image. The overall accuracy was 30.4% in identifying the MV scallops. The transcommissural view was identified correctly >90% of the time. The accuracy of the identification of A1, A3, P1, and P3 scallops was <50%. The accuracy of the identification of A2P2 scallops was ≥50%. In the absence of information on TEE probe adjustments performed to acquire a specific MV image, it is possible to misidentify the scallops. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Proceedings of the Conference of the International Group for the Psychology of Mathematics Education (PME 20) (20th, Valencia, Spain, July 8-12, 1996). Volume 1.

    ERIC Educational Resources Information Center

    Puig, Luis, Ed.; Gutierrez, Angel, Ed.

    The first volume of this proceedings contains three plenary addresses: (1) "Visualization in 3-dimensional geometry: In search of a framework" (A. Gutierrez); (2) "The ongoing value of proof" (G. Hanna); and (3) "Modern times: The symbolic surfaces of language, mathematics and art" (D. Pimm). Plenary panels include: (1) "Contribution to the panel…

  16. 3D Printing of Biomolecular Models for Research and Pedagogy

    PubMed Central

    Da Veiga Beltrame, Eduardo; Tyrwhitt-Drake, James; Roy, Ian; Shalaby, Raed; Suckale, Jakob; Pomeranz Krummel, Daniel

    2017-01-01

    The construction of physical three-dimensional (3D) models of biomolecules can uniquely contribute to the study of the structure-function relationship. 3D structures are most often perceived using the two-dimensional and exclusively visual medium of the computer screen. Converting digital 3D molecular data into real objects enables information to be perceived through an expanded range of human senses, including direct stereoscopic vision, touch, and interaction. Such tangible models facilitate new insights, enable hypothesis testing, and serve as psychological or sensory anchors for conceptual information about the functions of biomolecules. Recent advances in consumer 3D printing technology enable, for the first time, the cost-effective fabrication of high-quality and scientifically accurate models of biomolecules in a variety of molecular representations. However, the optimization of the virtual model and its printing parameters is difficult and time consuming without detailed guidance. Here, we provide a guide on the digital design and physical fabrication of biomolecule models for research and pedagogy using open source or low-cost software and low-cost 3D printers that use fused filament fabrication technology. PMID:28362403

  17. 4D Biofabrication of Branching Multicellular Structures: A Morphogenesis Simulation Based on Turing’s Reaction-Diffusion Dynamics

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaolu; Yang, Hao

    2017-12-01

    The recently emerged four-dimensional (4D) biofabrication technique aims to create dynamic three-dimensional (3D) biological structures that can transform their shapes or functionalities with time when an external stimulus is imposed or when postprinting cell self-assembly occurs. The evolution of the 3D pattern of branching geometry via self-assembly of cells is critical for the 4D biofabrication of artificial organs or tissues with branched geometry. However, it is still unclear how the formation and evolution of these branching patterns are biologically encoded. We study the 4D fabrication of lung branching structures utilizing a simulation model based on the reaction-diffusion mechanism, established using partial differential equations of four variables that describe the reaction and diffusion of morphogens over time during the development of lung branching. The simulation results present the forming process of the 3D branching pattern, and also interpret the behaviors of side branching and tip splitting as the stalk grows, through 3D visualization of the numerical simulation.
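    The reaction-diffusion (Turing) mechanism such simulations are built on can be illustrated with a much smaller system than the authors' four-variable lung model: the classic two-morphogen Gray-Scott equations, stepped explicitly on a periodic 2D grid. All parameters here are standard demonstration values, not the paper's:

```python
import numpy as np

def laplacian(Z):
    """5-point stencil Laplacian with periodic boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott reaction-diffusion system."""
    UVV = U * V * V                      # reaction term U + 2V -> 3V
    U = U + dt * (Du * laplacian(U) - UVV + F * (1 - U))
    V = V + dt * (Dv * laplacian(V) + UVV - (F + k) * V)
    return U, V

# Uniform field with a small seeded perturbation; patterns (spots, stripes,
# branch-like fingers) emerge from the instability of the uniform state.
U = np.ones((64, 64))
V = np.zeros((64, 64))
V[28:36, 28:36] = 0.5
for _ in range(100):
    U, V = gray_scott_step(U, V)
```

The same principle, diffusion-driven instability of interacting morphogens, is what produces side branching and tip splitting in the four-variable lung model.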

  18. An algorithm based on OmniView technology to reconstruct sagittal and coronal planes of the fetal brain from volume datasets acquired by three-dimensional ultrasound.

    PubMed

    Rizzo, G; Capponi, A; Pietrolucci, M E; Capece, A; Aiello, E; Mammarella, S; Arduini, D

    2011-08-01

    To describe a novel algorithm, based on the new display technology 'OmniView', developed to visualize diagnostic sagittal and coronal planes of the fetal brain from volumes obtained by three-dimensional (3D) ultrasonography. We developed an algorithm to image standard neurosonographic planes by drawing dissecting lines through the axial transventricular view of 3D volume datasets acquired transabdominally. The algorithm was tested on 106 normal fetuses at 18-24 weeks of gestation and the visualization rates of brain diagnostic planes were evaluated by two independent reviewers. The algorithm was also applied to nine cases with proven brain defects. The two reviewers, using the algorithm on normal fetuses, found satisfactory images with visualization rates ranging between 71.7% and 96.2% for sagittal planes and between 76.4% and 90.6% for coronal planes. The agreement rate between the two reviewers, as expressed by Cohen's kappa coefficient, was > 0.93 for sagittal planes and > 0.89 for coronal planes. All nine abnormal volumes were identified by a single observer from among a series including normal brains, and eight of these nine cases were diagnosed correctly. This novel algorithm can be used to visualize standard sagittal and coronal planes in the fetal brain. This approach may simplify the examination of the fetal brain and reduce dependency of success on operator skill. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.
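    The inter-reviewer agreement above is expressed as Cohen's kappa, which discounts the agreement expected by chance. A minimal sketch for two raters making binary visualized/not-visualized calls; the example ratings are hypothetical:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n     # observed agreement
    labels = set(r1) | set(r2)
    # chance agreement from each rater's marginal label frequencies
    p_exp = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# 1 = plane visualized, 0 = not visualized, for ten hypothetical volumes.
rater_a = [1, 1, 0, 0, 1, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
kappa = cohens_kappa(rater_a, rater_b)  # ~0.52
```

Kappa of 1 means perfect agreement; the > 0.89 values in the abstract indicate near-perfect agreement between the two reviewers.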

  19. 3D topology of orientation columns in visual cortex revealed by functional optical coherence tomography.

    PubMed

    Nakamichi, Yu; Kalatsky, Valery A; Watanabe, Hideyuki; Sato, Takayuki; Rajagopalan, Uma Maheswari; Tanifuji, Manabu

    2018-04-01

    Orientation tuning is a canonical neuronal response property of the six-layered visual cortex that is encoded in pinwheel structures with center orientation singularities. Optical imaging of intrinsic signals enables us to map these two-dimensional (2D) surface structures, whereas the lack of appropriate techniques has prevented visualization of the depth structure of orientation coding. In the present study, we performed functional optical coherence tomography (fOCT), a technique capable of acquiring a 3D map of the intrinsic signals, to study the topology of orientation coding inside the cat visual cortex. With this technique, for the first time, we visualized columnar assemblies in orientation coding that had been predicted from electrophysiological recordings. In addition, we found that the columnar structures were largely distorted around pinwheel centers: center singularities were not rigid straight lines running perpendicular to the cortical surface but formed twisted, string-like structures inside the cortex that turned and extended horizontally through the cortex. Looping singularities were observed with their respective termini accessing the same cortical surface via clockwise and counterclockwise orientation pinwheels. These results suggest that the 3D topology of orientation coding cannot be fully anticipated from 2D surface measurements. Moreover, the findings demonstrate the utility of fOCT as an in vivo mesoscale imaging method for mapping functional response properties of the cortex along the depth axis. NEW & NOTEWORTHY We used functional optical coherence tomography (fOCT) to visualize the three-dimensional structure of orientation columns with millimeter range and micrometer spatial resolution. We validated vertically elongated columnar structure in iso-orientation domains. The columnar structure was distorted around pinwheel centers. An orientation singularity formed a string with a tortuous trajectory inside the cortex and connected clockwise and counterclockwise pinwheel centers in the surface orientation map. The results were confirmed by comparisons with conventional optical imaging and electrophysiological recordings.

  20. Stereoscopy and the Human Visual System

    PubMed Central

    Banks, Martin S.; Read, Jenny C. A.; Allison, Robert S.; Watt, Simon J.

    2012-01-01

    Stereoscopic displays have become important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, and computer-assisted design. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. In these applications for stereo, three-dimensional (3D) imagery should create a faithful impression of the 3D structure of the scene being portrayed. In addition, the viewer should be comfortable and not leave the experience with eye fatigue or a headache. Finally, the presentation of the stereo images should not create temporal artifacts like flicker or motion judder. This paper reviews current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: (1) getting the geometry right, (2) depth cue interactions in stereo 3D media, (3) focusing and fixating on stereo images, and (4) how temporal presentation protocols affect flicker, motion artifacts, and depth distortion. PMID:23144596

  1. Web-based Three-dimensional Virtual Body Structures: W3D-VBS

    PubMed Central

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  2. Web-based three-dimensional Virtual Body Structures: W3D-VBS.

    PubMed

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user's progress through evaluation tools helps customize lesson plans. A self-guided "virtual tour" of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it.

  3. Automated three-dimensional quantification of myocardial perfusion and brain SPECT.

    PubMed

    Slomka, P J; Radau, P; Hurwitz, G A; Dey, D

    2001-01-01

    To allow automated and objective reading of nuclear medicine tomography, we have developed a set of tools for clinical analysis of myocardial perfusion tomography (PERFIT) and Brain SPECT/PET (BRASS). We exploit algorithms for image registration and use three-dimensional (3D) "normal models" for individual patient comparisons to composite datasets on a "voxel-by-voxel basis" in order to automatically determine the statistically significant abnormalities. A multistage, 3D iterative inter-subject registration of patient images to normal templates is applied, including automated masking of the external activity before final fit. In separate projects, the software has been applied to the analysis of myocardial perfusion SPECT, as well as brain SPECT and PET data. Automatic reading was consistent with visual analysis; it can be applied to the whole spectrum of clinical images, and aid physicians in the daily interpretation of tomographic nuclear medicine images.
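    The voxel-by-voxel comparison described above can be sketched simply: after registration, a patient volume is compared against a composite normal model (per-voxel mean and standard deviation over a normal database), and voxels beyond a z-score cutoff are flagged as statistically abnormal. Shapes, the cutoff, and the simulated defect below are illustrative; the inter-subject registration step is assumed to have been done already:

```python
import numpy as np

def abnormality_map(patient, normal_mean, normal_sd, z_cutoff=3.0):
    """Per-voxel z-scores against a normal template, plus a binary flag map."""
    z = (patient - normal_mean) / np.maximum(normal_sd, 1e-6)  # avoid /0
    return z, np.abs(z) > z_cutoff

# Build a toy "normal model" from 20 simulated registered normal volumes.
rng = np.random.default_rng(1)
normals = rng.normal(100.0, 5.0, size=(20, 8, 8, 8))
mean, sd = normals.mean(axis=0), normals.std(axis=0)

# A patient volume identical to the normal mean except one hypoperfused voxel.
patient = mean.copy()
patient[4, 4, 4] -= 40.0
z, flags = abnormality_map(patient, mean, sd)  # flags only the defect voxel
```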

  4. Severe mitral regurgitation due to mitral leaflet aneurysm diagnosed by three-dimensional transesophageal echocardiography: a case report.

    PubMed

    Konishi, Takao; Funayama, Naohiro; Yamamoto, Tadashi; Hotta, Daisuke; Kikuchi, Kenjiro; Ohori, Katsumi; Nishihara, Hiroshi; Tanaka, Shinya

    2016-11-22

    A small mitral valve aneurysm (MVA) presenting as severe mitral regurgitation (MR) is uncommon. A 47-year-old man with a history of hypertension complained of exertional chest discomfort. A transthoracic echocardiogram (TTE) revealed the presence of MR and prolapse of the posterior leaflet. A 6-mm-diameter MVA, not clearly visualized by TTE, was detected on the posterior leaflet on three-dimensional (3D) transesophageal echocardiography (TEE). The patient underwent uncomplicated triangular resection of P2 and mitral valve annuloplasty, and was discharged after postoperative rehabilitation 2 weeks after the operation. Histopathology of the excised leaflet showed myxomatous changes without infective vegetation or signs of rheumatic heart disease. A small, isolated MVA is a cause of severe MR, which might be overlooked and, therefore, managed belatedly. 3D TEE was helpful in imaging its morphologic details.

  5. Reproducibility of three dimensional digital preoperative planning for the osteosynthesis of distal radius fractures.

    PubMed

    Yoshii, Yuichi; Kusakabe, Takuya; Akita, Kenichi; Tung, Wen Lin; Ishii, Tomoo

    2017-12-01

    A three-dimensional (3D) digital preoperative planning system for the osteosynthesis of distal radius fractures was developed for clinical practice. To assess the usefulness of the 3D planning for osteosynthesis, we evaluated the reproducibility of the reduction shapes and selected implants in the patients with distal radius fractures. Twenty wrists of 20 distal radius fracture patients who underwent osteosynthesis using volar locking plates were evaluated. The 3D preoperative planning was performed prior to each surgery. Four surgeons conducted the surgeries. The surgeons performed the reduction and the placement of the plate while comparing images between the preoperative plan and fluoroscopy. Preoperative planning and postoperative reductions were compared by measuring volar tilt and radial inclination of the 3D images. Intra-class correlation coefficients (ICCs) of the volar tilt and radial inclination were evaluated. For the implant choices, the ICCs for the screw lengths between the preoperative plan and the actual choices were evaluated. The ICCs were 0.644 (p < 0.01) and 0.625 (p < 0.01) for the volar tilt and radial inclination in the 3D measurements, respectively. The planned size of plate was used in all of the patients. The ICC for the screw length between preoperative planning and actual choice was 0.860 (p < 0.01). Good reproducibility for the reduction shape and excellent reproducibility for the implant choices were achieved using 3D preoperative planning for distal radius fracture. Three-dimensional digital planning was useful to visualize the reduction process and choose a proper implant for distal radius fractures. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:2646-2651, 2017.
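    The agreement statistic used above is the intraclass correlation coefficient (ICC). A minimal sketch of a one-way random-effects ICC(1,1) computed from its ANOVA mean squares; the planned-versus-achieved measurement pairs below are hypothetical:

```python
def icc_oneway(data):
    """One-way random-effects ICC(1,1).
    data: n subjects, each a sequence of k measurements."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical planned vs. achieved volar tilt (degrees) for five wrists.
pairs = [(8.0, 9.0), (12.0, 11.0), (5.0, 6.5), (10.0, 10.5), (7.0, 7.5)]
icc = icc_oneway(pairs)  # high: achieved values track the plan closely
```

Values near 1 mean the postoperative measurements closely reproduce the plan, which is how the 0.860 screw-length ICC in the abstract is read.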

  6. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. This allows the application of MRI mammography to breasts with dense tissue, postoperative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences among MR sequences, there is a need for reliable computer-aided diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adaptive, computed over a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented in the IDL environment, providing rendering, slicing and animation of the breast and glandular tissue.
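    The spatially adaptive thresholding step described above can be sketched as follows: each pixel is compared with the mean of a sliding window centered on it, rather than with one global threshold. This version builds the window means from an integral image so it needs only NumPy; the window size and offset are illustrative, and this is a generic sketch, not the authors' exact algorithm:

```python
import numpy as np

def adaptive_threshold(img, win=15, offset=0.0):
    """Binary mask: pixel > local windowed mean + offset."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # Integral image: ii[r, c] = sum of padded[:r, :c]
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    # Window sums for every output pixel via four integral-image lookups
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    local_mean = s / (win * win)
    return img > local_mean + offset

# Toy image: a bright square (e.g. glandular tissue) on a dark background.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
mask = adaptive_threshold(img, win=15)
```

Because the threshold follows local intensity, this approach tolerates the slow intensity variations across an MR slice that defeat a single global threshold.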

  7. Design of a Matrix Transducer for Three-Dimensional Second Harmonic Transesophageal Echocardiography

    NASA Astrophysics Data System (ADS)

    Blaak, Sandra; van Neer, Paul L. M. J.; Prins, Christian; Bosch, Johan G.; Lancée, Charles T.; van der Steen, Antonius F. W.; de Jong, Nico

    Three-dimensional (3D) echocardiography visualizes the 3D anatomy and function of the heart. For 3D imaging, an ultrasound matrix of several thousand elements is required. To connect the matrix to an external imaging system, smart signal processing with integrated circuitry in the tip of the TEE probe is required for channel reduction. To separate the low-voltage integrated receive circuitry from the high voltages required for transmission, our design features separate transmit and receive subarrays. In this study we focus on the transmit subarray. A 3D model of an individual element was developed using the finite element method (FEM). The model was validated by laser interferometer and acoustic measurements, and measurements and simulations matched well. The maximum transmit transfer was 3 nm/V at 2.4 MHz for both the FEM simulation of an element in air and the laser interferometer measurement. The FEM simulation of an element in water resulted in a maximum transfer of 43 kPa/V at 2.3 MHz, and the acoustic measurement in 55 kPa/V at 2.5 MHz. The maximum pressure is ~1 MPa at 120 Vpp, which is sufficient for second harmonic imaging. The proposed design of the transmit subarray is suitable for its role in a 3D second harmonic (2H) TEE probe.

  8. Three- and Two- Dimensional Simulations of Re-shock Experiments at High Energy Densities at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Raman, Kumar; MacLaren, Stephan; Huntington, Channing; Nagel, Sabrina

    2016-10-01

    We present simulations of recent high-energy-density (HED) re-shock experiments on the National Ignition Facility (NIF). The experiments study the Rayleigh-Taylor (RT) and Richtmyer-Meshkov (RM) instability growth that occurs after successive shocks transit a sinusoidally perturbed interface between materials of different densities. The shock tube is driven at one or both ends using indirect-drive laser cavities, or hohlraums. X-ray area-backlit imaging is used to visualize the growth at different times. Our simulations are done with the three-dimensional radiation hydrodynamics code ARES, developed at LLNL. We show that the instability growth rate inferred from the experimental radiographs agrees well with our 2D and 3D simulations. We also discuss some 3D geometrical effects, suggested by our simulations, which could deteriorate the images at late times unless properly accounted for in the experiment design. Work supported by the U.S. Department of Energy under Contract DE-AC52-06NA27279. LLNL-ABS-680789.

  9. Application of Mathematical and Three-Dimensional Computer Modeling Tools in the Planning of Processes of Fuel and Energy Complexes

    NASA Astrophysics Data System (ADS)

    Aksenova, Olesya; Nikolaeva, Evgenia; Cehlár, Michal

    2017-11-01

    This work investigates the effectiveness of mathematical and three-dimensional computer modeling tools for planning the processes of fuel and energy complexes at the planning and design phase of a thermal power plant (TPP). A solution for the purification of gas emissions at the design phase of waste treatment systems is proposed that employs mathematical and three-dimensional computer modeling: the E-nets apparatus together with a 3D model of the future gas emission purification system. This approach makes it possible to visualize the designed result, to select and scientifically justify an economically feasible technology, and to ensure a high environmental and social effect of the developed waste treatment system. The authors present the results of modeling the planned technological processes and the gas emission purification system in terms of E-nets, using mathematical modeling in the Simulink application, which allowed a model of the device to be assembled from the library of standard blocks and calculations to be performed. A three-dimensional model of the gas emission purification system has been constructed; it allows technological processes to be visualized, compared with theoretical calculations at the design phase of a TPP, and adjusted if necessary.

  10. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e., they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which is reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that might otherwise not be apparent. A 3D data acquisition workflow is therefore described that generates a set of three image modalities representing the same anatomies. First, an in-vitro MRI measurement is performed, which yields a three-dimensional image representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner before the MS measurements are performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine information from the spatial (MRI) and mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object, performed with image registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology-driven, i.e., a high-resolution digital scan of the histologically stained slices. After fusion of the reconstructed scan images and the MRI, the slice-related coordinates of the mass spectra can be propagated into 3D space. After image registration of the scan images and the histologically stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. The described pipeline thus yields a set of three-dimensional images representing the same anatomies: the reconstructed slice scans, the spectral images with their corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI, providing anatomical details, improves the interpretation of 3D MALDI images. The ability to relate mass spectrometry-derived molecular information with in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013. Published by Elsevier B.V.
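
One building block of such a pipeline, serial slice-to-slice registration, can be sketched as a translation-only phase-correlation step. This is an illustrative simplification (real MALDI/MRI registration is typically rigid or deformable and intensity-based), and `phase_correlation_shift` is a hypothetical helper, not part of the authors' pipeline:

```python
import numpy as np

def phase_correlation_shift(fixed, moving):
    """Estimate the integer (dy, dx) translation that, applied to `moving`
    with np.roll, aligns it with `fixed`. The normalized cross-power
    spectrum of the two images has an inverse FFT peaking at the shift."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross = F * np.conj(M)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the upper half of the array wrap around to negative shifts.
    h, w = fixed.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Chaining such pairwise estimates over the N consecutive slice scans gives a coarse initial stack alignment before any finer registration.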

  11. Three-dimensional magnetic resonance imaging based on time-of-flight magnetic resonance angiography for superficial cerebral arteriovenous malformation--technical note.

    PubMed

    Murata, Takahiro; Horiuchi, Tetsuyoshi; Rahmah, Nunung Nur; Sakai, Keiichi; Hongo, Kazuhiro

    2011-01-01

    Direct surgery remains important for the treatment of superficial cerebral arteriovenous malformation (AVM). Surgical planning based on careful analysis of various neuroimaging modalities can aid the resection of superficial AVM with a favorable outcome. Three-dimensional (3D) magnetic resonance (MR) imaging reconstructed from time-of-flight (TOF) MR angiography was developed as an adjunctive tool for the surgical planning of superficial AVM. 3-T TOF MR imaging without contrast medium was performed preoperatively in patients with superficial AVM. The images were imported into OsiriX imaging software, and a 3D reconstructed MR image was produced using the volume rendering method. This 3D MR image could clearly visualize the surface angioarchitecture of the AVM with the surrounding brain on a single image, and clarified the feeding arteries and draining veins and their relationship with the sulci or fissures surrounding the nidus. A 3D MR image of the whole AVM angioarchitecture was also displayed by skeletonization of the surrounding brain. The preoperative 3D MR image corresponded to the intraoperative view. Feeders on the brain surface were easily confirmed and obliterated during surgery with the aid of the 3D MR images. 3D MR imaging for the surgical planning of superficial AVM is simple and noninvasive, enhances intraoperative orientation, and is helpful for successful resection.

  12. Laser Scanning Technology as Part of a Comprehensive Condition Assessment for Covered Bridges

    Treesearch

    Brian K. Brashaw; Samuel Anderson; Robert J. Ross

    2015-01-01

    New noncontact technologies have been developed and implemented for determining as-built condition and current dimensions for a wide variety of objects and buildings. In this study, a three-dimensional laser scanner was used to determine the dimensions and visual condition of a historic bridge in the Amnicon Falls State Park in northern Wisconsin. 3D scanning provides...

  13. Three-Dimensional Effects of Artificial Mixing in a Shallow Drinking-Water Reservoir

    NASA Astrophysics Data System (ADS)

    Chen, Shengyang; Little, John C.; Carey, Cayelan C.; McClure, Ryan P.; Lofton, Mary E.; Lei, Chengwang

    2018-01-01

    Studies that examine the effects of artificial mixing for water-quality mitigation in lakes and reservoirs often view a water column with a one-dimensional (1-D) perspective (e.g., homogenized epilimnetic and hypolimnetic layers). Artificial mixing in natural water bodies, however, is inherently three dimensional (3-D). Using a 3-D approach experimentally and numerically, the present study visualizes thermal structure and analyzes constituent transport under the influence of artificial mixing in a shallow drinking-water reservoir. The purpose is to improve the understanding of artificial mixing, which may help to better design and operate mixing systems. In this reservoir, a side-stream supersaturation (SSS) hypolimnetic oxygenation system and an epilimnetic bubble-plume mixing (EM) system were concurrently deployed in the deep region. The present study found that, while the mixing induced by the SSS system does not have a distinct 3-D effect on the thermal structure, epilimnetic mixing by the EM system causes 3-D heterogeneity. In the experiments, epilimnetic mixing deepened the lower metalimnetic boundary near the diffuser by about 1 m, with 55% reduction of the deepening rate at 120 m upstream of the diffuser. In a tracer study using a 3-D hydrodynamic model, the operational flow rate of the EM system is found to be an important short-term driver of constituent transport in the reservoir, whereas the duration of the EM system operation is the dominant long-term driver. The results suggest that artificial mixing substantially alters both 3-D thermal structure and constituent transport, and thus needs to be taken into account for reservoir management.

  14. Three-dimensional scanner based on fringe projection

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik

    1995-07-01

    This article presents a way of scanning 3D objects using noninvasive, contactless techniques. The principle is to project parallel fringes onto an object and then to record the object from two viewing angles. With appropriate processing, one can reconstruct the 3D object even when it has no symmetry planes. The 3D surface data are immediately available in digital form for computer visualization and for analysis software tools. The optical setup for recording the object, the data extraction and processing, and the reconstruction of the object are reported and discussed. Applications are proposed for reconstructive/cosmetic surgery, CAD, animation, and research.

  15. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction and present a practical solution. We attempt to remove the noise in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in the spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, a 3D visualization of the brain is produced through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.
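
As a simplified stand-in for the wavelet shrinkage described above (the authors' adaptive curve shrinkage in spherical coordinates is more elaborate), a single-level 2D Haar transform with soft thresholding of the detail subbands conveys the idea: noise lands mostly in the detail coefficients, so shrinking them while keeping the approximation preserves coarse anatomy:

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar transform: (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0     # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0     # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def soft(c, t):
    """Soft-threshold shrinkage of coefficients c at threshold t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(img, threshold):
    """Shrink only the detail subbands; the LL approximation carries the
    coarse anatomy and is left untouched."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, soft(lh, threshold), soft(hl, threshold), soft(hh, threshold))
```

With threshold 0 the round trip is lossless; increasing the threshold progressively suppresses fine-scale variation.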

  16. Multiplanar visualization in 3D transthoracic echocardiography for precise delineation of mitral valve pathology.

    PubMed

    Kuppahally, Suman S; Paloma, Allan; Craig Miller, D; Schnittger, Ingela; Liang, David

    2008-01-01

    A novel multiplanar reformatting (MPR) technique in three-dimensional transthoracic echocardiography (3D TTE) was used to precisely localize the prolapsed lateral segment of the posterior mitral valve leaflet in a symptomatic patient with mitral valve prolapse (MVP) and moderate mitral regurgitation (MR) before mitral valve repair surgery. On the basis of the findings of this new 3D TTE technique, transesophageal echocardiography was avoided. The technique was noninvasive, quick, reproducible, and reliable, and did not require time-consuming reconstruction of multiple cardiac images. Mitral valve repair surgery was subsequently performed and corroborated the findings of the MPR examination.
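
The core of any MPR tool is resampling a 3D volume along an arbitrary plane. A minimal sketch, assuming nearest-neighbor sampling in voxel coordinates (clinical systems interpolate and work in calibrated patient coordinates); `extract_plane` and its parameters are illustrative, not the vendor's API:

```python
import numpy as np

def extract_plane(volume, origin, u, v, shape):
    """Sample a 2D multiplanar-reformatted slice from a 3D volume.
    `origin` is the plane's corner in voxel coordinates; `u` and `v` are
    in-plane step vectors (one output pixel each)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    rows, cols = shape
    r = np.arange(rows)[:, None, None]
    c = np.arange(cols)[None, :, None]
    pts = np.asarray(origin, float) + r * u + c * v      # (rows, cols, 3)
    idx = np.rint(pts).astype(int)
    # Clamp so out-of-bounds samples repeat the border voxel.
    for ax in range(3):
        idx[..., ax] = np.clip(idx[..., ax], 0, volume.shape[ax] - 1)
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]
```

Tilting `u` and `v` off the axes yields the oblique cut planes that let MPR follow an anatomic structure such as a valve leaflet.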

  17. Recent development on computer aided tissue engineering--a review.

    PubMed

    Sun, Wei; Lal, Pallavi

    2002-02-01

    The utilization of computer-aided technologies in tissue engineering has evolved into a new field of computer-aided tissue engineering (CATE). This article reviews recent development and application of enabling computer technology, imaging technology, computer-aided design and computer-aided manufacturing (CAD and CAM), and rapid prototyping (RP) technology in tissue engineering, particularly in computer-aided tissue anatomical modeling, three-dimensional (3-D) anatomy visualization and 3-D reconstruction, CAD-based anatomical modeling, computer-aided tissue classification, computer-aided tissue implantation, and prototype-modeling-assisted surgical planning and reconstruction.

  18. Investigating the capabilities of semantic enrichment of 3D CityEngine data

    NASA Astrophysics Data System (ADS)

    Solou, Dimitra; Dimopoulou, Efi

    2016-08-01

    In recent years, the development of technology and the lifting of several technical limitations have brought the third dimension to the fore. The complexity of urban environments and the strong need for land administration intensify the need for a three-dimensional cadastral system. Despite the progress in geographic information systems and 3D modeling techniques, there is no fully digital 3D cadastre. Existing geographic information systems and the different methods of three-dimensional modeling allow for better management, visualization, and dissemination of information. Nevertheless, these opportunities cannot be fully exploited because of deficiencies in standardization and interoperability between these systems. Within this context, CityGML was developed as an international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. CityGML defines geometry and topology for city modeling, also focusing on semantic aspects of 3D city information. The scope of CityGML is to reach a common terminology and to address the imperative need for interoperability and data integration, given the number of available geographic information systems and modeling techniques. The aim of this paper is to develop an application for managing the semantic information of a model generated by procedural modeling. The model was initially implemented in ESRI's CityEngine software and then imported into the ArcGIS environment. The final goal was the semantic enrichment of the original model and its conversion to CityGML format. Semantic information management and interoperability proved feasible using the 3DCities Project ESRI tools, since their database structure supports adding semantic information to the CityEngine model and its automatic conversion to CityGML for advanced analysis and visualization in different application areas.

  19. Neural dynamics of 3-D surface perception: figure-ground separation and lightness perception.

    PubMed

    Kelly, F; Grossberg, S

    2000-11-01

    This article develops the FACADE theory of three-dimensional (3-D) vision to simulate data concerning how two-dimensional pictures give rise to 3-D percepts of occluded and occluding surfaces. The theory suggests how geometrical and contrastive properties of an image can either cooperate or compete when forming the boundary and surface representations that subserve conscious visual percepts. Spatially long-range cooperation and short-range competition work together to separate boundaries of occluding figures from their occluded neighbors, thereby providing sensitivity to T-junctions without the need to assume that T-junction "detectors" exist. Both boundary and surface representations of occluded objects may be amodally completed, whereas the surface representations of unoccluded objects become visible through modal processes. Computer simulations include Bregman-Kanizsa figure-ground separation, Kanizsa stratification, and various lightness percepts, including the Münker-White, Benary cross, and checkerboard percepts.

  20. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology in which information can be created, edited, managed, and analyzed. Like any other model, a map is a simplified representation of the real world; hence visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has risen considerably with the advancement of technology, and many off-the-shelf technologies are currently available for building 3D GIS models. One objective of this research was to examine ArcGIS and its extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web as a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications, and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is presented.

  1. Figure and Ground in the Visual Cortex: V2 Combines Stereoscopic Cues with Gestalt Rules

    PubMed Central

    Qiu, Fangtu T.; von der Heydt, Rüdiger

    2006-01-01

    Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring three-dimensional (3D) layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border-ownership coding). Here we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (gestalt factors). These are combined in single neurons so that the ‘near’ side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays gestalt factors influence the responses and can enhance or null the stereoscopic depth information. PMID:15996555

  2. A 3D visualization and simulation of the individual human jaw.

    PubMed

    Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo

    2003-01-01

    A new biomechanical three-dimensional (3D) model of the human mandible, based on a computer-generated virtual model, is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation offers movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain its biomechanics, in which the muscular forces acting through the occlusal and condylar surfaces are in a state of 3D equilibrium. In the model, all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw with the force level necessary for chewing, maintaining a balance of the mandible that prevents dislocation and loading of nonarticular tissues. The work uses a new, low-cost, and easy-to-operate approach to the computer-generated animation of virtual 3D characters (called "Body SABA").

  3. Getting a grip on reality: Grasping movements directed to real objects and images rely on dissociable neural representations.

    PubMed

    Freud, Erez; Macdonald, Scott N; Chen, Juan; Quinlan, Derek J; Goodale, Melvyn A; Culham, Jody C

    2018-01-01

    In the current era of touchscreen technology, humans commonly execute visually guided actions directed to two-dimensional (2D) images of objects. Although real, three-dimensional (3D) objects and images of the same objects share a high degree of visual similarity, they differ fundamentally in the actions that can be performed on them. Indeed, previous behavioral studies have suggested that simulated grasping of images relies on different representations than actual grasping of real 3D objects, yet the neural underpinnings of this phenomenon have not been investigated. Here we used functional magnetic resonance imaging (fMRI) to investigate how brain activation patterns differed for grasping and reaching actions directed toward real 3D objects compared to images. Multivoxel Pattern Analysis (MVPA) revealed that the left anterior intraparietal sulcus (aIPS), a key region for visually guided grasping, discriminates between both the format in which objects were presented (real/image) and the motor task performed on them (grasping/reaching). Interestingly, during action planning, the representations of real 3D objects versus images differed more for grasping movements than reaching movements, likely because grasping real 3D objects involves fine-grained planning and anticipation of the consequences of a real interaction. Importantly, this dissociation was evident in the planning phase, before movement initiation, and was not found in any other regions, including motor and somatosensory cortices. This suggests that the dissociable representations in the left aIPS were not based on haptic, motor, or proprioceptive feedback. Together, these findings provide novel evidence that actions, particularly grasping, are affected by the realness of the target objects during planning, perhaps because real targets require a more elaborate forward model based on visual cues to predict the consequences of real manipulation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun

    2014-11-01

    Ground-based LiDAR is one of the most effective city modeling tools at present and has been widely used for the three-dimensional reconstruction of outdoor objects. For indoor objects, however, there are technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained with an advanced indoor mobile LiDAR measurement system, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain color features, extracted by fusion with CCD images; they thus carry both geometric and spectral information, which can be used to construct object surfaces and to restore the color and texture of the geometric model. Based on the Autodesk CAD platform with the help of the PointSense plug-in, three-dimensional reconstruction of all indoor elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and different types of indoor point cloud data were processed, including data format conversion, outline extraction, and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world interior was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measurement equipment can be used for the 3D reconstruction of all indoor elements and that the methods proposed in this paper realize this reconstruction efficiently. Moreover, the modeling precision was controlled within 5 cm, a satisfactory result.

  5. OmicsNet: a web-based tool for creation and visual analysis of biological networks in 3D space.

    PubMed

    Zhou, Guangyan; Xia, Jianguo

    2018-06-07

    Biological networks play increasingly important roles in omics data integration and systems biology. Over the past decade, many excellent tools have been developed to support creation, analysis and visualization of biological networks. However, important limitations remain: most tools are standalone programs, the majority of them focus on protein-protein interaction (PPI) or metabolic networks, and visualizations often suffer from 'hairball' effects when networks become large. To help address these limitations, we developed OmicsNet - a novel web-based tool that allows users to easily create different types of molecular interaction networks and visually explore them in a three-dimensional (3D) space. Users can upload one or multiple lists of molecules of interest (genes/proteins, microRNAs, transcription factors or metabolites) to create and merge different types of biological networks. The 3D network visualization system was implemented using the powerful Web Graphics Library (WebGL) technology that works natively in most major browsers. OmicsNet supports force-directed layout, multi-layered perspective layout, as well as spherical layout to help visualize and navigate complex networks. A rich set of functions have been implemented to allow users to perform coloring, shading, topology analysis, and enrichment analysis. OmicsNet is freely available at http://www.omicsnet.ca.
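
A force-directed layout of the kind OmicsNet applies in 3D can be sketched in a few lines: pairwise repulsion, spring-like attraction along edges, and a cooling schedule that shrinks the maximum step per iteration. This is a generic Fruchterman-Reingold-style sketch, not OmicsNet's actual WebGL implementation, and `layout_3d` and its parameters are illustrative:

```python
import numpy as np

def layout_3d(n_nodes, edges, iters=300, k=1.0, seed=0):
    """Minimal 3D force-directed layout: all node pairs repel with
    magnitude k^2/d, connected pairs attract with d^2/k, and a cooling
    schedule limits movement each iteration (ideal edge length ~ k)."""
    rng = np.random.default_rng(seed)
    pos = rng.standard_normal((n_nodes, 3))
    for it in range(iters):
        delta = pos[:, None, :] - pos[None, :, :]           # pairwise vectors
        dist = np.linalg.norm(delta, axis=-1) + 1e-9
        # Repulsion between every pair; diagonal delta is zero, so the
        # self-interaction contributes nothing.
        disp = ((k * k / dist**2)[..., None] * delta).sum(axis=1)
        for a, b in edges:                                  # edge attraction
            d = pos[a] - pos[b]
            f = (np.linalg.norm(d) / k) * d
            disp[a] -= f
            disp[b] += f
        step = 0.1 * (1.0 - it / iters)                     # cooling schedule
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos = pos + disp / length * np.minimum(length, step)
    return pos
```

For a single connected pair, repulsion and attraction balance at distance d = k, so the layout settles near the chosen ideal edge length.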

  6. Experimental demonstration of the microscopic origin of circular dichroism in two-dimensional metamaterials

    PubMed Central

    Khanikaev, A. B.; Arju, N.; Fan, Z.; Purtseladze, D.; Lu, F.; Lee, J.; Sarriugarte, P.; Schnell, M.; Hillenbrand, R.; Belkin, M. A.; Shvets, G.

    2016-01-01

    Optical activity and circular dichroism are fascinating physical phenomena originating from the interaction of light with chiral molecules or other nano-objects lacking mirror symmetries in three-dimensional (3D) space. While chiral optical properties are weak in most naturally occurring materials, they can be engineered and significantly enhanced in synthetic optical media known as chiral metamaterials, where the spatial symmetry of their building blocks is broken on a nanoscale. Although originally discovered in 3D structures, circular dichroism can also emerge in a two-dimensional (2D) metasurface. The origin of the resulting circular dichroism is rather subtle, and is related to non-radiative (Ohmic) dissipation of the constituent metamolecules. Because such dissipation occurs on a nanoscale, this effect had never been experimentally probed and visualized. Using a suite of recently developed nanoscale-measurement tools, we establish that the circular dichroism in a nanostructured metasurface occurs due to handedness-dependent Ohmic heating. PMID:27329108

  7. Image processing and 3D visualization in forensic pathologic examination

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1996-02-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and provide a mechanism for the development of a metric for analyzing how likely it may be that a given object may have caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.

  8. 3D documentation of footwear impressions and tyre tracks in snow with high resolution optical surface scanning.

    PubMed

    Buck, Ursula; Albertini, Nicola; Naether, Silvio; Thali, Michael J

    2007-09-13

    The three-dimensional documentation of footwear and tyre impressions in snow offers an opportunity to capture finer detail for identification than photographs alone. Up to now, various casting methods have been used for this purpose, and casting footwear impressions in snow has always been a difficult assignment. This work demonstrates that the non-destructive method of 3D optical surface scanning is suitable for the three-dimensional documentation of impressions in snow. The new method delivers more detailed results of higher accuracy than conventional casting techniques. The results of this easy-to-use, mobile 3D optical surface scanner were very satisfactory under different meteorological and snow conditions. The method is also suitable for impressions in soil, sand, or other materials. In addition to side-by-side comparison, the automatic comparison of the 3D models and the computation of deviations and data accuracy simplify the examination and deliver objective, reliable results, which can be visualized efficiently. Data exchange between investigating authorities at national or international level can be achieved easily with electronic data carriers.
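
The automatic computation of deviations between two 3D models reduces, in its simplest form, to nearest-neighbor distances between point clouds. A brute-force sketch (production scanner software uses spatial indexes and surface-based distances); `cloud_deviation` is a hypothetical helper:

```python
import numpy as np

def cloud_deviation(reference, test):
    """For every point of `test` (N x 3), find the distance to its nearest
    neighbour in `reference` (M x 3) by brute force. The mean and maximum
    summarize how far a questioned impression deviates from a reference."""
    diff = test[:, None, :] - reference[None, :, :]   # (N, M, 3) differences
    d = np.linalg.norm(diff, axis=-1).min(axis=1)     # per-point nearest distance
    return d.mean(), d.max()
```

Before such a comparison the two scans would normally be aligned (e.g., by an iterative closest point step), so that the residual distances reflect shape differences rather than pose.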

  9. Three-Dimensional Geologic Framework Model for a Karst Aquifer System, Hasty and Western Grove Quadrangles, Northern Arkansas

    USGS Publications Warehouse

    Turner, Kenzie J.; Hudson, Mark R.; Murray, Kyle E.; Mott, David N.

    2007-01-01

    Understanding ground-water flow in a karst aquifer benefits from a detailed conception of the three-dimensional (3D) geologic framework. Traditional two-dimensional products, such as geologic maps, cross-sections, and structure contour maps, convey a mental picture of the area but a stronger conceptualization can be achieved by constructing a digital 3D representation of the stratigraphic and structural geologic features. In this study, a 3D geologic model was created to better understand a karst aquifer system in the Buffalo National River watershed in northern Arkansas. The model was constructed based on data obtained from recent, detailed geologic mapping for the Hasty and Western Grove 7.5-minute quadrangles. The resulting model represents 11 stratigraphic zones of Ordovician, Mississippian, and Pennsylvanian age. As a result of the highly dissected topography, stratigraphic and structural control from geologic contacts and interpreted structure contours were sufficient for effectively modeling the faults and folds in the model area. Combined with recent dye-tracing studies, the 3D framework model is useful for visualizing the various geologic features and for analyzing the potential control they exert on the ground-water flow regime. Evaluation of the model, by comparison to published maps and cross-sections, indicates that the model accurately reproduces both the surface geology and subsurface geologic features of the area.

  10. The use of computed tomographic three-dimensional reconstructions to develop instructional models for equine pelvic ultrasonography.

    PubMed

    Whitcomb, Mary Beth; Doval, John; Peters, Jason

    2011-01-01

Ultrasonography has become increasingly useful for diagnosing pelvic fractures in horses; however, internal pelvic contours can be difficult to appreciate from external palpable landmarks. We developed three-dimensional (3D) simulations of the pelvic ultrasonographic examination to assist with translation of pelvic contours into two-dimensional (2D) images. Contiguous 1-mm transverse computed tomography (CT) images were acquired through an equine femur and hemipelvis using a single-slice helical scanner. 3D surface models were created using a DICOM reader and imported into a 3D modeling and animation program. The bone models were combined with a purchased 3D horse model, and the skin was made translucent to visualize pelvic surface contours. 3D models of ultrasound transducers were made from reference photos, and a thin sector shape was created to depict the ultrasound beam. Ultrasonographic examinations were simulated by moving transducers on the skin surface and rectally to produce images of pelvic structures. Camera angles were manipulated to best illustrate the transducer-beam-bone interface. Fractures were created in multiple configurations. Animations were exported as QuickTime movie files for use in presentations coupled with corresponding ultrasound video clips. 3D models provide a link between ultrasonographic technique and image generation by depicting the interaction of the transducer, ultrasound beam, and structure of interest. The horse model was important to facilitate understanding of the location of pelvic structures relative to the skin surface. While CT acquisition time was brief, manipulation within the 3D software program was time intensive. Results were worthwhile from an instructional standpoint based on user feedback. © 2011 Veterinary Radiology & Ultrasound.

  11. AntigenMap 3D: an online antigenic cartography resource.

    PubMed

    Barnett, J Lamar; Yang, Jialiang; Cai, Zhipeng; Zhang, Tong; Wan, Xiu-Feng

    2012-05-01

Antigenic cartography is a useful technique for visualizing and minimizing errors in immunological data by projecting antigens onto a 2D or 3D map. However, a 2D map may not be sufficient to capture the antigenic relationships in high-dimensional immunological data. AntigenMap 3D presents an online, interactive, and robust 3D antigenic cartography construction and visualization resource. AntigenMap 3D can be applied to identify antigenic variants and vaccine strain candidates for pathogens with rapid antigenic variation, such as influenza A virus. http://sysbio.cvm.msstate.edu/AntigenMap3D

  12. Research on Visualization of Ground Laser Radar Data Based on Osg

    NASA Astrophysics Data System (ADS)

    Huang, H.; Hu, C.; Zhang, F.; Xue, H.

    2018-04-01

Three-dimensional (3D) laser scanning is an advanced technology integrating optical, mechanical, electronic, and computer technologies. It can scan the whole shape and form of spatial objects with high precision. With this technology, one can directly collect the point cloud data of a ground object and build its structure for rendering. A capable 3D rendering engine is needed to optimize and display the resulting 3D model so as to meet the demands of real-time realistic rendering and scene complexity. OpenSceneGraph (OSG) is an open source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend; it is therefore widely used in virtual simulation, virtual reality, and scientific and engineering visualization. In this paper, a dynamic, interactive ground LiDAR data visualization platform is constructed based on OSG and the cross-platform C++ application development framework Qt. For point cloud data in .txt format and triangulated-network data files in .obj format, the platform implements display of both 3D laser point clouds and triangulated networks. Experiments show that the platform has strong practical value, as it is easy to operate and provides good interaction.
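The platform's .txt point-cloud input stage can be sketched in a few lines. The Python fragment below is a stand-in for the paper's C++/OSG code; the function names and the centering step are illustrative assumptions, showing only the kind of parsing a renderer needs before it can build vertex arrays:

```python
import numpy as np

def load_xyz_points(text):
    """Parse a .txt point cloud where each line holds 'x y z' (extra columns ignored)."""
    pts = []
    for line in text.strip().splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue  # skip blank or malformed lines
        pts.append([float(v) for v in fields[:3]])
    return np.asarray(pts)

def center_points(points):
    """Translate the cloud so its centroid sits at the origin, a typical step before rendering."""
    return points - points.mean(axis=0)
```

A real implementation would hand the centered array to the rendering engine as a vertex buffer; here the two helpers only prepare the data.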

  13. [Real-time three-dimensional (4D) ultrasound-guided prostatic biopsies on a phantom. Comparative study versus 2D guidance].

    PubMed

    Long, Jean-Alexandre; Daanen, Vincent; Moreau-Gaudry, Alexandre; Troccaz, Jocelyne; Rambeaud, Jean-Jacques; Descotes, Jean-Luc

    2007-11-01

    The objective of this study was to determine the added value of real-time three-dimensional (4D) ultrasound guidance of prostatic biopsies on a prostate phantom in terms of the precision of guidance and distribution. A prostate phantom was constructed. A real-time 3D ultrasonograph connected to a transrectal 5.9 MHz volumic transducer was used. Fourteen operators performed 336 biopsies with 2D guidance then 4D guidance according to a 12-biopsy protocol. Biopsy tracts were modelled by segmentation in a 3D ultrasound volume. Specific software allowed visualization of biopsy tracts in the reference prostate and evaluated the zone biopsied. A comparative study was performed to determine the added value of 4D guidance compared to 2D guidance by evaluating the precision of entry points and target points. The distribution was evaluated by measuring the volume investigated and by a redundancy ratio of the biopsy points. The precision of the biopsy protocol was significantly improved by 4D guidance (p = 0.037). No increase of the biopsy volume and no improvement of the distribution of biopsies were observed with 4D compared to 2D guidance. The real-time 3D ultrasound-guided prostate biopsy technique on a phantom model appears to improve the precision and reproducibility of a biopsy protocol, but the distribution of biopsies does not appear to be improved.

  14. SSVEP-based BCI for manipulating three-dimensional contents and devices

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Cho, Sungjin; Whang, Mincheol; Ju, Byeong-Kwon; Park, Min-Chul

    2012-06-01

Brain Computer Interface (BCI) studies have helped people manipulate electronic devices in 2D space, but less has been done for a vigorous 3D environment. The purpose of this study was to investigate the possibility of applying Steady State Visual Evoked Potentials (SSVEPs) to a 3D LCD display. Eight subjects (4 females) aged 20 to 26 years participated in the experiment. They performed simple navigation tasks in a simple 2D space and in a virtual environment with/without 3D flickers generated by a Film-type Patterned Retarder (FPR). The experiments were conducted in a counterbalanced order. The results showed that 3D stimuli enhanced BCI performance, but no significant effects were found owing to the small number of subjects. Visual fatigue that might be evoked by 3D stimuli was negligible in this study. The proposed SSVEP BCI combined with 3D flickers can allow people to control home appliances and other equipment such as wheelchairs, prosthetics, and orthotics without encountering the dangerous situations that may arise when using BCIs in the real world. A 3D stimuli-based SSVEP BCI would motivate people to use 3D displays and vitalize the 3D-related industry through its entertainment value and high performance.
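The core of any SSVEP BCI is deciding which flicker frequency the user is attending to. The sketch below is not the authors' pipeline; it is a minimal power-spectrum classifier (real systems typically use canonical correlation analysis and harmonics), with all names being illustrative:

```python
import numpy as np

def ssvep_classify(signal, fs, candidate_freqs):
    """Return the candidate stimulus frequency with the largest spectral power.

    signal: 1-D EEG trace, fs: sampling rate in Hz,
    candidate_freqs: flicker frequencies of the on-screen targets.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    powers = []
    for f in candidate_freqs:
        idx = np.argmin(np.abs(freqs - f))  # nearest FFT bin to the candidate
        powers.append(spectrum[idx])
    return candidate_freqs[int(np.argmax(powers))]
```

With a two-second window at 256 Hz, the frequency resolution is 0.5 Hz, enough to separate typical SSVEP targets such as 8, 10, 12 and 15 Hz.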

  15. The Visible Cement Data Set

    PubMed Central

    Bentz, Dale P.; Mizell, Symoane; Satterfield, Steve; Devaney, Judith; George, William; Ketcham, Peter; Graham, James; Porterfield, James; Quenard, Daniel; Vallee, Franck; Sallee, Hebert; Boller, Elodie; Baruchel, Jose

    2002-01-01

    With advances in x-ray microtomography, it is now possible to obtain three-dimensional representations of a material’s microstructure with a voxel size of less than one micrometer. The Visible Cement Data Set represents a collection of 3-D data sets obtained using the European Synchrotron Radiation Facility in Grenoble, France in September 2000. Most of the images obtained are for hydrating portland cement pastes, with a few data sets representing hydrating Plaster of Paris and a common building brick. All of these data sets are being made available on the Visible Cement Data Set website at http://visiblecement.nist.gov. The website includes the raw 3-D datafiles, a description of the material imaged for each data set, example two-dimensional images and visualizations for each data set, and a collection of C language computer programs that will be of use in processing and analyzing the 3-D microstructural images. This paper provides the details of the experiments performed at the ESRF, the analysis procedures utilized in obtaining the data set files, and a few representative example images for each of the three materials investigated. PMID:27446723
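Since the data sets are distributed as raw 3-D datafiles, a first processing step is loading them into an addressable array. This hedged sketch assumes a headerless raw layout; the actual dimensions and data type for each data set must be taken from its description page on the website, and the helper names here are illustrative rather than the site's C programs:

```python
import numpy as np

def read_raw_volume(path, shape, dtype=np.uint8):
    """Load a raw binary 3-D microstructure file into a (z, y, x) voxel array."""
    data = np.fromfile(path, dtype=dtype)
    return data.reshape(shape)

def phase_fraction(volume, label):
    """Fraction of voxels carrying a given phase label (e.g. pore space)."""
    return float(np.count_nonzero(volume == label)) / volume.size
```

Phase fractions like this (porosity, unhydrated cement, hydration products) are typical first analyses on segmented microstructure volumes.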

  16. Scientific Visualization Tools for Enhancement of Undergraduate Research

    NASA Astrophysics Data System (ADS)

    Rodriguez, W. J.; Chaudhury, S. R.

    2001-05-01

Undergraduate research projects that utilize remote sensing satellite instrument data to investigate atmospheric phenomena pose many challenges. A significant challenge is processing large amounts of multi-dimensional data. Remote sensing data initially requires mining; filtering of undesirable spectral, instrumental, or environmental features; and subsequently sorting and reformatting to files for easy and quick access. The data must then be transformed according to the needs of the investigation(s) and displayed for interpretation. These multidimensional datasets require views that can range from two-dimensional plots to multivariable-multidimensional scientific visualizations with animations. Science undergraduate students generally find these data processing tasks daunting. Researchers are required to fully understand the intricacies of the dataset and write computer programs or rely on commercially available software, which may not be trivial to use. In the time that undergraduate researchers have available for their research projects, learning the data formats, programming languages, and/or visualization packages is impractical. When dealing with large multi-dimensional data sets, appropriate scientific visualization tools are imperative in allowing students to have a meaningful and pleasant research experience while producing valuable scientific research results. The BEST Lab at Norfolk State University has been creating tools for multivariable-multidimensional analysis of Earth Science data. EzSAGE and SAGE4D have been developed to sort, analyze and visualize SAGE II (Stratospheric Aerosol and Gas Experiment) data with ease. Three- and four-dimensional visualizations in interactive environments can be produced. EzSAGE provides atmospheric slices in three dimensions, where the researcher can interactively change the scales in the three dimensions, the color tables, and the degree of smoothing to focus on particular phenomena. 
SAGE4D provides a navigable four-dimensional interactive environment. These tools allow students to make higher order decisions based on large multidimensional sets of data while diminishing the level of frustration that results from dealing with the details of processing large data sets.

  17. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data, since they involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and a corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model from the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose a data constellation technique based on space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert curve, preserves the Lebesgue measure and is Lipschitz continuous. 
Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its clustering in 2D.
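The indexing idea can be illustrated with a simpler space-filling curve. The paper uses 3D Hilbert curves, whose encoding is considerably more involved; the sketch below substitutes the 3D Morton (Z-order) curve, named plainly as a stand-in, to show the same principle of mapping 3D positions to a 1D key so that nearby objects tend to receive nearby keys:

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of non-negative integers (x, y, z) into one Z-order key.

    bits is the number of bits kept per coordinate; the resulting key can be
    used to sort or index 3D objects along a space-filling curve.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)      # x bit goes to position 3i
        key |= ((y >> i) & 1) << (3 * i + 1)  # y bit to position 3i + 1
        key |= ((z >> i) & 1) << (3 * i + 2)  # z bit to position 3i + 2
    return key
```

Sorting building blocks by such a key clusters spatially adjacent objects in storage, which is what speeds up range and nearest-neighbor queries; the Hilbert curve improves on Morton order by avoiding its long jumps.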

  18. Design of a 3-dimensional visual illusion speed reduction marking scheme.

    PubMed

    Liang, Guohua; Qian, Guomin; Wang, Ye; Yi, Zige; Ru, Xiaolei; Ye, Wei

    2017-03-01

    To determine which graphic and color combination for a 3-dimensional visual illusion speed reduction marking scheme presents the best visual stimulus, five parameters were designed. According to the Balanced Incomplete Blocks-Law of Comparative Judgment, three schemes, which produce strong stereoscopic impressions, were screened from the 25 initial design schemes of different combinations of graphics and colors. Three-dimensional experimental simulation scenes of the three screened schemes were created to evaluate four different effects according to a semantic analysis. The following conclusions were drawn: schemes with a red color are more effective than those without; the combination of red, yellow and blue produces the best visual stimulus; a larger area from the top surface and the front surface should be colored red; and a triangular prism should be painted as the graphic of the marking according to the stereoscopic impression and the coordination of graphics with the road.

  19. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

Basdogan, Cagatay; Ho, Chih-Hao; Srinivasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
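A common introductory haptic-rendering rule, not the specific algorithms of this NASA work, is penalty-based force generation: when the haptic probe penetrates a virtual surface, the device pushes back with a spring force proportional to penetration depth along the surface normal. A minimal sketch for a sphere, with all names and the stiffness value being illustrative assumptions:

```python
import numpy as np

def penalty_force(probe_pos, sphere_center, sphere_radius, stiffness=500.0):
    """Spring (penalty) force on a haptic probe penetrating a sphere.

    Returns the zero vector when the probe is outside the sphere; otherwise
    the force is stiffness * depth along the outward surface normal.
    """
    offset = np.asarray(probe_pos, float) - np.asarray(sphere_center, float)
    dist = np.linalg.norm(offset)
    depth = sphere_radius - dist
    if depth <= 0 or dist == 0:
        return np.zeros(3)  # no contact, or degenerate contact at the center
    return stiffness * depth * (offset / dist)
```

Real-time haptic loops evaluate a rule like this at around 1 kHz, which is why rendering algorithms emphasize cheap collision and force computations.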

  20. An Approach of Web-based Point Cloud Visualization without Plug-in

    NASA Astrophysics Data System (ADS)

    Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei

    2016-11-01

With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until the introduction of WebGL, point cloud visualization was limited to desktop-based solutions; now several web renderers are available. This paper addresses the current issues in web-based point cloud visualization and proposes a method of web-based point cloud visualization without plug-ins. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds, with web interaction, was developed in JavaScript. Finally, the method is applied to a real case. Experiments show that the new approach has great practical value and avoids the shortcomings of the existing WebGIS solutions.

  1. High-Resolution Isotropic Three-Dimensional MR Imaging of the Extraforaminal Segments of the Cranial Nerves.

    PubMed

    Wen, Jessica; Desai, Naman S; Jeffery, Dean; Aygun, Nafi; Blitz, Ari

    2018-02-01

High-resolution isotropic 3-dimensional (3D) MR imaging with and without contrast is now routinely used for imaging evaluation of cranial nerve anatomy and pathologic conditions. The anatomic details of the extraforaminal segments are well-visualized with these techniques. A wide range of pathologic entities may cause enhancement or displacement of the nerve, which is now visible to an extent not available on standard 2D imaging. This article highlights the anatomy of extraforaminal segments of the cranial nerves and uses select cases to illustrate the utility and power of these sequences, with a focus on constructive interference in steady-state. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Domik, Gitta; Alam, Salim; Pinkney, Paul

    1992-01-01

This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the software system STAR, porting STAR to IDL/widgets (improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques, by exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), by experimenting with tools still under research at the Department of Computer Science (e.g., use of glyphs for multidimensional data visualization), and by researching current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, any project activity is driven by the need to interpret astrophysical data more effectively.

  3. Goal-Directed Grasping: The Dimensional Properties of an Object Influence the Nature of the Visual Information Mediating Aperture Shaping

    ERIC Educational Resources Information Center

    Holmes, Scott A.; Heath, Matthew

    2013-01-01

    An issue of continued debate in the visuomotor control literature surrounds whether a 2D object serves as a representative proxy for a 3D object in understanding the nature of the visual information supporting grasping control. In an effort to reconcile this issue, we examined the extent to which aperture profiles for grasping 2D and 3D objects…

  4. The relationship between three-dimensional imaging and group decision making: an exploratory study.

    PubMed

    Litynski, D M; Grabowski, M; Wallace, W A

    1997-07-01

This paper describes an empirical investigation of the effect of three-dimensional (3-D) imaging on group performance in a tactical planning task. The objective of the study is to examine the role that stereoscopic imaging can play in supporting face-to-face group problem solving and decision making, in particular the alternative generation and evaluation processes in teams. It was hypothesized that with the stereoscopic display, group members would better visualize the information concerning the task environment, producing open communication and information exchanges. The experimental setting was a tactical command and control task, and the quality of the decisions and nature of the group decision process were investigated with three treatments: 1) noncomputerized, i.e., topographic maps with depth cues; 2) two-dimensional (2-D) imaging; and 3) stereoscopic imaging. The results on group performance were mixed. However, those groups with the stereoscopic displays generated more alternatives and spent less time on evaluation. In addition, the stereoscopic decision aid did not interfere with the group problem solving and decision-making processes. The paper concludes with a discussion of potential benefits and the need to resolve demonstrated weaknesses of the technology.

  5. Three-dimensional reconstruction of rat knee joint using episcopic fluorescence image capture.

    PubMed

    Takaishi, R; Aoyama, T; Zhang, X; Higuchi, S; Yamada, S; Takakuwa, T

    2014-10-01

Development of the knee joint was morphologically investigated, and the process of cavitation was analyzed by using episcopic fluorescence image capture (EFIC) to create spatial and temporal three-dimensional (3D) reconstructions. Knee joints of Wistar rat embryos between embryonic day (E)14 and E20 were investigated. Samples were sectioned and visualized using EFIC. Then, two-dimensional image stacks were reconstructed using OsiriX software, and 3D reconstructions were generated using Amira software. Cavitations of the knee joint were constructed from five divided portions. Cavity formation initiated at multiple sites at E17; among them, the femoropatellar cavity (FPC) was the first. Cavitations of the medial side preceded those of the lateral side. Each cavity connected at E20 when cavitations around the anterior cruciate ligament (ACL) and posterior cruciate ligament (PCL) were completed. Cavity formation initiated from six portions. In each portion, development proceeded asymmetrically. These results concerning anatomical development of the knee joint using EFIC contribute to a better understanding of the structural features of the knee joint. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  6. 3D reconstruction of internal structure of animal body using near-infrared light

    NASA Astrophysics Data System (ADS)

    Tran, Trung Nghia; Yamamoto, Kohei; Namita, Takeshi; Kato, Yuji; Shimizu, Koichi

    2014-03-01

To realize three-dimensional (3D) optical imaging of the internal structure of an animal body, we have developed a new technique to reconstruct CT images from two-dimensional (2D) transillumination images. In transillumination imaging, the image is blurred by the strong scattering in tissue. We had previously developed a scattering suppression technique using the point spread function (PSF) for a fluorescent light source in the body. In this study, we newly propose a technique to apply this PSF to the image of an unknown light-absorbing structure. The effectiveness of the proposed technique was examined in experiments with a model phantom and a mouse. In the phantom experiment, absorbers were placed in a tissue-equivalent medium to simulate the light-absorbing organs in a mouse body. Near-infrared light illuminated one side of the phantom, and the image was recorded with a CMOS camera from the other side. Using the proposed technique, the scattering effect was efficiently suppressed and the absorbing structure could be visualized in the 2D transillumination image. Using the 2D images obtained in many different orientations, we could reconstruct the 3D image. In the mouse experiment, an anesthetized mouse was held in an acrylic cylindrical holder. We could visualize internal organs such as the kidneys through the mouse's abdomen using the proposed technique. A 3D image of the kidneys and a part of the liver was reconstructed. Through these experimental studies, the feasibility of practical 3D imaging of the internal light-absorbing structure of a small animal was verified.
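The scattering-suppression step amounts to deconvolving the transillumination image with the PSF. The paper derives a depth-dependent PSF for light in tissue; the sketch below only illustrates the deconvolution idea with a generic PSF, using standard Wiener filtering in the Fourier domain (the function name and the `noise_reg` regularization constant are assumptions):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_reg=0.01):
    """Suppress blur by Wiener deconvolution.

    blurred: 2-D image, psf: same-size kernel centered in the array.
    noise_reg trades noise amplification against sharpness.
    """
    # shift the PSF center to the (0, 0) corner expected by the FFT
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_reg)
    return np.real(np.fft.ifft2(W * G))
```

In the paper's setting this step is applied to each projection before the 2D images from many orientations are combined into the 3D reconstruction.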

  7. Three-dimensional user interfaces for scientific visualization

    NASA Technical Reports Server (NTRS)

    Vandam, Andries

    1995-01-01

    The main goal of this project is to develop novel and productive user interface techniques for creating and managing visualizations of computational fluid dynamics (CFD) datasets. We have implemented an application framework in which we can visualize computational fluid dynamics user interfaces. This UI technology allows users to interactively place visualization probes in a dataset and modify some of their parameters. We have also implemented a time-critical scheduling system which strives to maintain a constant frame-rate regardless of the number of visualization techniques. In the past year, we have published parts of this research at two conferences, the research annotation system at Visualization 1994, and the 3D user interface at UIST 1994. The real-time scheduling system has been submitted to SIGGRAPH 1995 conference. Copies of these documents are included with this report.

  8. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    ERIC Educational Resources Information Center

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…

  9. The development of a virtual 3D model of the renal corpuscle from serial histological sections for E-learning environments.

    PubMed

    Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin

    2015-01-01

Histology is a core subject in the anatomical sciences, where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education, 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines the steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, and nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to displaying the imported images of the original sections, the software generates, and allows for visualization of, virtual sections in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education of microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. © 2015 American Association of Anatomists.
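The "virtual microtome" idea is, at its core, resampling a 3D image stack along an arbitrary plane. The hedged sketch below shows that essence with nearest-neighbor sampling to stay numpy-only (a production tool would interpolate); the function name and its parameters are illustrative, not the software described above:

```python
import numpy as np

def oblique_slice(volume, origin, u, v, out_shape):
    """Extract a virtual section from a 3-D stack.

    The plane passes through `origin` and is spanned by direction vectors
    u and v (one volume-coordinate step per output row/column).
    """
    rows, cols = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    coords = (np.asarray(origin, float)[:, None, None]
              + np.asarray(u, float)[:, None, None] * rows
              + np.asarray(v, float)[:, None, None] * cols)
    idx = np.rint(coords).astype(int)       # nearest-neighbor sampling
    for axis in range(3):                   # clamp to the volume bounds
        idx[axis] = np.clip(idx[axis], 0, volume.shape[axis] - 1)
    return volume[idx[0], idx[1], idx[2]]
```

With axis-aligned u and v this reproduces an original section exactly; tilting u and v yields sections in orientations that were never physically cut.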

  10. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, in which the tonsils become enlarged and can increase airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667
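Once a closed surface has been reconstructed, estimating the enclosed volume is a standard mesh computation. The sketch below uses the signed-tetrahedron (divergence theorem) method, one common way to turn a scanned 3D surface into a volume estimate; it is not the paper's specific pipeline, and the function name is illustrative:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently wound triangle mesh.

    Each triangle forms a tetrahedron with the origin; the signed volumes
    of all tetrahedra sum to the enclosed volume.
    """
    v = np.asarray(vertices, float)
    total = 0.0
    for a, b, c in faces:
        total += np.dot(v[a], np.cross(v[b], v[c])) / 6.0
    return abs(total)
```

The same quantity scaled against the airway cross-section would give an obstruction-percentage estimate of the kind reported above.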

  11. Minimal invasive complete excision of benign breast tumors using a three-dimensional ultrasound-guided mammotome vacuum device.

    PubMed

    Baez, E; Huber, A; Vetter, M; Hackelöer, B-J

    2003-03-01

    The aim of this study was to evaluate the use of three-dimensional (3D) ultrasonography in the complete excision of benign breast tumors using ultrasound-guided vacuum-assisted core-needle biopsy (Mammotome). A protocol for the management of benign breast tumors is proposed. Twenty consecutive patients with sonographically benign breast lesions underwent 3D ultrasound-guided mammotome biopsy under local anesthesia. The indication for surgical biopsy was a solid lesion with benign characteristics on both two-dimensional (2D) and 3D ultrasound imaging, increasing in size over time or causing pain or irritation. Preoperatively, the size of the lesion was assessed using 2D and 3D volumetry. During vacuum biopsy the needle was visualized sonographically in all three dimensions, including the coronal plane. Excisional biopsy was considered complete when no residual tumor tissue could be seen sonographically. Ultrasonographic follow-up examinations were performed on the following day and 3-6 months later to assess residual tissue and scarring. All lesions were histologically benign. Follow-up examinations revealed complete excision of all lesions of < 1.5 mL in volume as assessed by 3D volumetry. 3D ultrasonographic volume assessment was more accurate than 2D using the ellipsoid formula or assessment of the maximum diameter for the prediction of complete excision of the tumor. No bleeding or infections occurred postoperatively and no scarring was seen ultrasonographically on follow-up examinations. Ultrasound-guided vacuum-assisted biopsy allows complete excision of benign breast lesions that are

  12. Motion processing with two eyes in three dimensions.

    PubMed

    Rokers, Bas; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C

    2011-02-11

The movement of an object toward or away from the head is perhaps the most critical piece of information an organism can extract from its environment. Such 3D motion produces horizontally opposite motions on the two retinae. Little is known about how or where the visual system combines these two retinal motion signals, relative to the wealth of knowledge about the neural hierarchies involved in 2D motion processing and binocular vision. Canonical conceptions of primate visual processing assert that neurons early in the visual system combine monocular inputs into a single cyclopean stream (lacking eye-of-origin information) and extract 1D ("component") motions; later stages then extract 2D pattern motion from the cyclopean output of the earlier stage. Here, however, we show that 3D motion perception is in fact affected by the comparison of opposite 2D pattern motions between the two eyes. Three-dimensional motion sensitivity depends systematically on pattern motion direction when dichoptically viewing gratings and plaids, and a novel "dichoptic pseudoplaid" stimulus provides strong support for the use of interocular pattern motion differences by precluding potential contributions from conventional disparity-based mechanisms. These results imply the existence of eye-of-origin information in later stages of motion processing and therefore motivate the incorporation of such eye-specific pattern-motion signals in models of motion processing and binocular integration.

  13. Virtual reality in radiology: virtual intervention

    NASA Astrophysics Data System (ADS)

    Harreld, Michael R.; Valentino, Daniel J.; Duckwiler, Gary R.; Lufkin, Robert B.; Karplus, Walter J.

    1995-04-01

    Intracranial aneurysms are the primary cause of non-traumatic subarachnoid hemorrhage. Morbidity and mortality remain high even with current endovascular intervention techniques. It is presently impossible to identify which aneurysms will grow and rupture, however hemodynamics are thought to play an important role in aneurysm development. With this in mind, we have simulated blood flow in laboratory animals using three dimensional computational fluid dynamics software. The data output from these simulations is three dimensional, complex and transient. Visualization of 3D flow structures with standard 2D display is cumbersome, and may be better performed using a virtual reality system. We are developing a VR-based system for visualization of the computed blood flow and stress fields. This paper presents the progress to date and future plans for our clinical VR-based intervention simulator. The ultimate goal is to develop a software system that will be able to accurately model an aneurysm detected on clinical angiography, visualize this model in virtual reality, predict its future behavior, and give insight into the type of treatment necessary. An associated database will give historical and outcome information on prior aneurysms (including dynamic, structural, and categorical data) that will be matched to any current case, and assist in treatment planning (e.g., natural history vs. treatment risk, surgical vs. endovascular treatment risks, cure prediction, complication rates).

  14. Three-dimensional particle tracking velocimetry using dynamic vision sensors

    NASA Astrophysics Data System (ADS)

    Borer, D.; Delbruck, T.; Rösgen, T.

    2017-12-01

    A fast-flow visualization method is presented based on tracking neutrally buoyant soap bubbles with a set of neuromorphic cameras. The "dynamic vision sensors" register only the changes in brightness with very low latency, capturing fast processes at a low data rate. The data consist of a stream of asynchronous events, each encoding the corresponding pixel position, the time instant of the event, and the sign of the change in logarithmic intensity. The work uses three such synchronized cameras to perform 3D particle tracking in a medium-sized wind tunnel. The data analysis relies on Kalman filters to associate the asynchronous events with individual tracers and to reconstruct the three-dimensional path and velocity based on calibrated sensor information.
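    The event-association details are specific to the authors' pipeline, but the per-track state estimation they describe is naturally handled by a Kalman filter whose predict step spans the irregular inter-event intervals. A minimal single-axis sketch under a constant-velocity model (the model and all noise parameters are assumptions, not the paper's values):

```python
def kalman_step(x, v, P, z, dt, q=1e-4, r=1e-6):
    """One predict/update cycle of a constant-velocity Kalman filter.

    State is (position x, velocity v); z is a position measurement arriving
    dt seconds after the previous one (events are asynchronous).
    """
    # Predict: x' = x + v*dt, P' = F P F^T + Q
    x = x + v * dt
    p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update with a position-only measurement (H = [1, 0])
    s = p00 + r
    k0, k1 = p00 / s, p10 / s
    y = z - x                                  # innovation
    x, v = x + k0 * y, v + k1 * y
    P = [[(1 - k0) * p00, (1 - k0) * p01],
         [p10 - k1 * p00, p11 - k1 * p01]]
    return x, v, P

# Simulate asynchronous position events from a tracer moving at 0.5 m/s
x, v, P = 0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]
t = 0.0
for i in range(50):
    dt = (0.008, 0.012, 0.010)[i % 3]          # irregular inter-event intervals
    t += dt
    x, v, P = kalman_step(x, v, P, 0.5 * t, dt)
print(f"estimated velocity: {v:.3f} m/s")
```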

  15. Volumetric three-dimensional display system with rasterization hardware

    NASA Astrophysics Data System (ADS)

    Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua

    2001-06-01

    An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line-drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3-D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.
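    The quoted figures are internally consistent: a ~4 kHz slice rate refreshed at 20 Hz yields ~200 slices per revolution, and 200 slices of 768 x 768 pixels give roughly 118 million voxels, in line with the "greater than 90 million" claim. A quick back-of-envelope check:

```python
slice_rate_hz = 4000        # approximate slice projection rate
volume_refresh_hz = 20      # volume update rate
slice_res = 768             # pixels per side of each square slice

slices_per_volume = slice_rate_hz // volume_refresh_hz
voxels = slice_res * slice_res * slices_per_volume
print(slices_per_volume, voxels)  # 200 slices, ~118 million voxels
```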

  16. Pulse Phase Dynamic Thermal Tomography Investigation on the Defects of the Solid-Propellant Missile Engine Cladding Layer

    NASA Astrophysics Data System (ADS)

    Peng, Wei; Wang, Fei; Liu, Jun-yan; Xiao, Peng; Wang, Yang; Dai, Jing-min

    2018-04-01

    Pulse phase dynamic thermal tomography (PP-DTT) was introduced as a nondestructive inspection technique to detect defects in the cladding layer of a solid-propellant missile engine. A one-dimensional thermal wave model stimulated by a pulse signal was developed and employed to investigate the thermal wave transmission characteristics. The pulse phase algorithm was used to extract the thermal wave characteristics from the thermal radiation. A depth calibration curve was obtained with the fuzzy c-means algorithm. Moreover, PP-DTT, a depth-resolved photothermal imaging modality, was employed to enable three-dimensional (3D) visualization of cladding layer defects. A comparison experiment between PP-DTT and classical dynamic thermal tomography was conducted. The results showed that PP-DTT can reconstruct the 3D topography of defects with high quality.
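    The pulse phase algorithm referenced here conventionally takes, for each pixel, the phase of a low-frequency bin of the Fourier transform of the temperature-decay sequence; a subsurface defect alters the decay and hence the phase. A schematic sketch with synthetic exponential decays (the time constants are hypothetical, not the paper's data):

```python
import cmath
import math

def dft_bin(signal, k):
    """Single bin of the discrete Fourier transform."""
    n = len(signal)
    return sum(s * cmath.exp(-2j * math.pi * k * i / n) for i, s in enumerate(signal))

def pulse_phase(signal, k=1):
    """Phase of the k-th frequency bin -- the 'phasegram' value for one pixel."""
    return cmath.phase(dft_bin(signal, k))

t = [0.05 * i for i in range(64)]               # frame times in seconds
sound = [math.exp(-ti / 0.5) for ti in t]       # fast decay: intact region
defect = [math.exp(-ti / 1.5) for ti in t]      # slower decay over a defect
contrast = pulse_phase(defect) - pulse_phase(sound)
print(f"phase contrast: {contrast:.3f} rad")
```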

  17. Reconstruction of measurable three-dimensional point cloud model based on large-scene archaeological excavation sites

    NASA Astrophysics Data System (ADS)

    Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing

    2017-01-01

    This paper outlines a low-cost, user-friendly photogrammetric technique that uses nonmetric cameras to obtain digital sequence images of excavation sites, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a number of global control points at the excavation site, to reconstruct high-precision measurable three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratio, high inclination, unstable altitudes, and significant ground elevation changes that affect image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method achieves the same visual result and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes but yields lower accuracy when reconstructing small scenes at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for accurate location of cultural relics, archaeological excavation, investigation, and site protection planning. The proposed method has comprehensive application value.
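    A common building block of the image-sequence matching step named above is normalized cross-correlation between candidate patches; a minimal sketch of that score (not the authors' exact matcher):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches (flattened)."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

patch = [1.0, 2.0, 3.0, 4.0]
print(ncc(patch, patch))        # 1.0: identical patches correlate perfectly
print(ncc(patch, patch[::-1]))  # -1.0: reversed intensities anti-correlate
```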

  18. Visual Attention and Perception in Three-Dimensional Space

    DTIC Science & Technology

    1992-01-01

    Hughes & Zimba, 1987). The 'arrows' in the near-far condition (Fig. 1c) were actually wedges that pointed either toward or away from the subject, with their...1992). The increase in saccade latencies in the lower visual field. Perception and Psychophysics (in press). Hughes, H. C., & Zimba, L. D. (1987

  19. Development of three-dimensional memory (3D-M)

    NASA Astrophysics Data System (ADS)

    Yu, Hong-Yu; Shen, Chen; Jiang, Lingli; Dong, Bin; Zhang, Guobiao

    2016-10-01

    Since the invention of 3-D ROM in 1996, three-dimensional memory (3D-M) has been under development for nearly two decades. In this presentation, we'll review the 3D-M history and compare different 3D-Ms (including 3D-OTP from Matrix Semiconductor, 3D-NAND from Samsung and 3D-XPoint from Intel/Micron).

  20. 3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.

    PubMed

    Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun

    2016-08-01

    Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), a promising imaging instrument, has been used for decades to determine the surface properties (e.g., compositions or geometries) of specimens, achieving increased magnification and contrast and resolution finer than one nanometer. Whereas SEM micrographs remain two-dimensional (2D), many research and educational questions truly require knowledge of their three-dimensional (3D) structures. 3D surface reconstruction from SEM images leads to a remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples being investigated. In this contribution, we integrate several computational technologies, including machine learning, a contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. Experiments performed on real and synthetic data demonstrate that the approach achieves significant precision in both SEM extrinsic calibration and 3D surface modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Three-dimensional Magnetic Resonance Imaging of the Anterolateral Ligament of the Knee: An Evaluation of Intact and Anterior Cruciate Ligament-Deficient Knees From the Scientific Anterior Cruciate Ligament Network International (SANTI) Study Group.

    PubMed

    Muramatsu, Koichi; Saithna, Adnan; Watanabe, Hiroki; Sasaki, Kana; Yokosawa, Kenta; Hachiya, Yudo; Banno, Tatsuo; Helito, Camilo Partezani; Sonnery-Cottet, Bertrand

    2018-05-02

    To determine the visualization rate of the anterolateral ligament (ALL) in uninjured and anterior cruciate ligament (ACL)-deficient knees using 3-dimensional (3D) magnetic resonance imaging (MRI) and to characterize the spectrum of ALL injury observed in ACL-deficient knees, as well as determine the interobserver and intraobserver reliability of a 3D MRI classification of ALL injury. A total of 100 knees (60 ACL deficient and 40 uninjured) underwent 3D MRI. The ALL was evaluated by 2 blinded orthopaedic surgeons. The ALL was classified as follows: type A, continuous, clearly defined low-signal band; type B, warping, thinning, or iso-signal changes; and type C, without clear continuity. The comparison between imaging performed early after ACL injury (<1 month) and delayed imaging (>1 month) was evaluated, as was intraobserver and interobserver reliability. Complete visualization of the ALL was achieved in all uninjured knees. In the ACL-deficient group, 24 knees underwent early imaging, with 87.5% showing evidence of ALL injury (3 normal, or type A, knees [12.5%], 18 type B [75.0%], and 3 type C [12.5%]). The remaining 36 knees underwent delayed imaging, with 55.6% showing evidence of injury (16 type A [44.4%], 18 type B [50.0%], and 2 type C [5.6%]). The difference in the rate of injury between the 2 groups was significant (P = .03). Multivariate analysis showed that the delay from ACL injury to MRI was the only factor (negatively) associated with the rate of injury to the ALL. Interobserver reliability and intraobserver reliability of the classification of ALL type were good (κ = 0.86 and κ = 0.93, respectively). Three-dimensional MRI allows full visualization of the ALL in all normal knees. The rate of injury to the ALL in acutely ACL-injured knees identified on 3D MRI is higher than previous reports using standard MRI techniques. This rate is significantly higher than the rate of injury to the ALL identified on delayed imaging of ACL-injured knees. 
Level IV, diagnostic, case-control study. Copyright © 2018 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
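    The κ values quoted above are Cohen's kappa, which corrects raw inter-rater agreement for chance agreement: κ = (p_o - p_e) / (1 - p_e). A small sketch with a hypothetical 2x2 rating table (not the study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa from a square inter-rater confusion table."""
    total = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / total       # observed agreement
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    pe = sum(r * c for r, c in zip(rows, cols)) / total ** 2       # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical table: two observers grading knees as type A vs. non-A
k = cohens_kappa([[40, 2], [3, 55]])
print(f"kappa = {k:.2f}")
```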

  2. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

    Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA Goddard Space Flight Center's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  3. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  4. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1.

    PubMed

    Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu

    2014-04-23

    How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3.
We propose that size constancy, the vanishing point, and vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line.

  6. A Novel Use of Three-dimensional High-frequency Ultrasonography for Early Pregnancy Characterization in the Mouse.

    PubMed

    Peavey, Mary C; Reynolds, Corey L; Szwarc, Maria M; Gibbons, William E; Valdes, Cecilia T; DeMayo, Francesco J; Lydon, John P

    2017-10-24

    High-frequency ultrasonography (HFUS) is a common method to non-invasively monitor the real-time development of the human fetus in utero. The mouse is routinely used as an in vivo model to study embryo implantation and pregnancy progression. Unfortunately, such murine studies require pregnancy interruption to enable follow-up phenotypic analysis. To address this issue, we used three-dimensional (3-D) reconstruction of HFUS imaging data for early detection and characterization of murine embryo implantation sites and their individual developmental progression in utero. Combining HFUS imaging with 3-D reconstruction and modeling, we were able to accurately quantify embryo implantation site number as well as monitor developmental progression in pregnant C57BL6J/129S mice from 5.5 days post coitus (d.p.c.) through to 9.5 d.p.c. with the use of a transducer. Measurements included: number, location, and volume of implantation sites as well as inter-implantation site spacing; embryo viability was assessed by cardiac activity monitoring. In the immediate post-implantation period (5.5 to 8.5 d.p.c.), 3-D reconstruction of the gravid uterus in both mesh and solid overlay format enabled visual representation of the developing pregnancies within each uterine horn. As genetically engineered mice continue to be used to characterize female reproductive phenotypes derived from uterine dysfunction, this method offers a new approach to detect, quantify, and characterize early implantation events in vivo. This novel use of 3-D HFUS imaging demonstrates the ability to successfully detect, visualize, and characterize embryo-implantation sites during early murine pregnancy in a non-invasive manner. The technology offers a significant improvement over current methods, which rely on the interruption of pregnancies for gross tissue and histopathologic characterization. 
Here we use a video and text format to describe how to successfully perform ultrasounds of early murine pregnancy to generate reliable and reproducible data with reconstruction of the uterine form in mesh and solid 3-D images.

  7. Cortical dynamics of three-dimensional figure-ground perception of two-dimensional pictures.

    PubMed

    Grossberg, S

    1997-07-01

    This article develops the FACADE theory of 3-dimensional (3-D) vision and figure-ground separation to explain data concerning how 2-dimensional pictures give rise to 3-D percepts of occluding and occluded objects. The model describes how geometrical and contrastive properties of a picture can either cooperate or compete when forming the boundaries and surface representation that subserve conscious percepts. Spatially long-range cooperation and spatially short-range competition work together to separate the boundaries of occluding figures from their occluded neighbors. This boundary ownership process is sensitive to image T junctions at which occluded figures contact occluding figures. These boundaries control the filling-in of color within multiple depth-sensitive surface representations. Feedback between surface and boundary representations strengthens consistent boundaries while inhibiting inconsistent ones. Both the boundary and the surface representations of occluded objects may be amodally completed, while the surface representations of unoccluded objects become visible through modal completion. Functional roles for conscious modal and amodal representations in object recognition, spatial attention, and reaching behaviors are discussed. Model interactions are interpreted in terms of visual, temporal, and parietal cortices.

  8. Direct cortical control of 3D neuroprosthetic devices.

    PubMed

    Taylor, Dawn M; Tillery, Stephen I Helms; Schwartz, Andrew B

    2002-06-07

    Three-dimensional (3D) movement of neuroprosthetic devices can be controlled by the activity of cortical neurons when appropriate algorithms are used to decode intended movement in real time. Previous studies assumed that neurons maintain fixed tuning properties, and the studies used subjects who were unaware of the movements predicted by their recorded units. In this study, subjects had real-time visual feedback of their brain-controlled trajectories. Cell tuning properties changed when used for brain-controlled movements. By using control algorithms that track these changes, subjects made long sequences of 3D movements using far fewer cortical units than expected. Daily practice improved movement accuracy and the directional tuning of these units.
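    The decoding approach in this line of work descends from the population vector algorithm, in which each unit "votes" for its preferred direction in proportion to its firing rate; the study's contribution is re-fitting the tuning properties as they change during brain control. A 2D sketch of the basic decoder (cosine tuning assumed; all values hypothetical):

```python
import math

def population_vector(rates, preferred, baseline):
    """Sum each unit's preferred-direction vector, weighted by rate above baseline."""
    px = sum((r - baseline) * math.cos(p) for r, p in zip(rates, preferred))
    py = sum((r - baseline) * math.sin(p) for r, p in zip(rates, preferred))
    return math.atan2(py, px)

preferred = [math.radians(45 * i) for i in range(8)]   # 8 units, evenly spaced tuning
target = math.radians(30)                              # intended movement direction
baseline, gain = 10.0, 5.0
rates = [baseline + gain * math.cos(target - p) for p in preferred]
decoded = population_vector(rates, preferred, baseline)
print(f"decoded direction: {math.degrees(decoded):.1f} deg")
```

With uniformly spaced preferred directions and ideal cosine tuning, the decoded vector recovers the intended direction exactly; the adaptive algorithms described above address the realistic case where tuning drifts over time.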

  9. Three-dimensional organotypic culture: experimental models of mammalian biology and disease.

    PubMed

    Shamir, Eliah R; Ewald, Andrew J

    2014-10-01

    Mammalian organs are challenging to study as they are fairly inaccessible to experimental manipulation and optical observation. Recent advances in three-dimensional (3D) culture techniques, coupled with the ability to independently manipulate genetic and microenvironmental factors, have enabled the real-time study of mammalian tissues. These systems have been used to visualize the cellular basis of epithelial morphogenesis, to test the roles of specific genes in regulating cell behaviours within epithelial tissues and to elucidate the contribution of microenvironmental factors to normal and disease processes. Collectively, these novel models can be used to answer fundamental biological questions and generate replacement human tissues, and they enable testing of novel therapeutic approaches, often using patient-derived cells.

  10. Play dough as an educational tool for visualization of complicated cerebral aneurysm anatomy.

    PubMed

    Eftekhar, Behzad; Ghodsi, Mohammad; Ketabchi, Ebrahim; Ghazvini, Arman Rakan

    2005-05-10

    Visualizing the three-dimensional (3D) structure of cerebral vascular lesions from two-dimensional (2D) angiograms is one of the skills that neurosurgical residents should acquire during their training. Although ongoing progress in computer software and digital imaging systems has greatly facilitated the viewing and interpretation of cerebral angiograms, these facilities are not always available. We present the use of play dough as an adjunct to the teaching armamentarium for training in the visualization of cerebral aneurysms in selected cases. The advantages of play dough are its low cost, availability, and simplicity of use; it is more efficient and realistic in training the less experienced resident than simple drawings, or even angiographic views from different angles, and it requires no computers or similar equipment. The disadvantages include the psychological resistance of residents to using something in surgical training that is usually considered a toy, and that it is not as clean as drawings or computerized images. Although technology and computerized software using patients' own imaging data seem likely to become more advanced in the future, the use of play dough in some complicated cerebral aneurysm cases may be helpful for 3D reconstruction of the real situation.

  11. Presentation Extensions of the SOAP

    NASA Technical Reports Server (NTRS)

    Carnright, Robert; Stodden, David; Coggi, John

    2009-01-01

    A set of extensions of the Satellite Orbit Analysis Program (SOAP) enables simultaneous and/or sequential presentation of information from multiple sources. SOAP is used in the aerospace community as a means of collaborative visualization and analysis of data on planned spacecraft missions. The following definitions of terms also describe the display modalities of SOAP as now extended. In SOAP terminology: (a) "View" signifies an animated three-dimensional (3D) scene, two-dimensional still image, plot of numerical data, or any other visible display derived from a computational simulation or other data source; (b) "Viewport" signifies a rectangular portion of a computer-display window containing a view; (c) "Palette" signifies a collection of one or more viewports configured for simultaneous (split-screen) display in the same window; (d) "Slide" signifies a palette with a beginning and ending time and an animation time step; and (e) "Presentation" signifies a prescribed sequence of slides. For example, multiple 3D views from different locations can be crafted for simultaneous display and combined with numerical plots and other representations of data for both qualitative and quantitative analysis. The resulting sets of views can be temporally sequenced to convey visual impressions of a sequence of events for a planned mission.
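    The View/Viewport/Palette/Slide/Presentation vocabulary maps naturally onto a containment hierarchy; a schematic sketch of that structure (the class and field names are assumptions for illustration, not SOAP's actual API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class View:            # a 3D scene, 2D image, or data plot
    name: str

@dataclass
class Viewport:        # a rectangular window region holding one view
    view: View

@dataclass
class Palette:         # viewports shown simultaneously (split screen)
    viewports: List[Viewport]

@dataclass
class Slide:           # a palette animated over a time span
    palette: Palette
    t_start: float
    t_end: float
    t_step: float

@dataclass
class Presentation:    # slides shown in sequence
    slides: List[Slide] = field(default_factory=list)

orbit = Viewport(View("3D orbit view"))
ground = Viewport(View("ground-track plot"))
slide = Slide(Palette([orbit, ground]), t_start=0.0, t_end=600.0, t_step=1.0)
show = Presentation([slide])
print(len(show.slides), len(show.slides[0].palette.viewports))
```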

  12. Human embryonic growth and development of the cerebellum using 3-dimensional ultrasound and virtual reality.

    PubMed

    Rousian, M; Groenenberg, I A L; Hop, W C; Koning, A H J; van der Spek, P J; Exalto, N; Steegers, E A P

    2013-08-01

    The aim of our study was to evaluate first-trimester cerebellar growth and development using 2 different measuring techniques: 3-dimensional (3D) and virtual reality (VR) ultrasound visualization. The cerebellar measurements were related to gestational age (GA) and crown-rump length (CRL). Finally, the reproducibility of both methods was tested. In a prospective cohort study, we collected 630 first-trimester, serially obtained, 3D ultrasound scans of 112 uncomplicated pregnancies between 7 + 0 and 12 + 6 weeks of GA. Only scans with high-quality images of the posterior fossa were selected for the analysis. Measurements were performed offline in the coronal plane using 3D (4D View) and VR (V-Scope) software. VR enables the observer to use all available dimensions in a data set by visualizing the volume as a "hologram." Total cerebellar diameter, left and right hemispheric diameters, and thickness were measured using both techniques. All measurements were performed 3 times and means were used in repeated-measurements analysis. After exclusion criteria were applied, 177 (28%) 3D data sets were available for further analysis. The median GA was 10 + 0 weeks and the median CRL was 31.4 mm (range: 5.2-79.0 mm). The cerebellar parameters could be measured from 7 gestational weeks onward. The total cerebellar diameter increased from 2.2 mm at 7 weeks of GA to 13.9 mm at 12 weeks of GA using VR, and from 2.2 to 13.8 mm using 3D ultrasound. The reproducibility, established in a subset of 35 data sets, resulted in intraclass correlation coefficient values ≥0.98. It can be concluded that the cerebellar measurements performed by the 2 methods are reproducible and comparable with each other. However, VR, which uses all three dimensions, provides a superior method for visualization of the cerebellum. The constructed reference values can be used to study normal and abnormal cerebellar growth and development.

  13. A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-05-01

    We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable to render a 204 GB data cube with an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
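    The inner loop of any such ray-casting renderer is front-to-back alpha compositing along each ray, which also enables early ray termination once a ray saturates. A minimal sketch (the sample colors and opacities are hypothetical, not the paper's transfer function):

```python
def composite_ray(samples, stop_alpha=0.99):
    """Front-to-back compositing of (color, opacity) samples along one ray."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= stop_alpha:      # early ray termination
            break
    return color, alpha

# Two samples along a ray: bright semi-transparent material, then dimmer material
color, alpha = composite_ray([(0.8, 0.5), (0.4, 0.5)])
print(f"color={color:.2f} alpha={alpha:.2f}")
```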

  14. Three-dimensional curvilinear device reconstruction from two fluoroscopic views

    NASA Astrophysics Data System (ADS)

    Delmas, Charlotte; Berger, Marie-Odile; Kerrien, Erwan; Riddell, Cyril; Trousset, Yves; Anxionnat, René; Bracard, Serge

    2015-03-01

    In interventional radiology, navigating devices under the sole guidance of fluoroscopic images inside a complex architecture of tortuous and narrow vessels, such as the cerebral vascular tree, is a difficult task. Visualizing the device in 3D could facilitate this navigation. For curvilinear devices such as guide-wires and catheters, a 3D reconstruction may be achieved using two simultaneous fluoroscopic views, as available on a biplane acquisition system. The purpose of this paper is to present a new automatic three-dimensional curve reconstruction method that has the potential to reconstruct complex 3D curves and does not require a perfect segmentation of the endovascular device. Using epipolar geometry, our algorithm translates the point correspondence problem into a segment correspondence problem. Candidate 3D curves can be formed and evaluated independently after identifying all possible combinations of compatible 3D segments. Correspondence is then inherently solved by looking in 3D space for the most coherent curve in terms of continuity and curvature. This problem can be cast as a graph problem in which the most coherent curve corresponds to the shortest path of a weighted graph. We present quantitative results of curve reconstructions performed from numerically simulated projections of tortuous 3D curves extracted from cerebral vascular trees affected by brain arteriovenous malformations, as well as fluoroscopic image pairs of a guide-wire from both phantom and clinical data sets. Our method was able to select the correct 3D segments in 97.5% of simulated cases, demonstrating its ability to handle complex 3D curves and to deal with imperfect 2D segmentation.
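    Casting the "most coherent curve" search as a shortest-path problem means building a graph whose nodes are candidate 3D segments and whose edge weights penalize discontinuity and curvature between consecutive segments; a standard shortest-path algorithm then selects the best chain. A toy sketch with Dijkstra's algorithm (the graph and weights are hypothetical):

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm; graph maps node -> list of (neighbor, weight)."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue                           # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], target
    while node != source:                      # walk predecessors back to source
        path.append(node)
        node = prev[node]
    path.append(source)
    return path[::-1]

# a1/a2 and b1/b2 are alternative candidate 3D segments for two curve pieces;
# weights penalize poor continuity/curvature between consecutive segments.
graph = {
    "start": [("a1", 0.0), ("a2", 0.0)],
    "a1": [("b1", 0.1), ("b2", 1.0)],
    "a2": [("b1", 0.9), ("b2", 0.8)],
    "b1": [("end", 0.0)],
    "b2": [("end", 0.0)],
}
print(shortest_path(graph, "start", "end"))
```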

  15. Information Extraction of Tourist Geological Resources Based on 3d Visualization Remote Sensing Image

    NASA Astrophysics Data System (ADS)

    Wang, X.

    2018-04-01

    Tourist geological resources are of high value for sightseeing, scientific research, and public education, and need to be protected and rationally utilized. In the past, most remote sensing investigations of tourist geological resources used two-dimensional remote sensing interpretation methods, which made some geological heritages difficult to interpret and led to the omission of some information. The aim of this paper is to assess the value of a method that uses three-dimensional visual remote sensing imagery to extract information on geological heritages. The Skyline software system is applied to fuse 0.36 m aerial images with a 5 m interval DEM to establish a digital earth model. Based on three-dimensional shape, color tone, shadow, texture, and other image features, the distribution of tourist geological resources in Shandong Province and the locations of geological heritage sites were obtained, including geological structures, Daigu landforms, granite landforms, volcanic landforms, sandy landforms, waterscapes, etc. The results show that interpretation results produced with this method are highly recognizable, making the interpretation more accurate and comprehensive.

  16. Development of a 3-D X-ray system

    NASA Astrophysics Data System (ADS)

    Evans, James Paul Owain

    The interpretation of standard two-dimensional x-ray images by humans is often very difficult. This is due to the lack of visual cues to depth in an image produced by transmitted radiation. The solution put forward in this research is to introduce binocular parallax, a powerful physiological depth cue, into the resultant shadowgraph x-ray image. This has been achieved by developing a binocular stereoscopic x-ray imaging technique, which can be used both for visual inspection by human observers and for the extraction of three-dimensional co-ordinate information. The technique is implemented in the design and development of two experimental x-ray systems and in the development of measurement algorithms. The first experimental machine is based on standard linear x-ray detector arrays and was designed as an optimum configuration for visual inspection by human observers. However, it was felt that combining the 3-D visual inspection capability with a measurement facility would enhance the usefulness of the technique. Therefore, both a theoretical and an empirical analysis of the co-ordinate measurement capability of the machine has been carried out. The measurement is based on close-range photogrammetric techniques. The accuracy of the measurement has been found to be of the order of 4 mm in x, 3 mm in y, and 6 mm in z. A second experimental machine was developed, based on the same technique as the first. However, a major departure has been the introduction of a dual-energy linear x-ray detector array, which allows, in general, discrimination between organic and inorganic substances. The second design is a compromise between ease of visual inspection for human observers and optimum three-dimensional co-ordinate measurement capability. The system is part of an ongoing research programme into the possibility of introducing psychological depth cues into the resultant x-ray images.
The research presented in this thesis was initiated to enhance the visual interpretation of complex x-ray images, specifically in response to problems encountered in the routine screening of freight by HM Customs and Excise. This phase of the work culminated in the development of the first experimental machine. During this work the security industry was starting to adopt a new type of x-ray detector, namely the dual-energy x-ray sensor. The Department of Transport made funding available to the Police Scientific Development Branch (P.S.D.B.), part of the Home Office Science and Technology Group, to investigate the possibility of utilising the dual-energy sensor in a 3-D x-ray screening system. This phase of the work culminated in the development of the second experimental machine.
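The co-ordinate measurement described above rests on close-range photogrammetry. In the simplest parallel-axis stereo geometry, the depth of a point follows directly from its horizontal disparity between the two views. The sketch below shows that relation; the numeric values in the test are illustrative and are not taken from the thesis.

```python
def depth_from_disparity(focal_len, baseline, x_left, x_right):
    """Depth of a point in a parallel-axis stereo pair: Z = f * B / d,
    where d = x_left - x_right is the horizontal disparity.

    All quantities share one length unit; real systems additionally
    calibrate for sensor geometry and lens distortion.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_len * baseline / disparity
```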

  17. [The present status and future prospects of application of digital medical technology in general surgery in China].

    PubMed

    Fang, C H; LauWan, Y Y; Cai, W

    2017-01-01

    It has been almost 10 years since digital medical technology became commonly used in general surgery in China. Led by advances in three-dimensional (3D) visualization technology, virtual reality, simulation surgery, and 3D printing, digital medical technology has played an important role in changing the current practice of general surgery in China, improving diagnostic accuracy and the choice of therapeutic procedure, with a resultant increase in surgical success rates and a decrease in surgical risks. Furthermore, the education of medical students and young doctors has become better and easier.

  18. 3D printing of preclinical X-ray computed tomographic data sets.

    PubMed

    Doney, Evan; Krumdick, Lauren A; Diener, Justin M; Wathen, Connor A; Chapman, Sarah E; Stamile, Brian; Scott, Jeremiah E; Ravosa, Matthew J; Van Avermaete, Tony; Leevy, W Matthew

    2013-03-22

    Three-dimensional printing allows for the production of highly detailed objects through a process known as additive manufacturing. Traditional mold-injection methods to create models or parts have several limitations, the most important of which is the difficulty of making highly complex products in a timely, cost-effective manner.(1) However, gradual improvements in three-dimensional printing technology have resulted in both high-end and economy instruments that are now available for the facile production of customized models.(2) These printers have the ability to extrude high-resolution objects with enough detail to accurately represent in vivo images generated from a preclinical X-ray CT scanner. With proper data collection, surface rendering, and stereolithographic editing, it is now possible and inexpensive to rapidly produce detailed skeletal and soft tissue structures from X-ray CT data. Even in the early stages of development, the anatomical models produced by three-dimensional printing appeal to both educators and researchers who can utilize the technology to improve visualization proficiency.(3, 4) The real benefits of this method result from the tangible experience a researcher can have with data that cannot be adequately conveyed through a computer screen. The translation of preclinical 3D data to a physical object that is an exact copy of the test subject is a powerful tool for visualization and communication, especially for relating imaging research to students, or those in other fields. Here, we provide a detailed method for printing plastic models of bone and organ structures derived from X-ray CT scans utilizing an Albira X-ray CT system in conjunction with PMOD, ImageJ, Meshlab, Netfabb, and ReplicatorG software packages.
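Pipelines like the one described hand surface meshes between tools in the STL format before printing. Purely to illustrate what that interchange format looks like, here is a hypothetical minimal ASCII STL writer; it is not part of the PMOD/ImageJ/Meshlab/Netfabb/ReplicatorG workflow named above.

```python
def write_ascii_stl(path, triangles, name="model"):
    """Write a list of facets as a minimal ASCII STL file.

    `triangles` is a list of (normal, (v1, v2, v3)) tuples, each vector a
    3-tuple of floats. This helper is an illustrative sketch of the file
    format only; real meshes from CT surface rendering contain many
    thousands of such facets.
    """
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, verts in triangles:
            f.write("  facet normal {} {} {}\n".format(*normal))
            f.write("    outer loop\n")
            for v in verts:
                f.write("      vertex {} {} {}\n".format(*v))
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```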

  19. Visualization of Documents and Concepts in Neuroinformatics with the 3D-SE Viewer

    PubMed Central

    Naud, Antoine; Usui, Shiro; Ueda, Naonori; Taniguchi, Tatsuki

    2007-01-01

    A new interactive visualization tool is proposed for mining text data from various fields of neuroscience. Applications to several text datasets are presented to demonstrate the capability of the proposed interactive tool to visualize complex relationships between pairs of lexical entities (with some semantic content) such as terms, keywords, posters, or papers' abstracts. Implemented as a Java applet, this tool is based on the spherical embedding (SE) algorithm, which was designed for the visualization of bipartite graphs. Items such as words and documents are linked on the basis of occurrence relationships, which can be represented in a bipartite graph. These items are visualized by embedding the vertices of the bipartite graph on spheres in a three-dimensional (3-D) space. The main advantage of the proposed visualization tool is that 3-D layouts can convey more information than planar or linear displays of items or graphs. Different kinds of information extracted from texts, such as keywords, indexing terms, or topics, are visualized, allowing interactive browsing of various fields of research characterized by keywords, topics, or research teams. In a typical use of the 3D-SE viewer, the user quickly browses topics displayed on a sphere; selecting one or several items then displays links to related terms on another sphere representing, e.g., documents or abstracts, and provides direct online access to the document source in a database such as the Visiome Platform or the SfN Annual Meeting. Developed as a Java applet, it operates as a tool on top of existing resources. PMID:18974802
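The SE algorithm optimizes vertex positions on the spheres from the bipartite-graph structure; as a much simpler building block, the items of one vertex class can be spread roughly evenly over a sphere with a golden-angle spiral. The sketch below shows only that uniform placement step, not the SE optimization itself.

```python
import math

def fibonacci_sphere(n, radius=1.0):
    """Distribute n items roughly evenly on a sphere via the golden-angle
    spiral. A common initial layout for sphere-based visualizations; the
    3D-SE viewer refines positions from graph structure instead."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # z spaced evenly in (-1, 1)
        r = math.sqrt(1.0 - z * z)             # radius of the z-slice
        theta = golden * i                     # rotate by the golden angle
        points.append((radius * r * math.cos(theta),
                       radius * r * math.sin(theta),
                       radius * z))
    return points
```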

  1. Three dimensional display of underground water-supplying network by combining VTK with SiCAD/open GIS system

    NASA Astrophysics Data System (ADS)

    Chen, Shaobin; Zhang, Xubo; Wang, Wenyuan; Zhou, Chengping; Ding, Mingyue

    2007-11-01

    Nowadays, Geographic Information Systems (GIS) are widely used by municipal corporations. Several years ago, water-supplying corporations in many cities developed GIS application systems based on the SiCAD/Open GIS platform for their daily management and engineering construction. As their business grows, many corporations now need to add three-dimensional display functionality to their GIS systems without too much financial cost. Because updating the SiCAD/Open GIS system to the up-to-date version is expensive, the introduction of a third-party 3D display technology was considered. In our solution, the Visualization Toolkit (VTK) is used to achieve three-dimensional display of an underground water-supplying network on the basis of an existing SiCAD/Open GIS system. This paper addresses the system architecture and key implementation technologies of this solution.

  2. 3D fluorescence anisotropy imaging using selective plane illumination microscopy.

    PubMed

    Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico

    2015-08-24

    Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein.
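The quantity being imaged is, pixel by pixel, the steady-state fluorescence anisotropy, a textbook ratio of the polarized intensity components. The sketch below shows that definition; it is not code from the paper, and the G-factor that corrects for unequal detector sensitivity to the two polarizations is omitted for brevity.

```python
def steady_state_anisotropy(i_parallel, i_perpendicular):
    """Steady-state fluorescence anisotropy:

        r = (I_par - I_perp) / (I_par + 2 * I_perp)

    The denominator is the total intensity for excitation with linearly
    polarized light. A detector G-factor correction is omitted here.
    """
    return (i_parallel - i_perpendicular) / (i_parallel + 2.0 * i_perpendicular)
```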

  3. New Technologies for Acquisition and 3-D Visualization of Geophysical and Other Data Types Combined for Enhanced Understandings and Efficiencies of Oil and Gas Operations, Deepwater Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Thomson, J. A.; Gee, L. J.; George, T.

    2002-12-01

    This presentation shows results of a visualization method used to display and analyze multiple data types in a geospatially referenced three-dimensional (3-D) space. The integrated data types include sonar and seismic geophysical data, pipeline and geotechnical engineering data, and 3-D facilities models. Visualization of these data collectively in proper 3-D orientation yields insights and synergistic understandings not previously obtainable. Key technological components of the method are: 1) high-resolution geophysical data obtained using a newly developed autonomous underwater vehicle (AUV), 2) 3-D visualization software that delivers correctly positioned display of multiple data types and full 3-D flight navigation within the data space and 3) a highly immersive visualization environment (HIVE) where multidisciplinary teams can work collaboratively to develop enhanced understandings of geospatially complex data relationships. The initial study focused on an active deepwater development area in the Green Canyon protraction area, Gulf of Mexico. Here several planned production facilities required detailed, integrated data analysis for design and installation purposes. To meet the challenges of tight budgets and short timelines, an innovative new method was developed based on the combination of newly developed technologies. Key benefits of the method include enhanced understanding of geologically complex seabed topography and marine soils yielding safer and more efficient pipeline and facilities siting. Environmental benefits include rapid and precise identification of potential locations of protected deepwater biological communities for avoidance and protection during exploration and production operations. In addition, the method allows data presentation and transfer of learnings to an audience outside the scientific and engineering team. This includes regulatory personnel, marine archaeologists, industry partners and others.

  4. Computations underlying the visuomotor transformation for smooth pursuit eye movements

    PubMed Central

    Murdison, T. Scott; Leclercq, Guillaume; Lefèvre, Philippe

    2014-01-01

    Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103–2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit. PMID:25475344

  5. Registering 2D and 3D imaging data of bone during healing.

    PubMed

    Hoerth, Rebecca M; Baum, Daniel; Knötel, David; Prohaska, Steffen; Willie, Bettina M; Duda, Georg N; Hege, Hans-Christian; Fratzl, Peter; Wagermaier, Wolfgang

    2015-04-01

    PURPOSE/AIMS OF THE STUDY: Bone's hierarchical structure can be visualized using a variety of methods. Many techniques, such as light and electron microscopy, generate two-dimensional (2D) images, while micro-computed tomography (µCT) allows a direct representation of the three-dimensional (3D) structure. In addition, different methods provide complementary structural information, such as the arrangement of organic or inorganic compounds. The overall aim of the present study is to answer bone research questions by linking information from different 2D and 3D imaging techniques. A great challenge in combining different methods arises from the fact that they usually reflect different characteristics of the real structure. We investigated bone during healing by means of µCT and several 2D methods. Backscattered electron images were used to qualitatively evaluate the tissue's calcium content and served as a position map for other experimental data. Nanoindentation and X-ray scattering experiments were performed to visualize mechanical and structural properties. We present an approach for the registration of 2D data in a 3D µCT reference frame, where scanning electron microscopy serves as a methodological link. Backscattered electron images are well suited for registration into µCT reference frames, since both show structures based on the same physical principles. We introduce specific registration tools that have been developed to perform the registration process in a semi-automatic way. By applying this routine, we were able to exactly locate structural information (e.g. mineral particle properties) in the 3D bone volume. In bone healing studies this will help to better understand basic formation, remodeling, and mineralization processes.

  6. A 3D analysis of spatial relationship between geological structure and groundwater profile around Kobe City, Japan: based on ARCGIS 3D Analyst.

    NASA Astrophysics Data System (ADS)

    Shibahara, A.; Tsukamoto, H.; Kazahaya, K.; Morikawa, N.; Takahashi, M.; Takahashi, H.; Yasuhara, M.; Ohwada, M.; Oyama, Y.; Inamura, A.; Handa, H.; Nakama, J.

    2008-12-01

    Kobe city is located on the northern side of the Osaka sedimentary basin, Japan, which contains 1,000-2,000 m thick Quaternary sediments. After the Hanshin-Awaji Earthquake (January 17, 1995), a number of geological and geophysical surveys were conducted in this region. A high-temperature anomaly of groundwater accompanied by high Cl concentrations was then detected along fault systems in this area. In addition, dissolved He in groundwater showed a nearly upper-mantle-like 3He/4He ratio, although there has been no Quaternary volcanic activity in this region. Some recent studies have assumed that these groundwater profiles are related to geological structure, because faults and joints can function as pathways for groundwater flow, and mantle-derived water can upwell through the fault system to the ground surface. To verify these hypotheses, we established a 3D geological and hydrological model around the Osaka sedimentary basin. Our primary goal is to analyze the spatial relationship between geological structure and groundwater profile. In the study region, a number of geological and hydrological datasets, such as boring log data, seismic profiling data, and groundwater chemical profiles, have been reported. We converted these datasets to meshed data on the GIS and plotted them in three-dimensional space to visualize their spatial distribution. Furthermore, we projected seismic profiling data into three-dimensional space and calculated the distance between faults and sampling points using Visual Basic for Applications (VBA) scripts. All 3D models are converted into VRML format and can be used as a versatile dataset on a personal computer. This research project has been conducted under a research contract with the Japan Nuclear Energy Safety Organization (JNES).
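The fault-to-sampling-point distance computation described above can be sketched, for a fault trace approximated as a 3D polyline, as the minimum of point-to-segment distances. This is an illustrative Python stand-in for the VBA scripts mentioned in the abstract; the function names and coordinates are assumptions.

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab, all 3-tuples."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    # parameter of the closest point, clamped to the segment
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def distance_to_fault(p, fault_polyline):
    """Minimum distance from a sampling point to a fault trace given as a
    polyline of 3D vertices (an illustrative sketch, not the project code)."""
    return min(point_segment_distance(p, a, b)
               for a, b in zip(fault_polyline, fault_polyline[1:]))
```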

  7. Burning invariant manifolds for reaction fronts in three-dimensional fluid flows

    NASA Astrophysics Data System (ADS)

    Mitchell, Kevin; Solomon, Tom

    2017-11-01

    The geometry of reaction fronts that propagate in fully three-dimensional (3D) fluid flows is studied using the tools of dynamical systems theory. The evolution of an infinitesimal front element is modeled as a six-dimensional ODE: three dimensions for the position of the front element and three for the orientation of its unit normal. This generalizes an earlier approach to understanding front propagation in two-dimensional (2D) fluid flows. As in 2D, the 3D system exhibits prominent burning invariant manifolds (BIMs). In 3D, BIMs are two-dimensional dynamically defined surfaces that form one-way barriers to the propagation of reaction fronts within the fluid. Due to the third dimension, BIMs in 3D exhibit a richer topology than their cousins in 2D. In particular, whereas BIMs in both 2D and 3D can originate from fixed points of the dynamics, BIMs in 3D can also originate from limit cycles. Such BIMs form robust tube-like channels that guide and constrain the evolution of the front within the bulk of the fluid. Supported by NSF Grant CMMI-1201236.

  8. Building the 3D Geological Model of Wall Rock of Salt Caverns Based on Integration Method of Multi-source data

    NASA Astrophysics Data System (ADS)

    Yongzhi, WANG; hui, WANG; Lixia, LIAO; Dongsen, LI

    2017-02-01

    In order to analyse the geological characteristics of salt rock and the stability of salt caverns, rough three-dimensional (3D) models of the salt rock stratum and 3D models of the salt caverns in the study areas are built with 3D GIS spatial modeling techniques. During implementation, multi-source data such as basic geographic data, DEM, geological plane maps, geological section maps, engineering geological data, and sonar data are used. In this study, 3D spatial analysis and calculation methods, such as 3D GIS intersection detection, Boolean operations between three-dimensional entities, and three-dimensional grid discretization, are used to build 3D models of the wall rock of salt caverns. Our methods can provide effective calculation models for numerical simulation and analysis of the creep characteristics of wall rock in salt caverns.

  9. A Comparison of Four Image-Reconstruction Algorithms for 3-D PET Imaging of the MDAPET Camera Using Phantom Data

    NASA Astrophysics Data System (ADS)

    Baghaei, H.; Wong, Wai-Hoi; Uribe, J.; Li, Hongdi; Wang, Yu; Liu, Yaqiang; Xing, Tao; Ramirez, R.; Xie, Shuping; Kim, Soonseok

    2004-10-01

    We compared two fully three-dimensional (3-D) image reconstruction algorithms and two 3-D rebinning algorithms followed by reconstruction with a two-dimensional (2-D) filtered-backprojection algorithm for 3-D positron emission tomography (PET) imaging. The two 3-D image reconstruction algorithms were the ordered-subsets expectation-maximization (3D-OSEM) and 3-D reprojection (3DRP) algorithms. The two rebinning algorithms were Fourier rebinning (FORE) and single-slice rebinning (SSRB). The 3-D projection data used for this work were acquired with a high-resolution PET scanner (MDAPET) with an intrinsic transaxial resolution of 2.8 mm. The scanner has 14 detector rings covering an axial field-of-view of 38.5 mm. We scanned three phantoms: 1) a uniform cylindrical phantom with an inner diameter of 21.5 cm; 2) a uniform 11.5-cm cylindrical phantom with four embedded small hot lesions with diameters of 3, 4, 5, and 6 mm; and 3) the 3-D Hoffman brain phantom with three embedded small hot lesion phantoms with diameters of 3, 5, and 8.6 mm in a warm background. Lesions were placed at different radial and axial distances. We evaluated the different reconstruction methods for the MDAPET camera by comparing the noise level of images, contrast recovery, and hot-lesion detection, and by visually comparing images. We found that overall the 3D-OSEM algorithm, especially when images were post-filtered with the Metz filter, produced the best results in terms of contrast-noise tradeoff, detection of hot spots, and reproduction of brain phantom structures. Even though the MDAPET camera has a relatively small maximum axial acceptance angle (±5°), images produced with the 3DRP algorithm had slightly better contrast recovery and reproduced the structures of the brain phantom slightly better than the faster 2-D rebinning methods.
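Of the two rebinning approaches, SSRB is the simplest to state: an oblique coincidence between detector rings r1 and r2 is assigned to the direct plane at their average axial position, so N rings yield 2N - 1 rebinned planes indexed by r1 + r2. The sketch below shows only that principle, with events reduced to ring pairs; real rebinning also preserves the transaxial sinogram coordinates and applies axial acceptance limits.

```python
def ssrb_slice(ring1, ring2):
    """Single-slice rebinning: assign an oblique line of response between
    rings r1 and r2 to the plane at their average axial position. With N
    rings this yields 2N - 1 planes, indexed by r1 + r2."""
    return ring1 + ring2

def rebin(events, n_rings):
    """Histogram (ring1, ring2) coincidence events into 2N - 1 planes.

    A sketch of the principle only, not the scanner software; each event
    here is just a ring pair, with sinogram coordinates dropped.
    """
    planes = [0] * (2 * n_rings - 1)
    for r1, r2 in events:
        planes[ssrb_slice(r1, r2)] += 1
    return planes
```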

  10. Three-dimensional image technology in forensic anthropology: Assessing the validity of biological profiles derived from CT-3D images of the skeleton

    NASA Astrophysics Data System (ADS)

    Garcia de Leon Valenzuela, Maria Julia

    This project explores the reliability of building a biological profile for an unknown individual based on three-dimensional (3D) images of the individual's skeleton. 3D imaging technology has been widely researched for medical and engineering applications, and it is increasingly being used as a tool for anthropological inquiry. While the question of whether a biological profile can be derived from 3D images of a skeleton with the same accuracy as achieved when using dry bones has been explored, larger sample sizes, a standardized scanning protocol, and more interobserver error data are needed before 3D methods can become widely and confidently used in forensic anthropology. 3D images were obtained from computed tomography (CT) scans of 130 innominate bones from Boston University's skeletal collection (School of Medicine). For each bone, both the 3D images and the original bones were assessed using the Phenice and Suchey-Brooks methods. Statistical analysis was used to determine the agreement between 3D image assessment and traditional assessment. A pool of six individuals with varying experience in the field of forensic anthropology scored a subsample (n = 20) to explore interobserver error. While a high agreement was found for age and sex estimation for specimens scored by the author, the interobserver study shows that observers found it difficult to apply standard methods to 3D images. Contrary to expectation, higher levels of experience did not result in higher agreement between observers. Thus, a need for training in 3D visualization before applying anthropological methods to 3D bone images is suggested. Future research should explore interobserver error using a larger sample size in order to test the hypothesis that training in 3D visualization will result in higher agreement between scores. The need for the development of a standard scanning protocol focusing on the optimization of 3D image resolution is highlighted.
Applications of this research include the possibility of digitizing skeletal collections in order to expand their use, and of deriving skeletal collections from living populations to create population-specific standards. Further research on the development of a standard scanning and processing protocol is needed before 3D methods in forensic anthropology can be considered reliable tools for generating biological profiles.

  11. Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.

    2013-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program was extended to work with four-dimensional objects stored in comma-separated-values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images.
    New version program summary
    Program title: Hyper-Fractal Analysis (Fractal Analysis v03)
    Catalogue identifier: AEEG_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 745761
    No. of bytes in distributed program, including test data, etc.: 12544491
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 100M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 4D images.
    Solution method: Optimized implementation of the 4D box-counting algorithm.
    Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1,2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images.
    Summary of revisions: The box-counting algorithm was extended to support 4D objects stored in comma-separated-values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects with known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df = ln(5)/ln(2) ≅ 2.32 (Fig. 1). The algorithm could be extended, with minimal effort, to a higher number of dimensions. Easy integration with other applications is possible by using the very simple comma-separated-values file format for storing multi-dimensional images. A χ² test was implemented as a criterion for deciding whether an object is fractal or not. The graphical interface is user friendly.
    Running time: In a first approximation, the algorithm is linear [2].
    References:
    [1] I.V. Grossu, D. Felea, C. Besliu, Al. Jipa, C.C. Bordeianu, E. Stan, T. Esanu, Computer Physics Communications 181 (2010) 831-832.
    [2] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Computer Physics Communications 180 (2009) 1999-2001.
    [3] J. Ruiz de Miras, J. Navas, P. Villoslada, F.J. Esteban, Computer Methods and Programs in Biomedicine 104(3) (2011) 452-460.
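The box-counting estimate at the core of the program can be sketched in a few lines: count the boxes of side ε occupied by the object at several scales and fit the slope of log N(ε) versus log(1/ε). The sketch below is a generic, dimension-agnostic version in Python, not the optimized Visual Basic 6.0 implementation; because the box index is built as a tuple, it works unchanged for 2D, 3D, or 4D point sets.

```python
import math

def box_count(points, eps):
    """Number of boxes of side eps occupied by at least one point.

    Each point is a tuple of coordinates; the tuple of integer box
    indices works for any number of dimensions (2D, 3D, 4D, ...).
    """
    return len({tuple(int(c // eps) for c in p) for p in points})

def fractal_dimension(points, scales):
    """Least-squares slope of log N(eps) versus log(1/eps)."""
    xs = [math.log(1.0 / e) for e in scales]
    ys = [math.log(box_count(points, e)) for e in scales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

For a densely sampled straight line the estimate should approach 1, and for a filled square, 2; the χ² test mentioned above would then judge how well the log-log points actually fall on a line.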

  12. Introducing 3-Dimensional Printing of a Human Anatomic Pathology Specimen: Potential Benefits for Undergraduate and Postgraduate Education and Anatomic Pathology Practice.

    PubMed

    Mahmoud, Amr; Bennett, Michael

    2015-08-01

    Three-dimensional (3D) printing, a rapidly advancing technology, is widely applied in fields such as mechanical engineering and architecture. Three-dimensional printing has been introduced recently into medical practice in areas such as reconstructive surgery, as well as in clinical research. Three-dimensionally printed models of anatomic and autopsy pathology specimens can be used for demonstrating pathology entities to undergraduate medical, dental, and biomedical students, as well as for postgraduate training in examination of gross specimens for anatomic pathology residents and pathology assistants, aiding clinicopathological correlation at multidisciplinary team meetings, and guiding reconstructive surgical procedures. The aim of this study was to apply 3D printing in anatomic pathology for teaching, training, and clinical correlation purposes. Multicolored 3D printing of human anatomic pathology specimens was achieved using a ZCorp 510 3D printer (3D Systems, Rock Hill, South Carolina) following creation of a 3D model using Autodesk 123D Catch software (Autodesk, Inc, San Francisco, California). The 3D-printed models created for this study included pancreatoduodenectomy (Whipple operation) and radical nephrectomy specimens. The models accurately depicted the topographic anatomy of selected specimens and illustrated the anatomic relation of excised lesions to adjacent normal tissues. Three-dimensional printing of human anatomic pathology specimens is achievable. Advances in 3D printing technology may further improve the quality of 3D printable anatomic pathology specimens.

  13. 3D Printing of Plant Golgi Stacks from Their Electron Tomographic Models.

    PubMed

    Mai, Keith Ka Ki; Kang, Madison J; Kang, Byung-Ho

    2017-01-01

    Three-dimensional (3D) printing is an effective tool for preparing tangible 3D models from computer visualizations to assist in scientific research and education. With the recent popularization of 3D printing processes, it is now possible for individual laboratories to convert their scientific data into a physical form suitable for presentation or teaching purposes. Electron tomography is an electron microscopy method by which 3D structures of subcellular organelles or macromolecular complexes are determined at nanometer-level resolutions. Electron tomography analyses have revealed the convoluted membrane architectures of Golgi stacks, chloroplasts, and mitochondria. But the intricacy of their 3D organizations is difficult to grasp from tomographic models illustrated on computer screens. Despite the rapid development of 3D printing technologies, production of organelle models based on experimental data with 3D printing has rarely been documented. In this chapter, we present a simple guide to creating 3D prints of electron tomographic models of plant Golgi stacks using the two most accessible 3D printing technologies.

  14. 3D-PDR: Three-dimensional photodissociation region code

    NASA Astrophysics Data System (ADS)

    Bisbas, T. G.; Bell, T. A.; Viti, S.; Yates, J.; Barlow, M. J.

    2018-03-01

    3D-PDR is a three-dimensional photodissociation region code written in Fortran. It uses the Sundials package (written in C) to solve the set of ordinary differential equations, and it is the successor of the one-dimensional PDR code UCL_PDR (ascl:1303.004). Using the HEALpix ray-tracing scheme (ascl:1107.018), 3D-PDR applies a three-dimensional escape-probability method to evaluate the attenuation of the far-ultraviolet radiation in the PDR and the propagation of FIR/submm emission lines out of the PDR. The code is parallelized (OpenMP) and can be applied to 1D and 3D problems.
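    The escape-probability formalism that such PDR codes evaluate along each ray can be sketched with the standard one-zone expression β(τ) = (1 − e^(−τ))/τ for a homogeneous medium. The snippet below is only an illustration of that textbook formula, not the actual Fortran routine in 3D-PDR:

```python
import numpy as np

def escape_probability(tau):
    """One-zone escape probability beta(tau) = (1 - exp(-tau)) / tau.
    Tends to 1 in the optically thin limit (tau -> 0) and falls off as
    1/tau when the line is optically thick."""
    tau = np.atleast_1d(np.asarray(tau, dtype=float))
    beta = np.ones_like(tau)            # optically thin limit
    thick = tau > 1e-10
    # expm1 keeps the numerator accurate for small optical depths.
    beta[thick] = -np.expm1(-tau[thick]) / tau[thick]
    return beta
```

In a 3D code, a quantity of this kind would be evaluated for the optical depth accumulated along each HEALpix ray and then averaged over directions.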

  15. Immersive Visual Data Analysis For Geoscience Using Commodity VR Hardware

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers tremendous benefits for the visual analysis of complex three-dimensional data like those commonly obtained from geophysical and geological observations and models. Unlike "traditional" visualization, which has to project 3D data onto a 2D screen for display, VR can side-step this projection and display 3D data directly, in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection. As a result, researchers can apply their spatial reasoning skills to virtual data in the same way they can to real objects or environments. The UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://keckcaves.org) has been developing VR methods for data analysis since 2005, but the high cost of VR displays has been preventing large-scale deployment and adoption of KeckCAVES technology. The recent emergence of high-quality commodity VR, spearheaded by the Oculus Rift and HTC Vive, has fundamentally changed the field. With KeckCAVES' foundational VR operating system, Vrui, now running natively on the HTC Vive, all KeckCAVES visualization software, including 3D Visualizer, LiDAR Viewer, Crusta, Nanotech Construction Kit, and ProtoShop, is now available to small labs, single researchers, and even home users. LiDAR Viewer and Crusta have been used for rapid response to geologic events including earthquakes and landslides, to visualize the impacts of sea-level rise, to investigate reconstructed paleoceanographic masses, and for exploration of the surface of Mars. The Nanotech Construction Kit is being used to explore the phases of carbon in Earth's deep interior, while ProtoShop can be used to construct and investigate protein structures.

  16. Spatial Visualization in Introductory Geology Courses

    NASA Astrophysics Data System (ADS)

    Reynolds, S. J.

    2004-12-01

    Visualization is critical to solving most geologic problems, which involve events and processes across a broad range of space and time. Accordingly, spatial visualization is an essential part of undergraduate geology courses. In such courses, students learn to visualize three-dimensional topography from two-dimensional contour maps, to observe landscapes and extract clues about how that landscape formed, and to imagine the three-dimensional geometries of geologic structures and how these are expressed on the Earth's surface or on geologic maps. From such data, students reconstruct the geologic history of areas, trying to visualize the sequence of ancient events that formed a landscape. To understand the role of visualization in student learning, we developed numerous interactive QuickTime Virtual Reality animations to teach students the most important visualization skills and approaches. For topography, students can spin and tilt contour-draped, shaded-relief terrains, flood virtual landscapes with water, and slice into terrains to understand profiles. To explore 3D geometries of geologic structures, they interact with virtual blocks that can be spun, sliced into, faulted, and made partially transparent to reveal internal structures. They can tilt planes to see how they interact with topography, and spin and tilt geologic maps draped over digital topography. The GeoWall system allows students to see some of these materials in true stereo. We used various assessments to research the effectiveness of these materials and to document visualization strategies students use. Our research indicates that, compared to control groups, students using such materials improve more in their geologic visualization abilities and in their general visualization abilities as measured by a standard spatial visualization test. Also, females achieve greater gains, improving their general visualization abilities to the same level as males. 
Misconceptions that students carry obstruct learning but are largely undocumented. Many students, for example, cannot visualize that the landscape in which rock layers were deposited was different from the landscape in which the rocks are exposed today, even in the Grand Canyon.

  17. Three-dimensional virtual navigation versus conventional image guidance: A randomized controlled trial.

    PubMed

    Dixon, Benjamin J; Chan, Harley; Daly, Michael J; Qiu, Jimmy; Vescan, Allan; Witterick, Ian J; Irish, Jonathan C

    2016-07-01

    Providing image guidance in a 3-dimensional (3D) format, visually more in keeping with the operative field, could potentially reduce workload and lead to faster and more accurate navigation. We wished to assess a 3D virtual-view surgical navigation prototype in comparison to a traditional 2D system. Thirty-seven otolaryngology surgeons and trainees completed a randomized crossover navigation exercise on a cadaver model. Each subject identified three sinonasal landmarks with 3D virtual (3DV) image guidance and three landmarks with conventional cross-sectional computed tomography (CT) image guidance. Subjects were randomized with regard to which side and display type was tested initially. Accuracy, task completion time, and task workload were recorded. Display type did not influence accuracy (P > 0.2) or efficiency (P > 0.3) for any of the six landmarks investigated. Pooled landmark data revealed a trend of improved accuracy in the 3DV group by 0.44 millimeters (95% confidence interval [0.00-0.88]). High-volume surgeons were significantly faster (P < 0.01) and had reduced workload scores in all domains (P < 0.01), but they were no more accurate (P > 0.28). Real-time 3D image guidance did not influence accuracy, efficiency, or task workload when compared to conventional triplanar image guidance. The subtle pooled accuracy advantage for the 3DV view is unlikely to be of clinical significance. Experience level was strongly correlated to task completion time and workload but did not influence accuracy. N/A. Laryngoscope, 126:1510-1515, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  18. Introducing a Virtual Reality Experience in Anatomic Pathology Education.

    PubMed

    Madrigal, Emilio; Prajapati, Shyam; Hernandez-Prera, Juan C

    2016-10-01

    A proper examination of surgical specimens is fundamental in anatomic pathology (AP) education. However, the resources available to residents may not always be suitable for efficient skill acquisition. We propose a method to enhance AP education by introducing high-definition videos featuring methods for appropriate specimen handling, viewable on two-dimensional (2D) and stereoscopic three-dimensional (3D) platforms. A stereo camera system recorded the gross processing of commonly encountered specimens. Three edited videos, with instructional audio voiceovers, were experienced by nine junior residents in a crossover study to assess the effects of the exposure (2D vs 3D movie views) on self-reported physiologic symptoms. A questionnaire was used to analyze viewer acceptance. All surveyed residents found the videos beneficial in preparation to examine a new specimen type. Viewer data suggest an improvement in specimen handling confidence and knowledge and enthusiasm toward 3D technology. None of the participants encountered significant motion sickness. Our novel method provides the foundation to create a robust teaching library. AP is inherently a visual discipline, and by building on the strengths of traditional teaching methods, our dynamic approach allows viewers to appreciate the procedural actions involved in specimen processing. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Development of 3D ultrasound needle guidance for high-dose-rate interstitial brachytherapy of gynaecological cancers

    NASA Astrophysics Data System (ADS)

    Rodgers, J.; Tessier, D.; D'Souza, D.; Leung, E.; Hajdok, G.; Fenster, A.

    2016-04-01

    High-dose-rate (HDR) interstitial brachytherapy is often included in standard-of-care for gynaecological cancers. Needles are currently inserted through a perineal template without any standard real-time imaging modality to assist needle guidance, causing physicians to rely on pre-operative imaging, clinical examination, and experience. While two-dimensional (2D) ultrasound (US) is sometimes used for real-time guidance, visualization of needle placement and depth is difficult and subject to variability and inaccuracy in 2D images. The close proximity to critical organs, in particular the rectum and bladder, can lead to serious complications. We have developed a three-dimensional (3D) transrectal US system and are investigating its use for intra-operative visualization of needle positions used in HDR gynaecological brachytherapy. As a proof-of-concept, four patients were imaged with post-insertion 3D US and x-ray CT. Using software developed in our laboratory, manual rigid registration of the two modalities was performed based on the perineal template's vaginal cylinder. The needle tip and a second point along the needle path were identified for each needle visible in US. The difference between modalities in the needle trajectory and needle tip position was calculated for each identified needle. For the 60 needles placed, the mean trajectory difference was 3.23 ± 1.65° across the 53 visible needle paths, and the mean difference in needle tip position was 3.89 ± 1.92 mm across the 48 visible needle tips. Based on the preliminary results, 3D transrectal US shows potential for the development of a 3D US-based needle guidance system for interstitial gynaecological brachytherapy.
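    The per-needle comparison described above (trajectory difference and tip-position difference between the US- and CT-identified needles) reduces to simple vector geometry. A minimal sketch, with hypothetical function names, given each needle's tip and a second point along its path in both modalities:

```python
import numpy as np

def trajectory_angle_deg(tip_a, point_a, tip_b, point_b):
    """Angle in degrees between two needle trajectories, each defined by
    a tip and a second point along the needle path."""
    u = np.asarray(point_a, dtype=float) - np.asarray(tip_a, dtype=float)
    v = np.asarray(point_b, dtype=float) - np.asarray(tip_b, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against round-off pushing the cosine outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def tip_distance_mm(tip_a, tip_b):
    """Euclidean distance between corresponding needle tip positions."""
    return float(np.linalg.norm(np.asarray(tip_a, dtype=float)
                                - np.asarray(tip_b, dtype=float)))
```

Averaging these two quantities over all visible needles yields summary statistics of the kind reported in the abstract.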

  20. Flow Visualization of Three-Dimensionality Inside the 12 cc Penn State Pulsatile Pediatric Ventricular Assist Device

    PubMed Central

    Roszelle, Breigh N.; Deutsch, Steven; Manning, Keefe B.

    2010-01-01

    To help address the limited availability of donor organs for pediatric heart transplants, Penn State has continued development of a pulsatile Pediatric Ventricular Assist Device (PVAD). Initial studies of the PVAD observed an increase in thrombus formation due to differences in flow field physics when compared to adult-sized devices, which included a higher degree of three-dimensionality. This unique flow field brings into question the use of 2D planar particle image velocimetry (PIV) as a flow visualization technique; however, the small size and high curvature of the PVAD make other tools such as stereoscopic PIV impractical. To test the reliability of the 2D results, we perform a pseudo-3D PIV study using planes both parallel and normal to the diaphragm, employing a mock circulatory loop containing a viscoelastic fluid that mimics 40% hematocrit blood. We find that while the third component of velocity is extremely helpful to a physical understanding of the flow, particularly of the diastolic jet and the development of a desired rotational pattern, the flow data taken parallel to the diaphragm are sufficient to describe the wall shear rates, a critical aspect of the study of thrombosis and the design of such pumps. PMID:19936926
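    Wall shear rate, the quantity the authors argue the planes parallel to the diaphragm capture adequately, is the wall-normal gradient of the tangential velocity evaluated at the wall. A minimal sketch of estimating it from a near-wall PIV velocity profile, assuming the wall sits at the first sample of the profile:

```python
import numpy as np

def wall_shear_rate(y, u):
    """Estimate the wall shear rate du/dy at the wall from a near-wall
    velocity profile: y are wall-normal positions (wall at y[0]), u the
    tangential velocity at those positions. numpy.gradient applies a
    one-sided difference at the boundary sample."""
    dudy = np.gradient(np.asarray(u, dtype=float), np.asarray(y, dtype=float))
    return float(dudy[0])  # gradient at the wall-adjacent sample
```

Multiplying this shear rate by the fluid's effective viscosity would give the wall shear stress commonly reported in blood-pump studies.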
