Social Presence and Motivation in a Three-Dimensional Virtual World: An Explanatory Study
ERIC Educational Resources Information Center
Yilmaz, Rabia M.; Topu, F. Burcu; Goktas, Yuksel; Coban, Murat
2013-01-01
Three-dimensional (3-D) virtual worlds differ from other learning environments in their similarity to real life, providing opportunities for more effective communication and interaction. With these features, 3-D virtual worlds possess considerable potential to enhance learning opportunities. For effective learning, the users' motivation levels and…
ERIC Educational Resources Information Center
Dickey, Michele D.
2005-01-01
Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and distance education. Three-dimensional (3D) virtual worlds are a combination of desktop interactive virtual reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe…
Three-dimensional compound comparison methods and their application in drug discovery.
Shin, Woong-Hee; Zhu, Xiaolei; Bures, Mark Gregory; Kihara, Daisuke
2015-07-16
Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information about a target receptor and that they are faster than structure-based methods. LBVS methods can be classified by the complexity of the ligand structure information they use: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can achieve enhanced performance because they account for the conformational flexibility of compounds. In this paper, a number of 3D methods are reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening; specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds and computational speed.
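The ranking step shared by LBVS methods can be illustrated with a toy fingerprint comparison (a hypothetical sketch: the 3D methods reviewed compare shapes or pharmacophores over conformer ensembles, not bit sets, and the compound names below are invented):

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical fingerprints: bit positions set for a query ligand and a library.
query = {1, 4, 9, 17, 23}
library = {
    "cmpd_A": {1, 4, 9, 17, 30},   # shares 4 bits with the query
    "cmpd_B": {2, 5, 11},          # no overlap
    "cmpd_C": {1, 4, 23, 40, 41},  # shares 3 bits
}

# Rank the library by similarity to the query, most similar first.
ranked = sorted(library, key=lambda k: tanimoto(query, library[k]), reverse=True)
print(ranked)  # → ['cmpd_A', 'cmpd_C', 'cmpd_B']
```

Real screening pipelines apply the same sort-by-similarity pattern, only with far richer descriptors and much larger libraries.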
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera and a lenticular lens array for presenting parallax content from different angles, is proposed. Compared with previous AR implementations based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method realizes glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can obtain rich 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved stereo-camera-based method realizes AR glass-free 3D display, and both the virtual objects and the real scene exhibit realistic and pronounced stereo performance.
Swennen, Gwen R J
2014-11-01
The purpose of this article is to evaluate the timing of three-dimensional (3D) virtual treatment planning of orthognathic surgery in daily clinical routine. A total of 350 consecutive patients were included in this study. All patients were scanned following the standardized "Triple CBCT Scan Protocol" in centric relation. Integrated 3D virtual planning and actual surgery were performed by the same surgeon in all patients. Although the workflow is clinically acceptable, software improvements, especially in 3D virtual occlusal definition, are still needed to make 3D virtual planning of orthognathic surgery less time-consuming and more user-friendly for the clinician. Copyright © 2014 Elsevier Inc. All rights reserved.
Full Immersive Virtual Environment Cave[TM] in Chemistry Education
ERIC Educational Resources Information Center
Limniou, M.; Roberts, D.; Papadopoulos, N.
2008-01-01
By comparing two-dimensional (2D) chemical animations designed for the computer desktop with three-dimensional (3D) chemical animations designed for the fully immersive virtual reality environment CAVE[TM], we studied how virtual reality environments could raise students' interest and motivation for learning. By using 3ds max[TM], we can visualize…
ERIC Educational Resources Information Center
Nussli, Natalie; Oh, Kevin
2014-01-01
The overarching question that guides this review is to identify the key components of effective teacher training in virtual schooling, with a focus on three-dimensional (3D) immersive virtual worlds (IVWs). The process of identifying the essential components of effective teacher training in the use of 3D IVWs will be described step-by-step. First,…
ERIC Educational Resources Information Center
Cody, Jeremy A.; Craig, Paul A.; Loudermilk, Adam D.; Yacci, Paul M.; Frisco, Sarah L.; Milillo, Jennifer R.
2012-01-01
A novel stereochemistry lesson was prepared that incorporated both handheld molecular models and embedded virtual three-dimensional (3D) images. The images are fully interactive and eye-catching for the students; methods for preparing 3D molecular images in Adobe Acrobat are included. The lesson was designed and implemented to showcase the 3D…
Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries.
Thali, Michael J; Braun, Marcel; Dirnhofer, Richard
2003-11-26
The photographic process reduces a three-dimensional (3D) wound to two dimensions. If a high-resolution 3D dataset of an object is needed, the object must be scanned three-dimensionally. Non-contact optical 3D surface scanners can be used as a powerful tool for analyzing wounds and injury-causing instruments in trauma cases. The documentation of a 3D skin wound and a bone injury using the optical scanner Advanced TOpometric Sensor (ATOS II, GOM International, Switzerland) is demonstrated in two illustrative cases. Using this 3D optical digitizing method, the wounds (the virtual 3D computer models of the skin and bone injuries) and the virtual 3D model of the injury-causing tool are documented graphically in 3D, in real-life size and shape, and can be rotated in a CAD program on the computer screen. In addition, the virtual 3D models of the bone injuries and the tool can be compared against one another in a 3D CAD program in virtual space, to see whether there are matching areas. Further steps in forensic medicine will be full 3D surface documentation of the human body and all forensically relevant injuries using optical 3D scanners.
3D Virtual Reality Check: Learner Engagement and Constructivist Theory
ERIC Educational Resources Information Center
Bair, Richard A.
2013-01-01
The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement of 3D tools and specify learning gains that educators and the institutions, which are funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…
2D and 3D Traveling Salesman Problem
ERIC Educational Resources Information Center
Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt
2011-01-01
When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…
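Human tours in such experiments are typically scored against simple construction heuristics; a minimal nearest-neighbor sketch (my illustration, not the study's protocol) that works unchanged for 2D or 3D coordinates:

```python
import math

def nearest_neighbor_tour(points):
    """Greedy nearest-neighbor TSP tour; points are 2D or 3D coordinate tuples."""
    unvisited = list(range(1, len(points)))
    tour = [0]                       # start arbitrarily at the first point
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total closed-tour length, returning to the starting point."""
    return sum(math.dist(points[tour[i - 1]], points[tour[i]])
               for i in range(len(tour)))

# Four corners of a unit square: here the greedy tour is also optimal.
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbor_tour(square)
print(tour, tour_length(square, tour))  # → [0, 1, 2, 3] 4.0
```

Appending a third coordinate to each tuple (e.g. `(0, 0, 1)`) makes the same code solve the 3D variant, since `math.dist` accepts points of any dimension.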
ERIC Educational Resources Information Center
Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel
2015-01-01
A simple procedure to convert Protein Data Bank files (.pdb) into stereolithography files (.stl) using the VMD software (Visual Molecular Dynamics) is reported. This tutorial allows one to generate, with a very simple protocol, customized three-dimensional structures that can be printed by a low-cost 3D printer and used for teaching chemical education…
A Virtual Campus Based on Human Factor Engineering
ERIC Educational Resources Information Center
Yang, Yuting; Kang, Houliang
2014-01-01
Three-dimensional (3D) virtual reality has become increasingly popular in many areas, especially in building digital campuses. This paper introduces a virtual campus based on a 3D model of the Tourism and Culture College of Yunnan University (TCYU). Production of the virtual campus was aided by Human Factors and Ergonomics (HF&E), an…
Exploring the User Experience of Three-Dimensional Virtual Learning Environments
ERIC Educational Resources Information Center
Shin, Dong-Hee; Biocca, Frank; Choo, Hyunseung
2013-01-01
This study examines the users' experiences with three-dimensional (3D) virtual environments to investigate the areas of development as a learning application. For the investigation, the modified technology acceptance model (TAM) is used with constructs from expectation-confirmation theory (ECT). Users' responses to questions about cognitive…
ERIC Educational Resources Information Center
Wang, Shwu-huey
2012-01-01
In order to understand (1) which students can be helped by a three-dimensional virtual learning environment (3D VLE), and (2) the relationship between a conventional test (i.e., a paper-and-pencil test) and the 3D VLE used in this study, the study designs a 3D virtual supermarket (3DVS) to help students transform their role…
Joda, Tim; Brägger, Urs; Gallucci, German
2015-01-01
Digital developments have led to the opportunity to compose simulated patient models based on three-dimensional (3D) skeletal, facial, and dental imaging. The aim of this systematic review is to provide an update on the current knowledge, to report on the technical progress in the field of 3D virtual patient science, and to identify further research needs to accomplish clinical translation. Searches were performed electronically (MEDLINE and OVID) and manually up to March 2014 for studies of 3D fusion imaging to create a virtual dental patient. Inclusion criteria were limited to human studies reporting on the technical protocol for superimposition of at least two different 3D data sets and medical field of interest. Of the 403 titles originally retrieved, 51 abstracts and, subsequently, 21 full texts were selected for review. Of the 21 full texts, 18 studies were included in the systematic review. Most of the investigations were designed as feasibility studies. Three different types of 3D data were identified for simulation: facial skeleton, extraoral soft tissue, and dentition. A total of 112 patients were investigated in the development of 3D virtual models. Superimposition of data on the facial skeleton, soft tissue, and/or dentition is a feasible technique to create a virtual patient under static conditions. Three-dimensional image fusion is of interest and importance in all fields of dental medicine. Future research should focus on the real-time replication of a human head, including dynamic movements, capturing data in a single step.
ERIC Educational Resources Information Center
Jiman, Juhanita
This paper discusses the use of Virtual Reality (VR) in e-learning environments where an intelligent three-dimensional (3D) virtual person plays the role of an instructor. With the existence of this virtual instructor, it is hoped that the teaching and learning in the e-environment will be more effective and productive. This virtual 3D animated…
Ferraz, Eduardo Gomes; Andrade, Lucio Costa Safira; dos Santos, Aline Rode; Torregrossa, Vinicius Rabelo; Rubira-Bullen, Izabel Regina Fischer; Sarmento, Viviane Almeida
2013-12-01
The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles produced from two segmentation protocols ("outline only" and "all-boundary lines"). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, in which linear measurements between anatomical landmarks were obtained and compared at a 5% significance level. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from the segmentation protocols tested (p = 0.24). When designing a virtual 3D reconstruction, both the "outline only" and "all-boundary lines" segmentation protocols can therefore be used. Virtual processing of CT images is the most complex stage in the manufacture of a biomodel; establishing a better protocol for this phase allows the construction of a biomodel whose characteristics are closer to the original anatomical structures, which is essential to ensure correct preoperative planning and suitable treatment.
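The landmark-distance comparison underlying such accuracy studies can be sketched as follows; the paired measurements below are invented for illustration only (the study itself compared 10 mandibles at a 5% significance level):

```python
# Hypothetical paired linear measurements (mm) between the same anatomical
# landmarks on a dry mandible and on its virtual 3D reconstruction.
dry     = [35.2, 61.8, 44.5, 28.9]
virtual = [35.0, 62.1, 44.2, 29.1]

# Signed differences reveal systematic bias; absolute relative error
# summarizes overall measurement accuracy.
diffs = [v - d for v, d in zip(virtual, dry)]
mean_diff = sum(diffs) / len(diffs)
mean_abs_pct = sum(abs(v - d) / d for v, d in zip(virtual, dry)) / len(dry) * 100

print(f"mean difference: {mean_diff:+.3f} mm")
print(f"mean absolute error: {mean_abs_pct:.2f} %")
```

With real data, the mean difference would then feed a paired significance test (such as the one that produced the study's p = 0.24).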
Three-Dimensional Analysis and Surgical Planning in Craniomaxillofacial Surgery.
Steinbacher, Derek M
2015-12-01
Three-dimensional (3D) analysis and planning are powerful tools in craniofacial and reconstructive surgery. The elements include 1) analysis, 2) planning, 3) virtual surgery, 4) 3D printouts of guides or implants, and 5) verification of actual to planned results. The purpose of this article is to review different applications of 3D planning in craniomaxillofacial surgery. Case examples involving 3D analysis and planning were reviewed. Common threads pertaining to all types of reconstruction are highlighted and contrasted with unique aspects specific to new applications in craniomaxillofacial surgery. Six examples of 3D planning are described: 1) cranial reconstruction, 2) craniosynostosis, 3) midface advancement, 4) mandibular distraction, 5) mandibular reconstruction, and 6) orthognathic surgery. Planning in craniomaxillofacial surgery is useful and has applicability across different procedures and reconstructions. Three-dimensional planning and virtual surgery enhance efficiency, accuracy, creativity, and reproducibility in craniomaxillofacial surgery. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
[3D Virtual Reality Laparoscopic Simulation in Surgical Education - Results of a Pilot Study].
Kneist, W; Huber, T; Paschold, M; Lang, H
2016-06-01
The use of three-dimensional imaging in laparoscopy is a growing field and has led to 3D systems in laparoscopic simulation. Studies on box trainers have shown differing results concerning the benefit of 3D imaging, with some reporting an advantage of 3D imaging for laparoscopic novices. There are currently no studies analysing 3D imaging in virtual reality laparoscopy (VRL). Five surgical fellows, 10 surgical residents and 29 undergraduate medical students performed abstract and procedural tasks on a VRL simulator using conventional 2D and 3D imaging in randomised order. No significant differences between the two imaging systems were found for students or medical professionals. Participants who preferred three-dimensional imaging achieved significantly better results in 2D as well as in 3D imaging. In summary, this study did not confirm the superiority of 3D imaging over conventional 2D imaging on a VRL simulator. Georg Thieme Verlag KG Stuttgart · New York.
Exploring 3-D Virtual Reality Technology for Spatial Ability and Chemistry Achievement
ERIC Educational Resources Information Center
Merchant, Z.; Goetz, E. T.; Keeney-Kennicutt, W.; Cifuentes, L.; Kwok, O.; Davis, T. J.
2013-01-01
We investigated the potential of Second Life® (SL), a three-dimensional (3-D) virtual world, to enhance undergraduate students' learning of a vital chemistry concept. A quasi-experimental pre-posttest control group design was used to conduct the study. A total of 387 participants completed three assignment activities either in SL or using…
ERIC Educational Resources Information Center
Bouras, Christos; Triglianos, Vasileios; Tsiatsos, Thrasyvoulos
2014-01-01
Three dimensional Collaborative Virtual Environments are a powerful form of collaborative telecommunication applications, enabling the users to share a common three-dimensional space and interact with each other as well as with the environment surrounding them, in order to collaboratively solve problems or aid learning processes. Such an…
ERIC Educational Resources Information Center
Keating, Thomas; Barnett, Michael; Barab, Sasha A.; Hay, Kenneth E.
2002-01-01
Describes the Virtual Solar System (VSS) course which is one of the first attempts to integrate three-dimensional (3-D) computer modeling as a central component of introductory undergraduate education. Assesses changes in student understanding of astronomy concepts as a result of participating in an experimental introductory astronomy course in…
Kraeima, Joep; Schepers, Rutger H; van Ooijen, Peter M A; Steenbakkers, Roel J H M; Roodenburg, Jan L N; Witjes, Max J H
2015-10-01
Three-dimensional (3D) virtual planning of reconstructive surgery after resection is a frequently used method for improving accuracy and predictability. However, when applied to malignant cases, planning the oncologic resection margins is difficult because of the limited visualisation of tumours in current 3D planning. Embedding tumour delineation on a magnetic resonance image, similar to the routinely performed radiotherapeutic contouring of tumours, is expected to provide better margin planning. A new software pathway was developed for embedding tumour delineation on magnetic resonance imaging (MRI) within the 3D virtual surgical planning. The software pathway was validated using five bovine cadavers implanted with phantom tumour objects. MRI and computed tomography (CT) images were fused, and the tumour was delineated using radiation oncology software. These data were converted to the 3D virtual planning software by means of a conversion algorithm developed to translate the tumour delineation data to the 3D virtual plan environment. Tumour volumes and localization were determined in both software stages for comparison analysis, and the approach was applied to three clinical cases. The average difference in tumour volume was 1.7%. This study reports a validated software pathway providing multi-modality image fusion for 3D virtual surgical planning. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Computer Vision Assisted Virtual Reality Calibration
NASA Technical Reports Server (NTRS)
Kim, W.
1999-01-01
A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.
Web-based Three-dimensional Virtual Body Structures: W3D-VBS
Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex
2002-01-01
Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495
Mendez, Bernardino M; Chiodo, Michael V; Patel, Parit A
2015-07-01
Virtual surgical planning using three-dimensional (3D) printing technology has improved surgical efficiency and precision. A limitation of this technology is that production of 3D surgical models requires a third-party source, leading to increased costs (up to $4000) and prolonged assembly times (averaging 2-3 weeks). The purpose of this study is to evaluate the feasibility, cost, and production time of customized skull models created by an "in-office" 3D printer for craniofacial reconstruction. Two patients underwent craniofacial reconstruction with the assistance of "in-office" 3D printing technology. Three-dimensional skull models were created from a bioplastic filament with a 3D printer using computed tomography (CT) image data, and the cost and production time for each model were measured. For both patients, a customized 3D surgical model was used preoperatively to plan split calvarial bone grafting and intraoperatively to perform the craniofacial reconstruction more efficiently and precisely. The average cost for surgical model production with the "in-office" 3D printer was $25 (the cost of the bioplastic material used to create the model), and the average production time was 14 hours. Virtual surgical planning using "in-office" 3D printing is feasible and allows for a more cost-effective and less time-consuming method of creating surgical models and guides. By bringing 3D printing to the office setting, we hope to improve intraoperative efficiency, surgical precision, and overall cost for various types of craniofacial and reconstructive surgery.
ERIC Educational Resources Information Center
Pellas, Nikolaos; Kazanidis, Ioannis; Konstantinou, Nikolaos; Georgiou, Georgia
2017-01-01
The present literature review builds on the results of 50 research articles published from 2000 until 2016. All these studies have successfully accomplished various learning tasks in the domain of Science, Technology, Engineering, and Mathematics (STEM) education using three-dimensional (3-D) multi-user virtual worlds for Primary, Secondary and…
ERIC Educational Resources Information Center
Liu, Chang; Franklin, Teresa; Shelor, Roger; Ozercan, Sertac; Reuter, Jarrod; Ye, En; Moriarty, Scott
2011-01-01
Game-like three-dimensional (3D) virtual worlds have become popular venues for youth to explore and interact with friends. To bring vital financial literacy education to them in places they frequent, a multi-disciplinary team of computer scientists, educators, and financial experts developed a youth-oriented financial literacy education game in…
Magical Stories: Blending Virtual Reality and Artificial Intelligence.
ERIC Educational Resources Information Center
McLellan, Hilary
Artificial intelligence (AI) techniques and virtual reality (VR) make possible powerful interactive stories, and this paper focuses on examples of virtual characters in three-dimensional (3-D) worlds. Waldern, a virtual reality game designer, has theorized about and implemented software design of virtual teammates and opponents that incorporate AI…
Design of Learning Spaces in 3D Virtual Worlds: An Empirical Investigation of "Second Life"
ERIC Educational Resources Information Center
Minocha, Shailey; Reeves, Ahmad John
2010-01-01
"Second Life" (SL) is a three-dimensional (3D) virtual world, and educational institutions are adopting SL to support their teaching and learning. Although the question of how 3D learning spaces should be designed to support student learning and engagement has been raised among SL educators and designers, there is hardly any guidance or…
ERIC Educational Resources Information Center
Liu, Chang; Zhong, Ying; Ozercan, Sertac; Zhu, Qing
2013-01-01
This paper presents a template-based solution to overcome technical barriers non-technical computer end users face when developing functional learning environments in three-dimensional virtual worlds (3DVW). "iVirtualWorld," a prototype of a platform-independent 3DVW creation tool that implements the proposed solution, facilitates 3DVW…
Temkin, Bharti; Acosta, Eric; Malvankar, Ameya; Vaidyanath, Sreeram
2006-04-01
The Visible Human digital datasets make it possible to develop computer-based anatomical training systems that use virtual anatomical models (virtual body structures, VBS). Medical schools are combining these virtual training systems with classical anatomy teaching methods that use labeled images and cadaver dissection. In this paper we present a customizable web-based three-dimensional anatomy training system, W3D-VBS. W3D-VBS uses the National Library of Medicine's (NLM) Visible Human Male datasets to interactively locate, explore, select, extract, highlight, label, and visualize realistic 2D (using axial, coronal, and sagittal views) and 3D virtual structures. A real-time self-guided virtual tour of the entire body is designed to provide detailed anatomical information about structures, substructures, and proximal structures. The system thus facilitates learning of visuospatial relationships at a level of detail that may not be possible by any other means. The use of volumetric structures allows for repeated real-time virtual dissections, from any angle, at the convenience of the user. Volumetric (3D) virtual dissections are performed by adding, removing, highlighting, and labeling individual structures (and/or entire anatomical systems). The resultant virtual explorations (consisting of anatomical 2D/3D illustrations and animations), with user-selected highlighting colors and label positions, can be saved and used for generating lesson plans and evaluation systems. Tracking users' progress with the evaluation system helps customize the curriculum, making W3D-VBS a powerful learning tool. Our plan is to incorporate other Visible Human segmented datasets, especially datasets with higher resolutions, which make it possible to include finer anatomical structures such as nerves and small vessels. (c) 2006 Wiley-Liss, Inc.
Three-Dimensional Sensor Common Operating Picture (3-D Sensor COP)
2017-01-01
created. Additionally, a 3-D model of the sensor itself can be created. Using these 3-D models, along with emerging virtual and augmented reality tools… Contents: Introduction; The 3-D Sensor COP; Virtual Sensor Placement; Conclusions; References.
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as architectural design and construction, industrial design, aeronautics, scientific research, entertainment, media advertisement, military use and so on. However, most technologies provide 3D display in front of screens that are parallel to the walls, which reduces the sense of immersion. To obtain a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system, and virtual cameras can simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the viewer's eye position in the real world. When the observer stands inside the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used; when the observer stands outside the circumcircle, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras: the near clip plane setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second. To validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models, viewed horizontally, is constructed and demonstrated, providing high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
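The offset perspective projection described here corresponds to an asymmetric (off-axis) view frustum; below is a minimal sketch of such a projection matrix using the glFrustum-style convention, which I assume for illustration (the paper's exact parameterization may differ):

```python
def offset_frustum(l, r, b, t, n, f):
    """Asymmetric (off-axis) perspective projection matrix, glFrustum-style.
    l/r/b/t bound the near plane; n/f are the near/far clip distances (> 0).
    When l != -r or b != -t, the frustum is offset from the optical axis."""
    return [
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0],
    ]

# A horizontally offset frustum (l != -r), as used for an off-center viewpoint.
M = offset_frustum(l=-0.2, r=0.6, b=-0.3, t=0.3, n=0.5, f=100.0)
print(M[0])  # the x row carries the horizontal offset term (r+l)/(r-l)
```

A symmetric frustum (`l == -r`, `b == -t`) zeroes the offset terms, recovering an ordinary centered perspective projection.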
ERIC Educational Resources Information Center
Ip, Horace H. S.; Lai, Candy Hoi-Yan; Wong, Simpson W. L.; Tsui, Jenny K. Y.; Li, Richard Chen; Lau, Kate Shuk-Ying; Chan, Dorothy F. Y.
2017-01-01
Previous research has illustrated the unique benefits of three-dimensional (3-D) Virtual Reality (VR) technology in Autism Spectrum Disorder (ASD) children. This study examined the use of 3-D VR technology as an assessment tool in ASD children, and further compared its use to two-dimensional (2-D) tasks. Additionally, we aimed to examine…
Three-dimensional printing in cardiology: Current applications and future challenges.
Luo, Hongxing; Meyer-Szary, Jarosław; Wang, Zhongmin; Sabiniewicz, Robert; Liu, Yuhao
2017-01-01
Three-dimensional (3D) printing has attracted huge interest in recent years. Broadly speaking, it refers to technology which converts a predesigned virtual model into a touchable object. In clinical medicine, it usually converts a series of two-dimensional medical images acquired through computed tomography, magnetic resonance imaging or 3D echocardiography into a physical model. Medical 3D printing consists of three main steps: image acquisition, virtual reconstruction and 3D manufacturing. It is a promising tool for preoperative evaluation, medical device design, hemodynamic simulation and medical education, and it is likely to reduce operative risk and increase operative success. However, the most relevant studies are case reports or series, which are underpowered for testing its actual effect on patient outcomes. The decision to make a 3D cardiac model may seem arbitrary, since it is mostly based on a cardiologist's perceived difficulty in performing an interventional procedure. A uniform consensus is urgently needed to standardize the key steps of 3D printing, from image acquisition to final production. In the future, rigorously designed clinical trials could further validate the effect of 3D printing on the treatment of cardiovascular diseases. (Cardiol J 2017; 24, 4: 436-444).
Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui
2014-07-11
Three-dimensional (3D) bioimaging, visualization and data analysis urgently need powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.
ERIC Educational Resources Information Center
Roth, Jeremy A.; Wilson, Timothy D.; Sandig, Martin
2015-01-01
Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education 3D models and learning tools have been associated…
Speksnijder, L; Rousian, M; Steegers, E A P; Van Der Spek, P J; Koning, A H J; Steensma, A B
2012-07-01
Virtual reality is a novel method of visualizing ultrasound data with the perception of depth and offers possibilities for measuring non-planar structures. The levator ani hiatus has both convex and concave aspects. The aim of this study was to compare levator ani hiatus volume measurements obtained with conventional three-dimensional (3D) ultrasound and with a virtual reality measurement technique and to establish their reliability and agreement. One hundred symptomatic patients visiting a tertiary pelvic floor clinic with a normal, intact levator ani muscle diagnosed on translabial ultrasound were selected. Datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm at the level of minimal hiatal dimensions during contraction. The levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatus volume in conventional 3D ultrasound (in cm³). Levator ani hiatus volume measurements were then obtained semi-automatically in virtual reality (cm³) using a segmentation algorithm. An intra- and interobserver analysis of reliability and agreement was performed in 20 randomly chosen patients. The mean difference between levator ani hiatus volume measurements performed using conventional 3D ultrasound and virtual reality was 0.10 (95% CI, -0.15 to 0.35) cm³. The intraclass correlation coefficient (ICC) comparing conventional 3D ultrasound with virtual reality measurements was > 0.96. Intra- and interobserver ICCs for conventional 3D ultrasound measurements were > 0.94 and for virtual reality measurements were > 0.97, indicating good reliability for both. Levator ani hiatus volume measurements performed using virtual reality were reliable and the results were similar to those obtained with conventional 3D ultrasonography. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
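The conventional measurement described above is a simple area-times-thickness product, and agreement between the two techniques is summarized by the mean difference (Bland-Altman bias). A sketch with made-up paired volumes, not the study's data:

```python
def hiatal_volume_cm3(area_cm2, slice_thickness_cm=1.5):
    # Conventional 3D-ultrasound estimate: levator area times rendered slab thickness.
    return area_cm2 * slice_thickness_cm

def bland_altman(method_a, method_b):
    # Mean difference (bias) and naive 95% limits of agreement for paired measurements.
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

# Hypothetical paired volumes (cm^3): conventional 3D US vs. virtual-reality segmentation.
us = [hiatal_volume_cm3(a) for a in (12.0, 14.5, 11.2, 13.8)]
vr = [17.9, 21.9, 16.7, 20.8]
bias, limits = bland_altman(us, vr)
```

With real data, the reported bias of 0.10 cm³ and the ICCs would be computed over all 100 patients.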
Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays.
Kim, Hwi; Hahn, Joonku; Lee, Byoungho
2009-04-13
Three-dimensional (3D) displays having regular-polyhedron structures are proposed and their imaging characteristics are analyzed. Four types of conceptual regular-polyhedron 3D displays, i.e., hexahedron, octahedron, dodecahedron, and icosahedron, are considered. In principle, a regular-polyhedron 3D display can present omnidirectional full-parallax 3D images. Design conditions for structural factors, such as the viewing angle of each facet panel and the observation distance, are studied for 3D displays with omnidirectional full parallax. As a main issue, the image volumes containing virtual 3D objects represented by the four types of regular-polyhedron displays are comparatively analyzed.
Coming down to Earth: Helping Teachers Use 3D Virtual Worlds in Across-Spaces Learning Situations
ERIC Educational Resources Information Center
Muñoz-Cristóbal, Juan A.; Prieto, Luis P.; Asensio-Pérez, Juan I.; Martínez-Monés, Alejandra; Jorrín-Abellán, Iván M.; Dimitriadis, Yannis
2015-01-01
Different approaches have explored how to provide seamless learning across multiple ICT-enabled physical and virtual spaces, including three-dimensional virtual worlds (3DVW). However, these approaches present limitations that may reduce their acceptance in authentic educational practice: The difficulties of authoring and sharing teacher-created…
Van Hemelen, Geert; Van Genechten, Maarten; Renier, Lieven; Desmedt, Maria; Verbruggen, Elric; Nadjmi, Nasser
2015-07-01
Throughout the history of computing, shortening the gap between the physical world and the digital world behind the screen has been a constant aim. Recent advances in three-dimensional (3D) virtual surgery programs have reduced this gap significantly. Although 3D-assisted surgery is now widely available for orthognathic surgery, one might still ask whether a 3D virtual planning approach is a better alternative to a conventional two-dimensional (2D) planning technique. The purpose of this study was to compare the accuracy of a traditional 2D technique and a 3D computer-aided prediction method. A double-blind randomised prospective study was performed to compare the prediction accuracy of a traditional 2D planning technique versus a 3D computer-aided planning approach. The accuracy of the hard and soft tissue profile predictions using both planning methods was investigated. There was a statistically significant difference between 2D and 3D soft tissue planning (p < 0.05). The statistically significant difference found between 2D and 3D planning and the actual soft tissue outcome was not confirmed by a statistically significant difference between methods. The 3D planning approach provides more accurate soft tissue planning. However, 2D orthognathic planning is comparable to 3D planning when it comes to hard tissue planning. This study provides relevant results for choosing between 3D and 2D planning in clinical practice. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera
NASA Astrophysics Data System (ADS)
Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.
2004-01-01
We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.
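The depth recovery this abstract describes can be illustrated with the rigid-limb constraint alone: under an (assumed) orthographic projection, a known forearm length fixes the out-of-plane offset up to sign, which joint-angle limits can then resolve. A simplified sketch, not the authors' full tracker:

```python
import math

def recover_depth(elbow_xy, wrist_xy, forearm_len, elbow_z=0.0, sign=+1):
    # Under an orthographic camera, the rigid forearm length L constrains the
    # unobserved depth offset: dz = sqrt(L^2 - dx^2 - dy^2). The +- sign
    # ambiguity would be resolved by anthropometric joint-angle limits.
    dx = wrist_xy[0] - elbow_xy[0]
    dy = wrist_xy[1] - elbow_xy[1]
    planar_sq = dx * dx + dy * dy
    if planar_sq > forearm_len ** 2:
        raise ValueError("2D separation exceeds limb length; check image scale")
    dz = math.sqrt(forearm_len ** 2 - planar_sq)
    return elbow_z + sign * dz

# A forearm of length 5 seen as a 2D offset of (3, 0): the depth offset must be 4.
z = recover_depth((0.0, 0.0), (3.0, 0.0), 5.0)
```

Repeating this per frame along the arm's kinematic chain yields the 3D fingertip trajectory from a single camera.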
From Vesalius to Virtual Reality: How Embodied Cognition Facilitates the Visualization of Anatomy
ERIC Educational Resources Information Center
Jang, Susan
2010-01-01
This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and…
A Head in Virtual Reality: Development of A Dynamic Head and Neck Model
ERIC Educational Resources Information Center
Nguyen, Ngan; Wilson, Timothy D.
2009-01-01
Advances in computer and interface technologies have made it possible to create three-dimensional (3D) computerized models of anatomical structures for visualization, manipulation, and interaction in a virtual 3D environment. In the past few decades, a multitude of digital models have been developed to facilitate complex spatial learning of the…
Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning
ERIC Educational Resources Information Center
Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan
2009-01-01
Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…
Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment
ERIC Educational Resources Information Center
Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth
2009-01-01
This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments"…
3D Flow visualization in virtual reality
NASA Astrophysics Data System (ADS)
Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa
2017-11-01
By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
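The isosurfaces in this visualization are of the Q-criterion, which flags vortex cores where rotation dominates strain in the velocity-gradient tensor. A pointwise sketch of evaluating Q (independent of the authors' Unity implementation):

```python
def q_criterion(grad):
    # grad[i][j] = du_i/dx_j at one point. Q = 0.5 * (||Omega||^2 - ||S||^2),
    # where S and Omega are the symmetric (strain) and antisymmetric (rotation)
    # parts of the velocity-gradient tensor; Q > 0 marks rotation-dominated flow.
    s2 = 0.0
    o2 = 0.0
    for i in range(3):
        for j in range(3):
            s = 0.5 * (grad[i][j] + grad[j][i])
            o = 0.5 * (grad[i][j] - grad[j][i])
            s2 += s * s
            o2 += o * o
    return 0.5 * (o2 - s2)

# Solid-body rotation about z (pure vorticity): Q = +1 flags a vortex core.
rotation = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
# Pure planar strain (no rotation): Q = -1.
strain = [[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, 0.0]]
q_rot = q_criterion(rotation)
q_str = q_criterion(strain)
```

Evaluating Q over the whole grid and extracting a positive-level isosurface produces the kind of wake structures rendered in the headset.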
Virtual Worlds? "Outlook Good"
ERIC Educational Resources Information Center
Kelton, AJ
2008-01-01
Many people believed that virtual worlds would end up like the eight-track audiotape: a memory of something no longer used (or useful). Yet today there are hundreds of higher education institutions represented in three-dimensional (3D) virtual worlds such as Active Worlds and Second Life. The movement toward the virtual realm as a viable teaching…
A Downloadable Three-Dimensional Virtual Model of the Visible Ear
Wang, Haobing; Merchant, Saumil N.; Sorensen, Mads S.
2008-01-01
Purpose To develop a three-dimensional (3-D) virtual model of a human temporal bone and surrounding structures. Methods A fresh-frozen human temporal bone was serially sectioned and digital images of the surface of the tissue block were recorded (the ‘Visible Ear’). The image stack was resampled at a final resolution of 50 × 50 × 50/100 µm/voxel, registered in custom software and segmented in Photoshop® 7.0. The segmented image layers were imported into Amira® 3.1 to generate smooth polygonal surface models. Results The 3-D virtual model presents the structures of the middle, inner and outer ears in their surgically relevant surroundings. It is packaged within cross-platform freeware, which allows for full rotation, visibility and transparency control, as well as the ability to slice the 3-D model open at any section. The appropriate raw image can be superimposed on the cleavage plane. The model can be downloaded at https://research.meei.harvard.edu/Otopathology/3dmodels/ PMID:17124433
NASA Technical Reports Server (NTRS)
1998-01-01
Crystal River Engineering was originally featured in Spinoff 1992 with the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. The Convolvotron was developed for Ames' research on virtual acoustic displays. Crystal River is now a subsidiary of Aureal Semiconductor, Inc. and they together develop and market the technology, which is a 3-D (three dimensional) audio technology known commercially today as Aureal 3D (A-3D). The technology has been incorporated into video games, surround sound systems, and sound cards.
Brian J. Williams; Bo Song; Chou Chiao-Ying; Thomas M. Williams; John Hom
2010-01-01
Three-dimensional (3D) visualization is a useful tool that depicts virtual forest landscapes on a computer. Previous studies in visualization have required high-end computer hardware and specialized technical skills. A virtual forest landscape can be used to show different effects of disturbances and management scenarios on a computer, which allows observation of forest...
Kim, Jong Bae; Brienza, David M
2006-01-01
A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. Characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models, by analyzing the accuracy of dimensional measurements in a virtual environment and by comparing dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.
Tran, Ngoc Hieu; Tantidhnazet, Syrina; Raocharernporn, Somchart; Kiattavornchareon, Sirichai; Pairuchvej, Verasak; Wongsirichat, Natthamet
2018-05-01
The benefit of computer-assisted planning in orthognathic surgery (OGS) has been extensively documented over the last decade. This study aimed to evaluate the accuracy of three-dimensional (3D) virtual planning in surgery-first OGS. Fifteen patients with skeletal class III malocclusion who underwent bimaxillary OGS with surgery-first approach were included. A composite skull model was reconstructed using data from cone-beam computed tomography and stereolithography from a scanned dental cast. Surgical procedures were simulated using Simplant O&O software, and the virtual plan was transferred to the operating room using 3D-printed splints. Differences of the 3D measurements between the virtual plan and postoperative results were evaluated, and the accuracy was reported using root mean square deviation (RMSD) and the Bland-Altman method. The virtual planning was successfully transferred to surgery. The overall mean linear difference was 0.88 mm (0.79 mm for the maxilla and 1 mm for the mandible), and the overall mean angular difference was 1.16°. The RMSD ranged from 0.86 to 1.46 mm and 1.27° to 1.45°, within the acceptable clinical criteria. In this study, virtual surgical planning and 3D-printed surgical splints facilitated the diagnosis and treatment planning, and offered an accurate outcome in surgery-first OGS.
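The RMSD reported in such accuracy studies is computed over paired 3D coordinates of the planned and achieved positions. A sketch with hypothetical landmarks, not the study's measurements:

```python
import math

def rmsd(planned, actual):
    # Root mean square deviation over paired 3D landmark coordinates (mm):
    # sqrt of the mean squared Euclidean distance between corresponding points.
    sq = [sum((p - a) ** 2 for p, a in zip(pp, aa))
          for pp, aa in zip(planned, actual)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical landmarks: virtual plan vs. postoperative scan (mm).
plan = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
post = [(1.0, 0.0, 0.0), (10.0, 1.0, 0.0)]
r = rmsd(plan, post)
```

In practice the landmark sets come from registering the postoperative CBCT to the virtual plan before measuring.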
A 3D visualization and simulation of the individual human jaw.
Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo
2003-01-01
A new three-dimensional (3D) biomechanical model of the human mandible, based on a computer-generated virtual model, is proposed. Using maps obtained from special photographs of a real subject's face, it is possible to attribute personality to the virtual character, while computer animation supplies movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain the biomechanics, in which the muscular forces acting through the occlusal and condylar surfaces are in a state of 3D equilibrium. In the model, all forces are resolved into components in a selected coordinate system. The muscular forces act on the jaw at the level necessary for chewing, balancing the mandible and preventing dislocation and loading of nonarticular tissues. The work applies a new approach to computer-generated animation of virtual 3D characters (called "Body SABA"), packaged as a single low-cost, easy-to-operate object.
A 3-D Virtual Reality Model of the Sun and the Moon for E-Learning at Elementary Schools
ERIC Educational Resources Information Center
Sun, Koun-Tem; Lin, Ching-Ling; Wang, Sheng-Min
2010-01-01
The relative positions of the sun, moon, and earth, their movements, and their relationships are abstract and difficult to understand astronomical concepts in elementary school science. This study proposes a three-dimensional (3-D) virtual reality (VR) model named the "Sun and Moon System." This e-learning resource was designed by…
Virtual Reality and Learning: Where Is the Pedagogy?
ERIC Educational Resources Information Center
Fowler, Chris
2015-01-01
The aim of this paper was to build upon Dalgarno and Lee's model or framework of learning in three-dimensional (3-D) virtual learning environments (VLEs) and to extend their road map for further research in this area. The enhanced model shares the common goal with Dalgarno and Lee of identifying the learning benefits from using 3-D VLEs. The…
Visualization of spatial-temporal data based on 3D virtual scene
NASA Astrophysics Data System (ADS)
Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang
2009-10-01
The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a 3D virtual scene, using 3D visualization technology combined with GIS, so that people's abilities to cognize time and space are enhanced and improved through dynamic symbol design and interactive expression. Using particle systems, 3D simulation, virtual reality and other visual means, we can simulate the situations produced by changes in the spatial location and property information of geographical entities over time, explore and analyze their movement and transformation rules through interaction, and replay history or forecast the future. The main research objects in this paper are vehicle tracks and typhoon paths: through 3D dynamic simulation of these tracks, we realize timely monitoring of their trends and replay of their historical tracks. Visualization of spatial-temporal data in a 3D virtual scene provides an excellent cognitive instrument for spatial-temporal information: it not only clearly shows changes and developments in a situation, but can also be used for prediction and deduction of future developments.
Computational techniques to enable visualizing shapes of objects of extra spatial dimensions
NASA Astrophysics Data System (ADS)
Black, Don Vaughn, II
Envisioning extra dimensions beyond the three of common experience is a daunting challenge for three dimensional observers. Intuition relies on experience gained in a three dimensional environment. Gaining experience with virtual four dimensional objects and virtual three manifolds in four-space on a personal computer may provide the basis for an intuitive grasp of four dimensions. In order to enable such a capability for ourselves, it is first necessary to devise and implement a computationally tractable method to visualize, explore, and manipulate objects of dimension beyond three on the personal computer. A technology is described in this dissertation to convert a representation of higher dimensional models into a format that may be displayed in realtime on graphics cards available on many off-the-shelf personal computers. As a result, an opportunity has been created to experience the shape of four dimensional objects on the desktop computer. The ultimate goal has been to provide the user a tangible and memorable experience with mathematical models of four dimensional objects such that the user can see the model from any user selected vantage point. By use of a 4D GUI, an arbitrary convex hull or 3D silhouette of the 4D model can be rotated, panned, scrolled, and zoomed until a suitable dimensionally reduced view or Aspect is obtained. The 4D GUI then allows the user to manipulate a 3-flat hyperplane cutting tool to slice the model at an arbitrary orientation and position to extract or "pluck" an embedded 3D slice or "aspect" from the embedding four-space. This plucked 3D aspect can be viewed from all angles via a conventional 3D viewer using three multiple POV viewports, and optionally exported to a third party CAD viewer for further manipulation. 
Plucking and Manipulating the Aspect provides a tangible experience for the end-user in the same manner as any 3D Computer Aided Design viewing and manipulation tool does for the engineer or a 3D video game provides for the nascent student.
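The slicing operation described, intersecting a 4D model with a 3-flat hyperplane to pluck a 3D aspect, can be sketched for the simplest case: cutting the unit tesseract with the hyperplane w = c and linearly interpolating along each crossing edge (an illustration, not the dissertation's implementation):

```python
from itertools import product

def tesseract_edges():
    # All 32 edges of the unit 4-cube: vertex pairs differing in one coordinate.
    verts = list(product((0.0, 1.0), repeat=4))
    for a in verts:
        for b in verts:
            if a < b and sum(x != y for x, y in zip(a, b)) == 1:
                yield a, b

def slice_w(edges, c):
    # Intersect each edge with the hyperplane w = c; linear interpolation
    # yields (x, y, z) points of the 3D aspect embedded in the cutting 3-flat.
    points = set()
    for a, b in edges:
        wa, wb = a[3], b[3]
        if wa != wb and min(wa, wb) <= c <= max(wa, wb):
            t = (c - wa) / (wb - wa)
            points.add(tuple(a[i] + t * (b[i] - a[i]) for i in range(3)))
    return sorted(points)

# Slicing the tesseract at w = 0.5 yields the 8 vertices of a unit cube.
cube = slice_w(tesseract_edges(), 0.5)
```

The same edge-crossing logic, applied to the tetrahedral cells of a general 4D mesh, extracts the arbitrary-orientation slices the 4D GUI displays.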
Supporting Distributed Team Working in 3D Virtual Worlds: A Case Study in Second Life
ERIC Educational Resources Information Center
Minocha, Shailey; Morse, David R.
2010-01-01
Purpose: The purpose of this paper is to report on a study into how a three-dimensional (3D) virtual world (Second Life) can facilitate socialisation and team working among students working on a team project at a distance. This models the situation in many commercial sectors where work is increasingly being conducted across time zones and between…
ERIC Educational Resources Information Center
Simsek, Irfan
2016-01-01
This research, conducted in Second Life, a three-dimensional online virtual world, aims to reveal the effects on student attitudes toward mathematics courses of design activities that enable the third-grade students of secondary school (primary education seventh grade) to see the 3D objects in mathematics courses in a…
Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds
NASA Astrophysics Data System (ADS)
Minocha, Shailey; Reeves, Ahmad John
Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and interact via their own graphical, self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and wayfinding in 3D virtual worlds may impact on student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection include semi-structured interviews with Second Life students, educators and designers. The findings have revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.
The benefits of 3D modelling and animation in medical teaching.
Vernon, Tim; Peckham, Daniel
2002-12-01
Three-dimensional models created using materials such as wax, bronze and ivory have been used in the teaching of medicine for many centuries. Today, computer technology allows medical illustrators to create virtual three-dimensional medical models. This paper considers the benefits of using still and animated output from computer-generated models in the teaching of medicine, and examines how three-dimensional models are made.
Dynamic 3D echocardiography in virtual reality
van den Bosch, Annemien E; Koning, Anton HJ; Meijboom, Folkert J; McGhie, Jackie S; Simoons, Maarten L; van der Spek, Peter J; Bogers, Ad JJC
2005-01-01
Background This pilot study was performed to evaluate whether virtual reality is applicable for three-dimensional echocardiography and if three-dimensional echocardiographic 'holograms' have the potential to become a clinically useful tool. Methods Three-dimensional echocardiographic data sets from 2 normal subjects and from 4 patients with a mitral valve pathological condition were included in the study. The three-dimensional data sets were acquired with the Philips Sonos 7500 echo-system and transferred to the BARCO (Barco N.V., Kortrijk, Belgium) I-Space. Ten independent observers assessed the 6 three-dimensional data sets with and without mitral valve pathology. After 10 minutes' instruction in the I-Space, all of the observers could use the virtual pointer that is necessary to create cut planes in the hologram. Results The 10 independent observers correctly assessed the normal and pathological mitral valves in the holograms (analysis time approximately 10 minutes). Conclusion This report shows that dynamic holographic imaging of three-dimensional echocardiographic data is feasible. However, the applicability and usefulness of this technology in clinical practice are still limited. PMID:16375768
Teaching veterinary obstetrics using three-dimensional animation technology.
Scherzer, Jakob; Buchanan, M Flint; Moore, James N; White, Susan L
2010-01-01
In this three-year study, test scores for students taught veterinary obstetrics in a classroom setting with traditional media (photographs, text, and two-dimensional graphical presentations) were compared with those for students taught by incorporating three-dimensional (3D) media (linear animations and interactive QuickTime Virtual Reality models) into the classroom lectures. Incorporation of the 3D animations and interactive models significantly increased students' scores on essay questions designed to assess their comprehension of the subject matter. This approach to education may help to better prepare students for dealing with obstetrical cases during their final clinical year and after graduation.
ERIC Educational Resources Information Center
Omale, Nicholas M.
2010-01-01
This exploratory case study examines how three media attributes in 3-D MUVEs--avatars, 3-D spaces and bubble dialogue boxes--affect interaction in an online problem-based learning (PBL) activity. The study participants were eleven undergraduate students enrolled in a 200-level, three-credit-hour technology integration course at a Midwestern…
ERIC Educational Resources Information Center
Hodis, Eran; Prilusky, Jaime; Sussman, Joel L.
2010-01-01
Protein structures are hard to represent on paper. They are large, complex, and three-dimensional (3D)--four-dimensional if conformational changes count! Unlike most of their substrates, which can easily be drawn out in full chemical formula, drawing every atom in a protein would usually be a mess. Simplifications like showing only the surface of…
Principles of three-dimensional printing and clinical applications within the abdomen and pelvis.
Bastawrous, Sarah; Wake, Nicole; Levin, Dmitry; Ripley, Beth
2018-04-04
Improvements in technology and reduction in costs have led to widespread interest in three-dimensional (3D) printing. 3D-printed anatomical models contribute to personalized medicine, surgical planning, and education across medical specialties, and these models are rapidly changing the landscape of clinical practice. A physical object that can be held in one's hands allows for significant advantages over standard two-dimensional (2D) or even 3D computer-based virtual models. Radiologists have the potential to play a significant role as consultants and educators across all specialties by providing 3D-printed models that enhance clinical care. This article reviews the basics of 3D printing, including how models are created from imaging data, clinical applications of 3D printing within the abdomen and pelvis, implications for education and training, limitations, and future directions.
A standardized set of 3-D objects for virtual reality research and applications.
Peeters, David
2018-06-01
The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
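Name agreement in such norming studies is commonly quantified with the H statistic, an entropy over the distribution of names participants produce for an object. A sketch of that measure (the database's exact procedure may differ):

```python
import math

def name_agreement_h(name_counts):
    # H statistic for name agreement: H = sum over names of p_i * log2(1 / p_i).
    # H = 0 means perfect agreement (every participant produced the same name);
    # larger H means responses were split across more names.
    total = sum(name_counts.values())
    return sum((c / total) * math.log2(total / c) for c in name_counts.values())

# Hypothetical naming responses for two 3-D objects.
h_perfect = name_agreement_h({"chair": 20})           # everyone agrees
h_split = name_agreement_h({"mug": 10, "cup": 10})    # evenly split
```

Low-H objects are the safest experimental stimuli, since participants reliably map the 3-D model to one lexical item.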
NASA Astrophysics Data System (ADS)
Ren, Yilong; Duan, Xitong; Wu, Lei; He, Jin; Xu, Wu
2017-06-01
With the development of the “VR+” era, the traditional virtual assembly system for power equipment can no longer satisfy our growing needs. Based on an analysis of the traditional virtual assembly system for electric power equipment and of the application of VR technology to such systems in our country, this paper puts forward a scheme for establishing a virtual assembly system for power equipment. First, information about the power equipment is obtained; then OpenGL and multi-texture technology are used to build a 3D solid graphics library. After 3D modeling is complete, the 3D solid graphics generation program is packaged in a dynamic link library (DLL), which modularizes the power equipment model library and hides its generation algorithm. Once the 3D power equipment model database is established, the virtual assembly system for 3D power equipment separates the assembly operation of the power equipment from physical space. In addition, to address the deficiencies of traditional gesture recognition algorithms, we propose a gesture recognition algorithm for a data glove based on a BP neural network optimized by an improved PSO algorithm. Finally, the virtual assembly system for power equipment can achieve genuine multi-channel interaction.
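The closing idea, tuning a BP (backpropagation-style) neural network's weights with particle swarm optimization, can be sketched on a toy task. Everything below (network size, the XOR stand-in task, PSO constants) is an illustrative assumption, not the authors' improved algorithm or their data-glove features:

```python
import math
import random

def sigmoid(s):
    # Numerically stable logistic function.
    if s >= 0:
        return 1.0 / (1.0 + math.exp(-s))
    e = math.exp(s)
    return e / (1.0 + e)

def forward(w, x):
    # Tiny 2-2-1 network; w packs two hidden units (2 weights + bias each)
    # followed by the output unit's 2 weights + bias.
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def loss(w):
    # Mean squared error over the toy training set.
    return sum((forward(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

def pso(dim=9, particles=30, iters=200, seed=1):
    # Canonical PSO: inertia plus attraction to personal and global bests,
    # searching weight space directly instead of following backprop gradients.
    rng = random.Random(seed)
    pos = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [loss(p) for p in pos]
    gi = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                v = (0.7 * vel[i][d]
                     + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                     + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-4.0, min(4.0, v))  # clamp to keep the swarm finite
                pos[i][d] += vel[i][d]
            f = loss(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best_w, best_loss = pso()  # best_loss only ever improves on the initial swarm best
```

In the paper's setting the inputs would be data-glove sensor readings and the output a gesture class; the "improved" PSO variant would replace the canonical update above.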
Tran, Ngoc Hieu; Tantidhnazet, Syrina; Raocharernporn, Somchart; Kiattavornchareon, Sirichai; Pairuchvej, Verasak; Wongsirichat, Natthamet
2018-01-01
Background The benefit of computer-assisted planning in orthognathic surgery (OGS) has been extensively documented over the last decade. This study aimed to evaluate the accuracy of three-dimensional (3D) virtual planning in surgery-first OGS. Methods Fifteen patients with skeletal class III malocclusion who underwent bimaxillary OGS with surgery-first approach were included. A composite skull model was reconstructed using data from cone-beam computed tomography and stereolithography from a scanned dental cast. Surgical procedures were simulated using Simplant O&O software, and the virtual plan was transferred to the operation room using 3D-printed splints. Differences of the 3D measurements between the virtual plan and postoperative results were evaluated, and the accuracy was reported using root mean square deviation (RMSD) and the Bland-Altman method. Results The virtual planning was successfully transferred to surgery. The overall mean linear difference was 0.88 mm (0.79 mm for the maxilla and 1 mm for the mandible), and the overall mean angular difference was 1.16°. The RMSD ranged from 0.86 to 1.46 mm and 1.27° to 1.45°, within the acceptable clinical criteria. Conclusion In this study, virtual surgical planning and 3D-printed surgical splints facilitated the diagnosis and treatment planning, and offered an accurate outcome in surgery-first OGS. PMID:29581806
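The RMSD the study reports can be computed directly from paired landmark coordinates. A minimal sketch with made-up planned and postoperative positions (mm), not the study's data:

```python
import math

# Hypothetical 3D landmark coordinates (mm): virtual plan vs. postoperative CT.
planned  = [(10.0, 4.2, -3.1), (12.5, 6.0, -2.8), (9.8, 5.5, -4.0)]
achieved = [(10.6, 4.0, -3.5), (13.1, 6.4, -2.2), (10.5, 5.1, -4.6)]

def rmsd(a, b):
    """Root mean square deviation over paired 3D points."""
    sq = [sum((p - q) ** 2 for p, q in zip(pa, pb)) for pa, pb in zip(a, b)]
    return math.sqrt(sum(sq) / len(sq))

overall = rmsd(planned, achieved)   # about 0.9 mm for these invented points
```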
ERIC Educational Resources Information Center
Yeung, Yau-Yuen
2004-01-01
This paper presentation will report on how some science educators at the Science Department of The Hong Kong Institute of Education have successfully employed an array of innovative learning media such as three-dimensional (3D) and virtual reality (VR) technologies to create seven sets of resource kits, most of which are being placed on the…
ERIC Educational Resources Information Center
Bakas, Christos; Mikropoulos, Tassos A.
2003-01-01
Explains the design and development of an educational virtual environment to support the teaching of planetary phenomena, particularly the movements of Earth and the sun, day and night cycle, and change of seasons. Uses an interactive, three-dimensional (3D) virtual environment. Initial results show that the majority of students enthused about…
L2 Immersion in 3D Virtual Worlds: The Next Thing to Being There?
ERIC Educational Resources Information Center
Paillat, Edith
2014-01-01
Second Life is one of the many three-dimensional virtual environments accessible through a computer and a fast broadband connection. Thousands of participants connect to this platform to interact virtually with the world, join international communities of practice and, for some, role-play groups. Unlike online role-play games, however, Second Life…
Teaching 21st-Century Art Education in a "Virtual" Age: Art Cafe at Second Life
ERIC Educational Resources Information Center
Lu, Lilly
2010-01-01
The emerging three-dimensional (3D) virtual world (VW) technology offers great potential for teaching contemporary digital art and growing digital visual culture in 21st-century art education. Such online virtual worlds are built and conceptualized based on information visualization and visual metaphors. Recently, an increasing number of…
ERIC Educational Resources Information Center
Lawless-Reljic, Sabine Karine
2010-01-01
Growing interest of educational institutions in desktop 3D graphic virtual environments for hybrid and distance education prompts questions on the efficacy of such tools. Virtual worlds, such as Second Life[R], enable computer-mediated immersion and interactions encompassing multimodal communication channels including audio, video, and text.…
Oshiro, Yukio; Ohkohchi, Nobuhiro
2017-06-01
To perform accurate hepatectomy without injury, it is necessary to understand the anatomical relationships among the branches of Glisson's sheath, the hepatic veins, and the tumor. In Japan, three-dimensional (3D) preoperative simulation for liver surgery is becoming increasingly common, and liver 3D modeling and 3D hepatectomy simulation with 3D analysis software for liver surgery have been covered by universal healthcare insurance since 2012. Herein, we review the history of virtual hepatectomy using computer-assisted surgery (CAS) and our research to date, and we discuss the future prospects of CAS. We have used the SYNAPSE VINCENT medical imaging system (Fujifilm Medical, Tokyo, Japan) for 3D visualization and virtual resection of the liver since 2010. We developed a novel fusion imaging technique combining 3D computed tomography (CT) with magnetic resonance imaging (MRI). The fusion image enables us to easily visualize the anatomic relationships among the hepatic arteries, portal veins, bile duct, and tumor in the hepatic hilum. In 2013, we developed an original software package, called Liversim, which enables real-time deformation of the liver using physical simulation, and a randomized controlled trial has recently been conducted to evaluate the use of Liversim and SYNAPSE VINCENT for preoperative simulation and planning. Furthermore, we developed a novel hollow 3D-printed liver model whose surface is covered with frames. This model is useful for safe liver resection, offers better visibility, and costs one-third as much to produce as a previous model. Preoperative simulation and navigation with CAS in liver resection are expected to aid in planning and conducting surgery, as well as in surgical education. Thus, a novel CAS system will contribute not only to the performance of reliable hepatectomy but also to surgical education.
Kiraly, Laszlo
2018-04-01
Three-dimensional (3D) modelling and printing methods greatly support advances in individualized medicine and surgery. In pediatric and congenital cardiac surgery, personalized imaging and 3D modelling present a range of advantages, e.g., better understanding of complex anatomy, interactivity and a hands-on approach, the possibility of preoperative surgical planning and virtual surgery, the ability to assess expected results, and improved communication within the multidisciplinary team and with patients. 3D virtual and printed models often add important new anatomical findings and prompt alternative operative scenarios. For lack of a critical mass of evidence from controlled randomized trials, however, most of these general benefits remain anecdotal. For an individual surgical case-scenario, prior knowledge, preparedness, and the possibility of emulation are indispensable in raising patient safety. It is advocated that the added value of 3D printing in healthcare could be raised by the establishment of a multidisciplinary centre of excellence (COE). Policymakers, research scientists, clinicians, as well as health care financers and local entrepreneurs should cooperate and communicate within a legal framework and established scientific guidelines for the clinical benefit of patients, and towards financial sustainability. It is expected that besides the proven utility of 3D-printed patient-specific anatomical models, 3D printing will have a major role in pediatric and congenital cardiac surgery by providing individually customized implants and prostheses, especially in combination with evolving techniques of bioprinting.
Transforming Clinical Imaging Data for Virtual Reality Learning Objects
ERIC Educational Resources Information Center
Trelease, Robert B.; Rosset, Antoine
2008-01-01
Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…
Evaluating the Usability of Pinchigator, a system for Navigating Virtual Worlds using Pinch Gloves
NASA Technical Reports Server (NTRS)
Hamilton, George S.; Brookman, Stephen; Dumas, Joseph D. II; Tilghman, Neal
2003-01-01
Appropriate design of two-dimensional user interfaces (2D U/I) utilizing the well-known WIMP (Window, Icon, Menu, Pointing device) environment for computer software is well studied, and guidance can be found in several standards. Three-dimensional U/I design is far less mature than 2D U/I design, and standards bodies have not reached consensus on what makes a usable interface. This is especially true when the tools for interacting with the virtual environment may include stereo viewing, real-time trackers, and pinch gloves instead of just a mouse and keyboard. Over the last several years the authors have created a 3D U/I system dubbed Pinchigator for navigating virtual worlds, based on the dVise dV/Mockup visualization software, Fakespace Pinch Gloves, and Polhemus trackers. The current work is to test the usability of the system on several virtual worlds, suggest improvements to increase Pinchigator's usability, and then to generalize about what was learned and how those lessons might be applied to improve other 3D U/I systems.
USDA-ARS?s Scientific Manuscript database
The eButton takes frontal images at 4 second intervals throughout the day. A three-dimensional (3D) manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional (2D) images. This paper reports a test of the interrater reliability and validity of use...
ERIC Educational Resources Information Center
Keehner, Madeleine; Hegarty, Mary; Cohen, Cheryl; Khooshabeh, Peter; Montello, Daniel R.
2008-01-01
Three experiments examined the effects of interactive visualizations and spatial abilities on a task requiring participants to infer and draw cross sections of a three-dimensional (3D) object. The experiments manipulated whether participants could interactively control a virtual 3D visualization of the object while performing the task, and…
Development of an interactive anatomical three-dimensional eye model.
Allen, Lauren K; Bhattacharyya, Siddhartha; Wilson, Timothy D
2015-01-01
The discrete anatomy of the eye's intricate oculomotor system is conceptually difficult for novice students to grasp. This is problematic given that this group of muscles represents one of the most common sites of clinical intervention in the treatment of ocular motility disorders and other eye disorders. This project was designed to develop a digital, interactive, three-dimensional (3D) model of the muscles and cranial nerves of the oculomotor system. Development of the 3D model utilized data from the Visible Human Project (VHP) dataset that was refined using multiple forms of 3D software. The model was then paired with a virtual user interface in order to create a novel 3D learning tool for the human oculomotor system. Development of the virtual eye model was done while attempting to adhere to the principles of cognitive load theory (CLT) and the reduction of extraneous load in particular. The detailed approach, digital tools employed, and the CLT guidelines are described herein. © 2014 American Association of Anatomists.
NASA Astrophysics Data System (ADS)
Komosinski, Maciej; Ulatowski, Szymon
Life is one of the most complex phenomena known in our world. Researchers construct various models of life that serve diverse purposes and are applied in a wide range of areas — from medicine to entertainment. A part of artificial life research focuses on designing three-dimensional (3D) models of life-forms, which are obviously appealing to observers because the world we live in is three dimensional. Thus, we can easily understand behaviors demonstrated by virtual individuals, study behavioral changes during simulated evolution, analyze dependencies between groups of creatures, and so forth. However, 3D models of life-forms are not only attractive because of their resemblance to the real-world organisms. Simulating 3D agents has practical implications: If the simulation is accurate enough, then real robots can be built based on the simulation, as in [22]. Agents can be designed, tested, and optimized in a virtual environment, and the best ones can be constructed as real robots with embedded control systems. This way artificial intelligence algorithms can be “embodied” in the 3D mechanical constructs.
Techtalk: "Second Life" and Developmental Education
ERIC Educational Resources Information Center
Burgess, Melissa L.; Caverly, David C.
2009-01-01
In our previous two columns, we discussed the potential for using blogs and wikis with developmental education (DE) students. Another Web 2.0 technology, virtual environments like "Second Life", provides a virtual world where residents create avatars (three-dimensional [3-D] self-representations) and navigate around an online environment (Caverly,…
The Arts 3D VLE Metaverse as a Network of Imagination
ERIC Educational Resources Information Center
Rauch, Ulrich; Cohodas, Marvin; Wang, Tim
2009-01-01
Ulrich Rauch, Marvin Cohodas, and Tim Wang describe the Arts Metaverse, a Croquet-based virtual learning environment under development at the University of British Columbia. The Arts Metaverse allows three-dimensional virtual reconstruction of important artifacts and sites of classical, ancient, and indigenous American art, thereby allowing…
Tondare, Vipin N; Villarrubia, John S; Vladár, András E
2017-10-01
Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.
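The geometry such stereophotogrammetry packages exploit can be stated in one line: for a eucentric tilt of ±θ between the two SEM views, a feature of height z shifts laterally by a parallax d = 2 z sin θ, so z = d / (2 sin θ). A hedged sketch with invented numbers (not values from the study):

```python
import math

def height_from_parallax(d, theta_deg):
    """Height from the lateral shift (parallax) d between images tilted
    by +/- theta about the same axis; d and z share units (e.g. nm)."""
    return d / (2 * math.sin(math.radians(theta_deg)))

# Illustrative numbers: 35 nm of parallax at a +/- 5 degree tilt.
z = height_from_parallax(35.0, 5.0)   # roughly 200.8 nm
```

Commercial packages solve a far more general bundle-adjustment problem, but this relation shows why low-contrast zones are fatal: with no matchable features, no parallax can be measured.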
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2007-01-01
Virtual Diagnostics Interface technology, or ViDI, is a suite of techniques utilizing image processing, data handling, and three-dimensional computer graphics. These techniques aid in the design, implementation, and analysis of complex aerospace experiments. LiveView3D is a software application component of ViDI used to display experimental wind tunnel data in real time within an interactive, three-dimensional virtual environment. The LiveView3D software application was under development at NASA Langley Research Center (LaRC) for nearly three years. LiveView3D was recently upgraded to perform real-time (as well as post-test) comparisons of experimental data with pre-computed Computational Fluid Dynamics (CFD) predictions. This capability was utilized to compare experimental measurements with CFD predictions of the surface pressure distribution of the NASA Ares I Crew Launch Vehicle (CLV)-like vehicle when tested in the NASA LaRC Unitary Plan Wind Tunnel (UPWT) in the December 2006 to January 2007 timeframe. The wind tunnel tests were conducted to develop a database of experimentally measured aerodynamic performance of the CLV-like configuration for validation of CFD predictive codes.
Virtual reality as a tool for improving spatial rotation among deaf and hard-of-hearing children.
Passig, D; Eden, S
2001-12-01
The aim of this study was to investigate whether the practice of rotating Virtual Reality (VR) three-dimensional (3D) objects will enhance the spatial rotation thinking of deaf and hard-of-hearing children compared to the practice of rotating two-dimensional (2D) objects. Two groups were involved in this study: an experimental group of 21 deaf and hard-of-hearing children, who played a VR 3D game, and a control group of 23 deaf and hard-of-hearing children, who played a similar 2D (not VR) game. The results clearly indicate that practicing VR 3D spatial rotations significantly improved the children's performance of spatial rotation, which enhanced their ability to perform better in other intellectual skills as well as in their sign language skills.
Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin
2015-01-01
Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to the imported images of the original sections the software generates, and allows for visualization of, images of virtual sections generated in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education of microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. © 2015 American Association of Anatomists.
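The "virtual microtome" idea, resampling a reconstructed volume along an arbitrary plane, can be sketched with a tiny synthetic volume and nearest-neighbour sampling. Real tools interpolate and work on segmented histology stacks; the volume size, the ball-shaped "specimen", and the section geometry below are all assumptions.

```python
import math

# Tiny synthetic 8x8x8 "volume": value 1 inside a centered ball, else 0,
# standing in for a stack of registered, digitized histology sections.
N = 8
def voxel(x, y, z):
    return 1.0 if (x - 3.5) ** 2 + (y - 3.5) ** 2 + (z - 3.5) ** 2 <= 9 else 0.0

def virtual_section(origin, u, v, size=8):
    """Sample the volume on the plane spanned by unit vectors u and v,
    starting at origin (nearest-neighbour lookup, clamped to the volume)."""
    img = []
    for j in range(size):
        row = []
        for i in range(size):
            p = [origin[k] + i * u[k] + j * v[k] for k in range(3)]
            q = [min(max(int(round(c)), 0), N - 1) for c in p]
            row.append(voxel(*q))
        img.append(row)
    return img

# An oblique section: the plane x = y, swept along z.
s2 = 1 / math.sqrt(2)
section = virtual_section((0.0, 0.0, 0.0), (s2, s2, 0.0), (0.0, 0.0, 1.0))
```

Because the plane's origin and spanning vectors are free parameters, the same routine yields sections "in any desired orientation," which is exactly the virtual-microtome behaviour the abstract describes.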
Novel application of three-dimensional technologies in a case of dismemberment.
Baier, Waltraud; Norman, Danielle G; Warnett, Jason M; Payne, Mark; Harrison, Nigel P; Hunt, Nicholas C A; Burnett, Brian A; Williams, Mark A
2017-01-01
This case study reports the novel application of three-dimensional technologies such as micro-CT and 3D printing to the forensic investigation of a complex case of dismemberment. Micro-CT was successfully employed to virtually align severed skeletal elements found in different locations, analyse tool marks created during the dismemberment process, and virtually dissect a charred piece of evidence. High resolution 3D prints of the burnt human bone contained within were created for physical visualisation to assist the investigation team. Micro-CT as a forensic radiological method provided vital information and the basis for visualisation both during the investigation and in the subsequent trial making it one of the first examples of such technology in a UK court. Copyright © 2016. Published by Elsevier B.V.
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and to synthesize images of the model as virtually viewed from different angles, with natural shadows to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front- and side-view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique-view image is taken by a TV camera. The feature points of the oblique-view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique-view image. The modified boundary of the personal face model is determined by using the face direction, namely the rotation angle, which is detected based on the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.
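The core of combining front- and side-view feature points is that the two orthogonal projections share the vertical coordinate: the front view supplies (x, y) and the side view supplies (z, y). A minimal sketch with invented pixel coordinates and an assumed depth reference plane (not values from the paper):

```python
# Hypothetical 2D feature-point coordinates (pixels) for the same landmarks,
# extracted from a front view (x, y) and a side view (z, y).
front = {"eye_outer": (120, 210), "nose_tip": (160, 260), "mouth_corner": (135, 310)}
side  = {"eye_outer": (300, 210), "nose_tip": (352, 260), "mouth_corner": (310, 310)}

def merge_views(front_pts, side_pts, z_origin=280):
    """Combine front (x, y) and side (z, y) projections into 3D points.
    Assumes the views are orthogonal, equally scaled, and vertically registered;
    z_origin is an assumed reference plane for depth."""
    pts3d = {}
    for name, (x, y_f) in front_pts.items():
        z, y_s = side_pts[name]
        assert y_f == y_s, "views must be vertically registered"
        pts3d[name] = (x, y_f, z - z_origin)
    return pts3d

model_pts = merge_views(front, side)
```

In the paper these recovered 3D points then drive deformation of a prepared base face model rather than being used directly.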
Three-dimensional polarization algebra for all polarization sensitive optical systems.
Li, Yahong; Fu, Yuegang; Liu, Zhiying; Zhou, Jianhong; Bryanston-Cross, P J; Li, Yan; He, Wenjun
2018-05-28
Using the three-dimensional (3D) coherency vector (9 × 1), we develop a new 3D polarization algebra to calculate the polarization properties of all polarization sensitive optical systems, especially when the incident optical field is partially polarized or unpolarized. The polarization properties of a high numerical aperture (NA) microscope objective (NA = 1.25, immersed in oil) are analyzed based on the proposed 3D polarization algebra. Correspondingly, the polarization simulation of this high-NA optical system is performed with the commercial software VirtualLab Fusion. Comparing the theoretical calculations with the polarization simulations yields an excellent match, which demonstrates that this 3D polarization algebra is valid for quantifying the 3D polarization properties of all polarization sensitive optical systems.
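The 9 × 1 coherency vector is the 3 × 3 coherency matrix Φ = ⟨E E†⟩ written out entry by entry. A small numerical sketch, with field realizations invented for illustration (a deterministic x component plus a random-phase, hence unpolarized, z component):

```python
import cmath, random

random.seed(1)

def coherency(samples):
    """Average outer product E E-dagger over field realizations (3x3 complex)."""
    n = len(samples)
    return [[sum(e[i] * e[j].conjugate() for e in samples) / n
             for j in range(3)] for i in range(3)]

# Realizations of a partially polarized 3D field: fixed Ex = 1, Ey = 0,
# and an Ez of amplitude 0.3 with uniformly random phase (illustrative only).
samples = [(1.0 + 0j, 0j, 0.3 * cmath.exp(2j * cmath.pi * random.random()))
           for _ in range(2000)]
phi = coherency(samples)
vec9 = [phi[i][j] for i in range(3) for j in range(3)]   # the 9x1 coherency vector
intensity = sum(phi[i][i].real for i in range(3))        # trace = total intensity
```

The random phase drives the off-diagonal x-z correlation toward zero while the diagonal keeps both intensities, which is how the coherency formalism separates polarized from unpolarized power.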
Rapid prototyping 3D virtual world interfaces within a virtual factory environment
NASA Technical Reports Server (NTRS)
Kosta, Charles Paul; Krolak, Patrick D.
1993-01-01
On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.
Virtual Solar System Project: Building Understanding through Model Building.
ERIC Educational Resources Information Center
Barab, Sasha A.; Hay, Kenneth E.; Barnett, Michael; Keating, Thomas
2000-01-01
Describes an introductory astronomy course for undergraduate students in which students use three-dimensional (3-D) modeling tools to model the solar system and develop rich understandings of astronomical phenomena. Indicates that 3-D modeling can be used effectively in regular undergraduate university courses as a tool to develop understandings…
From tissue to silicon to plastic: three-dimensional printing in comparative anatomy and physiology
Lauridsen, Henrik; Hansen, Kasper; Nørgård, Mathias Ørum; Wang, Tobias; Pedersen, Michael
2016-01-01
Comparative anatomy and physiology are disciplines related to structures and mechanisms in three-dimensional (3D) space. For the past centuries, scientific reports in these fields have relied on written descriptions and two-dimensional (2D) illustrations, but in recent years 3D virtual modelling has entered the scene. However, comprehending complex anatomical structures is hampered by reproduction on flat inherently 2D screens. One way to circumvent this problem is in the production of 3D-printed scale models. We have applied computed tomography and magnetic resonance imaging to produce digital models of animal anatomy well suited to be printed on low-cost 3D printers. In this communication, we report how to apply such technology in comparative anatomy and physiology to aid discovery, description, comprehension and communication, and we seek to inspire fellow researchers in these fields to embrace this emerging technology. PMID:27069653
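The pipeline the authors describe (scan, segmentation, surface mesh, print) begins with a threshold step like the following sketch. The synthetic "scan", the threshold value, and the spherical structure are all invented; each exposed voxel face counted here would become two triangles in an exported printable mesh.

```python
N = 10

def scan_value(x, y, z):
    # Synthetic scan intensity (arbitrary units): a bright sphere, the
    # structure of interest, on a dark background.
    return 800 if (x - 4.5) ** 2 + (y - 4.5) ** 2 + (z - 4.5) ** 2 <= 16 else 50

THRESH = 300   # assumed segmentation threshold
mask = [[[scan_value(x, y, z) > THRESH for z in range(N)]
         for y in range(N)] for x in range(N)]

def solid(x, y, z):
    return 0 <= x < N and 0 <= y < N and 0 <= z < N and mask[x][y][z]

voxels = sum(mask[x][y][z] for x in range(N) for y in range(N) for z in range(N))
faces = 0   # exposed voxel faces -> candidate mesh triangles (2 per face)
for x in range(N):
    for y in range(N):
        for z in range(N):
            if mask[x][y][z]:
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    if not solid(x + dx, y + dy, z + dz):
                        faces += 1
```

Production workflows smooth this blocky surface (e.g. with a marching-cubes isosurface) before slicing for the printer; the voxel-face count above is only the crudest possible surface extraction.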
Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.
Villarrubia, J S; Tondare, V N; Vladár, A E
2016-01-01
The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within close to 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
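A "rough skin with any desired power spectral density" can be synthesized spectrally: superpose Fourier modes whose amplitudes follow the target PSD and whose phases are random. A 1D sketch with an assumed 1/f² PSD (the paper wraps such a skin around a 3D line; the length, mode count, and PSD here are illustrative):

```python
import math, random

random.seed(3)

# Spectral synthesis of a 1D rough profile: cosines with amplitude ~ 1/m
# (i.e. power ~ 1/m^2, a 1/f^2 PSD) and uniformly random phases.
L, M = 256, 40          # profile length (samples) and number of Fourier modes
profile = [0.0] * L
for m in range(1, M + 1):
    amp = 1.0 / m                       # amplitude = sqrt(target PSD)
    phase = 2 * math.pi * random.random()
    for x in range(L):
        profile[x] += amp * math.cos(2 * math.pi * m * x / L + phase)

mean = sum(profile) / L                 # zero up to rounding error
rms = math.sqrt(sum((h - mean) ** 2 for h in profile) / L)
```

Because the modes are orthogonal over the full profile, the RMS roughness is fixed by the chosen PSD alone (here about 0.90) regardless of the random phases, which is what makes such skins reproducible test surfaces.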
The Use of 3D Virtual Learning Environments in Training Foreign Language Pre-Service Teachers
ERIC Educational Resources Information Center
Can, Tuncer; Simsek, Irfan
2015-01-01
The recent developments in computer and Internet technologies and in three-dimensional modelling necessitate new approaches and methods in the education field and bring new opportunities to higher education. The Internet and virtual learning environments have changed learning opportunities by diversifying the learning options not…
ERIC Educational Resources Information Center
Pellas, Nikolaos
2014-01-01
Nowadays, three-dimensional (3D) multi-user virtual worlds are widely disseminated and exploited in higher education, reflecting their widespread acceptance as candidate learning platforms. However, a theoretical cybernetic macro-script to coordinate the multiple complex interactions among…
Making Web3D Less Scary: Toward Easy-to-Use Web3D e-Learning Content Development Tools for Educators
ERIC Educational Resources Information Center
de Byl, Penny
2009-01-01
Penny de Byl argues that one of the biggest challenges facing educators today is the integration of rich and immersive three-dimensional environments with existing teaching and learning materials. To empower educators with the ability to embrace emerging Web3D technologies, the Advanced Learning and Immersive Virtual Environment (ALIVE) research…
Three-dimensional (3D) printing and its applications for aortic diseases.
Hangge, Patrick; Pershad, Yash; Witting, Avery A; Albadawi, Hassan; Oklu, Rahmi
2018-04-01
Three-dimensional (3D) printing is a process which generates prototypes from virtual objects in computer-aided design (CAD) software. Since 3D printing enables the creation of customized objects, it is a rapidly expanding field in an age of personalized medicine. We discuss the use of 3D printing in surgical planning, training, and creation of devices for the treatment of aortic diseases. 3D printing can provide operators with a hands-on model to interact with complex anatomy, enable prototyping of devices for implantation based upon anatomy, or even provide pre-procedural simulation. Potential exists to expand upon current uses of 3D printing to create personalized implantable devices such as grafts. Future studies should aim to demonstrate the impact of 3D printing on outcomes to make this technology more accessible to patients with complex aortic diseases.
Dixon, Benjamin J; Chan, Harley; Daly, Michael J; Qiu, Jimmy; Vescan, Allan; Witterick, Ian J; Irish, Jonathan C
2016-07-01
Providing image guidance in a 3-dimensional (3D) format, visually more in keeping with the operative field, could potentially reduce workload and lead to faster and more accurate navigation. We wished to assess a 3D virtual-view surgical navigation prototype in comparison to a traditional 2D system. Thirty-seven otolaryngology surgeons and trainees completed a randomized crossover navigation exercise on a cadaver model. Each subject identified three sinonasal landmarks with 3D virtual (3DV) image guidance and three landmarks with conventional cross-sectional computed tomography (CT) image guidance. Subjects were randomized with regard to which side and display type was tested initially. Accuracy, task completion time, and task workload were recorded. Display type did not influence accuracy (P > 0.2) or efficiency (P > 0.3) for any of the six landmarks investigated. Pooled landmark data revealed a trend of improved accuracy in the 3DV group by 0.44 millimeters (95% confidence interval [0.00-0.88]). High-volume surgeons were significantly faster (P < 0.01) and had reduced workload scores in all domains (P < 0.01), but they were no more accurate (P > 0.28). Real-time 3D image guidance did not influence accuracy, efficiency, or task workload when compared to conventional triplanar image guidance. The subtle pooled accuracy advantage for the 3DV view is unlikely to be of clinical significance. Experience level was strongly correlated to task completion time and workload but did not influence accuracy. N/A. Laryngoscope, 126:1510-1515, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Improving flexible thinking in deaf and hard of hearing children with virtual reality technology.
Passig, D; Eden, S
2000-07-01
The study investigated whether rotating three-dimensional (3-D) objects using virtual reality (VR) will affect flexible thinking in deaf and hard of hearing children. Deaf and hard of hearing subjects were distributed into experimental and control groups. The experimental group played virtual 3-D Tetris (a game using VR technology) individually, 15 minutes once weekly over 3 months. The control group played conventional two-dimensional (2-D) Tetris over the same period. Children with normal hearing participated as a second control group in order to establish whether deaf and hard of hearing children really are disadvantaged in flexible thinking. Before-and-after testing showed significantly improved flexible thinking in the experimental group; the deaf and hard of hearing control group showed no significant improvement. Also, before the experiment, the deaf and hard of hearing children scored lower in flexible thinking than the children with normal hearing. After the experiment, the difference between the experimental group and the control group of children with normal hearing was smaller.
ERIC Educational Resources Information Center
Gregory, Sue; Scutter, Sheila; Jacka, Lisa; McDonald, Marcus; Farley, Helen; Newman, Chris
2015-01-01
Three-dimensional (3D) virtual worlds have been used for more than a decade in higher education for teaching and learning. Since the 1980s, academics began using virtual worlds as an exciting and innovative new technology to provide their students with new learning experiences that were difficult to provide any other way. But since that time,…
Design and application of BIM based digital sand table for construction management
NASA Astrophysics Data System (ADS)
Fuquan, JI; Jianqiang, LI; Weijia, LIU
2018-05-01
This paper explores the design and application of a BIM-based digital sand table for construction management. Given the demands and features of construction management planning for bridge and tunnel engineering, the key functions of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These functions draw on BIM technologies for 3D visualization and 4D virtual simulation, breakdown structures for BIM models and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual, virtual terminal that integrates engineering information under a unified data standard system. Its applications include visualizing construction schemes, virtually simulating construction schedules, and monitoring construction. Finally, the applicability of several basic software packages to the digital sand table is analyzed.
Realistic terrain visualization based on 3D virtual world technology
NASA Astrophysics Data System (ADS)
Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai
2009-09-01
The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. To achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on Second Life (SL)/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on the foundation of realistic terrain visualization in virtual environments.
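The abstract does not give implementation details, but a common first step in terrain visualization of this kind is turning a regular height grid into a triangle mesh. A minimal sketch (the function name and toy grid are illustrative, not from the paper):

```python
def heightmap_to_mesh(heights, cell=1.0):
    """Turn a regular 2D height grid into mesh vertices and triangle
    indices, two triangles per grid cell."""
    rows, cols = len(heights), len(heights[0])
    verts = [(x * cell, y * cell, heights[y][x])
             for y in range(rows) for x in range(cols)]
    tris = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x
            tris.append((i, i + 1, i + cols))             # first triangle
            tris.append((i + 1, i + cols + 1, i + cols))  # second triangle
    return verts, tris

# A 2x2 grid yields 4 vertices and one cell, i.e. 2 triangles.
verts, tris = heightmap_to_mesh([[0, 1], [2, 3]])
```

Virtual-world platforms such as OpenSim consume terrain in essentially this vertex/index form, with the height grid typically sampled from elevation data.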
Im, Joon; Kang, Sang Hoon; Lee, Ji Yeon; Kim, Moon Key
2014-01-01
A 19-year-old woman presented to our dental clinic with anterior crossbite and mandibular prognathism. She had a concave profile, long face, and Angle Class III molar relationship. She showed disharmony in the crowding of the maxillomandibular dentition and midline deviation. The diagnosis and treatment plan were established by a three-dimensional (3D) virtual setup and 3D surgical simulation, and a surgical wafer was produced using the stereolithography technique. No presurgical orthodontic treatment was performed. Using the surgery-first approach, Le Fort I maxillary osteotomy and mandibular bilateral intraoral vertical ramus osteotomy setback were carried out. Treatment was completed with postorthodontic treatment. Thus, symmetrical and balanced facial soft tissue and facial form as well as stabilized and well-balanced occlusion were achieved. PMID:25473649
Improved Virtual Planning for Bimaxillary Orthognathic Surgery.
Hatamleh, Muhanad; Turner, Catherine; Bhamrah, Gurprit; Mack, Gavin; Osher, Jonas
2016-09-01
Conventional model surgery planning for bimaxillary orthognathic surgery can be laborious and time-consuming and may contain potential errors; three-dimensional (3D) virtual orthognathic planning has proven to be an efficient, reliable, and cost-effective alternative. In this report, 3D planning is described for a patient presenting with a Class III incisor relationship on a Skeletal III base with pan-facial asymmetry complicated by reverse overjet and anterior open bite. Combined scan data from direct cone-beam computed tomography and an indirect dental scan were used in the planning. Additionally, a new method of establishing optimum intercuspation, by scanning the dental casts in final occlusion and positioning them in the composite-scan model, is shown. Furthermore, conventional model surgery planning was carried out following an in-house protocol. Intermediate and final intermaxillary splints were produced by both the conventional method and 3D printing. Three-dimensional planning showed great accuracy, a good treatment outcome, and reduced laboratory time in comparison with the conventional method. Establishing the final dental occlusion on casts and integrating it into the final 3D planning enabled us to achieve the best possible intercuspation.
Hung, Chun-Chi; Li, Yuan-Ta; Chou, Yu-Ching; Chen, Jia-En; Wu, Chia-Chun; Shen, Hsain-Chung; Yeh, Tsu-Te
2018-05-03
Treating pelvic fractures remains a challenging task for orthopaedic surgeons. We aimed to evaluate the feasibility, accuracy, and effectiveness of three-dimensional (3D) printing technology and computer-assisted virtual surgery for pre-operative planning in anterior ring fractures of the pelvis. We hypothesized that using 3D printing models would reduce operation time and significantly improve the surgical outcomes of pelvic fracture repair. We retrospectively reviewed the records of 30 patients with pelvic fractures treated by anterior pelvic fixation with locking plates (14 patients, conventional locking plate fixation; 16 patients, pre-operative virtual simulation with 3D, printing-assisted, pre-contoured, locking plate fixation). We compared operative time, instrumentation time, blood loss, and post-surgical residual displacements, as evaluated on X-ray films, among groups. Statistical analyses evaluated significant differences between the groups for each of these variables. The patients treated with the virtual simulation and 3D printing-assisted technique had significantly shorter internal fixation times, shorter surgery duration, and less blood loss (- 57 minutes, - 70 minutes, and - 274 ml, respectively; P < 0.05) than patients in the conventional surgery group. However, the post-operative radiological result was similar between groups (P > 0.05). The complication rate was less in the 3D printing group (1/16 patients) than in the conventional surgery group (3/14 patients). The 3D simulation and printing technique is an effective and reliable method for treating anterior pelvic ring fractures. With precise pre-operative planning and accurate execution of the procedures, this time-saving approach can provide a more personalized treatment plan, allowing for a safer orthopaedic surgery.
Applicability of three-dimensional imaging techniques in fetal medicine
Werner Júnior, Heron; dos Santos, Jorge Lopes; Belmonte, Simone; Ribeiro, Gerson; Daltro, Pedro; Gasparetto, Emerson Leandro; Marchiori, Edson
2016-01-01
Objective To generate physical models of fetuses from images obtained with three-dimensional ultrasound (3D-US), magnetic resonance imaging (MRI), and, occasionally, computed tomography (CT), in order to guide additive manufacturing technology. Materials and Methods We used 3D-US images of 31 pregnant women, including 5 who were carrying twins. If abnormalities were detected by 3D-US, both MRI and in some cases CT scans were then immediately performed. The images were then exported to a workstation in DICOM format. A single observer performed slice-by-slice manual segmentation using a digital high resolution screen. Virtual 3D models were obtained from software that converts medical images into numerical models. Those models were then generated in physical form through the use of additive manufacturing techniques. Results Physical models based upon 3D-US, MRI, and CT images were successfully generated. The postnatal appearance of either the aborted fetus or the neonate closely resembled the physical models, particularly in cases of malformations. Conclusion The combined use of 3D-US, MRI, and CT could help improve our understanding of fetal anatomy. These three screening modalities can be used for educational purposes and as tools to enable parents to visualize their unborn baby. The images can be segmented and then applied, separately or jointly, in order to construct virtual and physical 3D models. PMID:27818540
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as a likely vehicle for 3D television because it avoids the adverse psychological effects associated with stereoscopic viewing. To create compelling three-dimensional television programmes, a virtual studio is required that performs the tasks of generating, editing, and integrating 3D content involving both virtual and real scenes. The paper presents, for the first time, the procedures, factors, and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion focuses on depth extraction from captured integral 3D images. The method of calculating depth from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.
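The depth-from-disparity idea above can be sketched with a toy one-dimensional SSD block match; the patch, row, and camera constants below are invented and greatly simplify the paper's multiple-baseline, colour-SSD method:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length pixel runs."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_disparity(left_patch, right_row):
    """Offset in the right row that minimises the SSD against a patch
    taken from position 0 of the left row."""
    w = len(left_patch)
    return min(range(len(right_row) - w + 1),
               key=lambda d: ssd(left_patch, right_row[d:d + w]))

# Toy 1D rows: the patch reappears shifted by 2 pixels in the right row.
left_patch = [10, 20, 80]
right_row = [5, 7, 10, 20, 80, 90]
d = match_disparity(left_patch, right_row)
depth = (8.0 * 3.0) / d   # depth = focal_length * baseline / disparity
```

The multiple-baseline refinement mentioned in the abstract sums such SSD scores over several elemental-image pairs with different baselines before taking the minimum, which suppresses false matches.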
Psychophysical evaluation of three-dimensional auditory displays
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.
1991-01-01
Work during this reporting period included the completion of our research on the use of principal components analysis (PCA) to model the acoustical head-related transfer functions (HRTFs) that are used to synthesize virtual sources for three-dimensional auditory displays. In addition, a series of studies was initiated on the perceptual errors made by listeners when localizing free-field and virtual sources. Previous research has revealed that under certain conditions these perceptual errors, often called 'confusions' or 'reversals', are both large and frequent, thus seriously compromising the utility of a 3-D virtual auditory display. The long-range goal of our work in this area is to elucidate the sources of the confusions and to develop signal-processing strategies to reduce or eliminate them.
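As a rough sketch of the PCA modelling idea (random stand-in data, not real HRTF measurements, and not the report's actual pipeline), one can extract a small basis from a matrix of magnitude spectra and represent each source direction by a few weights:

```python
import numpy as np

def pca_basis(X, k):
    """First k principal components of the rows of X
    (one row per source direction, one column per frequency bin)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)    # covariance across frequency bins
    vals, vecs = np.linalg.eigh(cov)  # ascending eigenvalues
    order = np.argsort(vals)[::-1]    # largest-variance components first
    return vecs[:, order[:k]]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))   # stand-in for 50 HRTF magnitude spectra
W = pca_basis(X, 3)            # 8-bin spectra compressed to 3 weights each
weights = (X - X.mean(axis=0)) @ W
recon = X.mean(axis=0) + weights @ W.T   # approximate reconstruction
```

Storing a few weights per direction instead of full filters is what makes such models attractive for interpolating HRTFs between measured directions.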
NASA Astrophysics Data System (ADS)
Damayanti, Latifah Adelina; Ikhsan, Jaslin
2017-05-01
Information technology is increasingly being integrated into educational media. Three-dimensional (3D) molecular modeling implemented in augmented reality is a tangible example of this increasingly modern use of technology: based on augmented reality, a three-dimensional virtual object is projected in real time onto the real environment. This paper reviews a supplementary chemistry textbook on aldehydes and ketones equipped with 3D molecular models that students can inspect from various viewpoints. To display the 3D illustrations printed in the book, smartphones running open-source augmented-reality software can be used. The aims of this research were to develop the monograph on aldehydes and ketones with 3D illustrations, to determine the specifications of the monograph, and to determine its quality. The quality of the monograph was evaluated by experienced chemistry teachers on five aspects: content/materials, presentation, language and images, graphics, and software engineering. The evaluation found the book to be of very good quality for use as a supplementary chemistry text.
Stereoscopic neuroanatomy lectures using a three-dimensional virtual reality environment.
Kockro, Ralf A; Amaxopoulou, Christina; Killeen, Tim; Wagner, Wolfgang; Reisch, Robert; Schwandt, Eike; Gutenberg, Angelika; Giese, Alf; Stofft, Eckart; Stadie, Axel T
2015-09-01
Three-dimensional (3D) computer graphics are increasingly used to supplement the teaching of anatomy. While most systems consist of a program which produces 3D renderings on a workstation with a standard screen, the DextroBeam virtual reality (VR) environment allows the presentation of spatial neuroanatomical models to larger groups of students through a stereoscopic projection system. Second-year medical students (n=169) were randomly allocated to receive a standardised pre-recorded audio lecture detailing the anatomy of the third ventricle accompanied by either a two-dimensional (2D) PowerPoint presentation (n=80) or a 3D animated tour of the third ventricle with the DextroBeam. Students completed a 10-question multiple-choice exam based on the content learned and a subjective evaluation of the teaching method immediately after the lecture. Students in the 2D group achieved a mean score of 5.19 (±2.12) compared to 5.45 (±2.16) in the 3D group, with the results in the 3D group statistically non-inferior to those of the 2D group (p<0.0001). The students rated the 3D method superior to 2D teaching in four domains (spatial understanding, application in future anatomy classes, effectiveness, enjoyableness) (p<0.01). Stereoscopically enhanced 3D lectures are a valid method of imparting neuroanatomical knowledge and are well received by students. More research is required to define and develop the role of large-group VR systems in modern neuroanatomy curricula. Copyright © 2015 Elsevier GmbH. All rights reserved.
Galantucci, Luigi Maria; Percoco, Gianluca; Lavecchia, Fulvio; Di Gioia, Eliana
2013-05-01
The article describes a new methodology to scan and integrate the facial soft-tissue surface with dental hard-tissue models in a three-dimensional (3D) virtual environment, for a novel diagnostic approach. The facial and dental scans can be acquired using any optical scanning system: the models are then aligned and integrated to obtain a fully navigable virtual representation of the patient's head. In this article, we report in detail, and further implement, a method for integrating 3D digital cast models into a 3D facial image to visualize the anatomic position of the dentition. This system uses several 3D technologies to scan and digitize, integrating them with traditional dentistry records. The acquisitions were mainly performed using photogrammetric scanners, suitable for clinics or hospitals, able to obtain high mesh resolution and optimal surface texture for photorealistic rendering of the face. To increase the quality and resolution of the photogrammetric scan of the dental elements, the authors propose a new technique to enhance the texture of the dental surface. Three examples of the application of the proposed procedure are reported, first using laser scanning and photogrammetry and then photogrammetry alone. Using cheek retractors, it is possible to directly scan a large number of dental elements. The final results are navigable 3D models that integrate facial soft tissue and dental hard tissue. The method is characterized by the complete absence of ionizing radiation, portability and simplicity, fast acquisition, easy alignment of the 3D models, and a wide scanner viewing angle. It is completely noninvasive and can be repeated any time the physician needs new clinical records. The 3D virtual model is a precise representation of both the soft and the hard tissue scanned, and any dimensional measurement can be made directly in virtual space, for fully integrated 3D anthropometry and cephalometry. Moreover, the authors propose a method based entirely on close-range photogrammetric scanning, able to detect facial and dental surfaces while reducing the time, complexity, and cost of the scanning operations and numerical processing.
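The alignment step, registering the dental model into the facial scan, is typically a rigid least-squares fit. A minimal sketch using the Kabsch algorithm (the point sets here are synthetic; the article does not specify which registration algorithm its software uses):

```python
import numpy as np

def kabsch_align(P, Q):
    """Least-squares rigid transform (rotation R, translation t)
    mapping point set P onto point set Q (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)        # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, sign])    # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic "dental" points and the same points rotated 90° and shifted.
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = kabsch_align(P, Q)
```

In practice the corresponding points come from shared landmarks or an ICP-style nearest-neighbour loop, with this closed-form fit at its core.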
3D gaze tracking system for NVidia 3D Vision®.
Wibirama, Sunu; Hamamoto, Kazuhiko
2013-01-01
Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how users gaze in 3D directions in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for NVidia 3D Vision(®) for use with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
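The conventional geometric method the paper compares against estimates the 3D gaze point from the two eye rays; a standard formulation takes the midpoint of the shortest segment between the rays. A sketch with invented eye positions (this illustrates the conventional approach, not the paper's optimized method):

```python
import numpy as np

def gaze_point_3d(p_l, d_l, p_r, d_r):
    """Estimate the 3D fixation point as the midpoint of the shortest
    segment between the left and right gaze rays."""
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w = p_l - p_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b        # zero only for parallel rays
    s = (b * e - c * d) / denom  # parameter along the left ray
    t = (a * e - b * d) / denom  # parameter along the right ray
    return ((p_l + s * d_l) + (p_r + t * d_r)) / 2.0

# Invented geometry: eyes 6 cm apart, both fixating a point 60 cm ahead.
p_l, d_l = np.array([-3.0, 0.0, 0.0]), np.array([3.0, 0.0, 60.0])
p_r, d_r = np.array([3.0, 0.0, 0.0]), np.array([-3.0, 0.0, 60.0])
g = gaze_point_3d(p_l, d_l, p_r, d_r)
```

Because measured gaze rays rarely intersect exactly, the midpoint construction is preferred over a naive intersection; the Z error it yields under noise is typically the largest, consistent with the error figures above.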
Woodbury, M A; Woodbury, M F
1998-01-01
Our 3-D body representation, constructed during development by our central nervous system under the direction of our DNA, consists of a holographic representation arising from sensory input in the cerebellum and projected extraneurally in the brain's ventricular fluid, which has the chemical structure of a liquid crystal. The structure of the 3-D holographic body representation is then extrapolated by such cognitive instruments as boundarization, geometrization, and gestalt organization onto the external environment, which is consequently perceived as three-dimensional. When the body representation collapses, as in psychotic panic states, patients become terrified as they suddenly lose the perception of themselves and the world around them as three-dimensional and solid in a reliably solid environment, and feel that they are no longer a person but a disorganized blob. In our clinical practice we found serendipitously that the structure of three-dimensionality can be restored, even without medication, by techniques involving stimulation of the body's sensory system in the presence of a benevolent psychotherapist. Implications for virtual reality will be discussed.
Olszewski, R; Tranduy, K; Reychler, H
2010-07-01
The authors present a new procedure of computer-assisted genioplasty. They determined the anterior, posterior, and inferior limits of the chin in relation to the skull and face with the newly developed and validated three-dimensional cephalometric planar analysis (ACRO 3D). Virtual planning of the osteotomy lines was carried out with Mimics (Materialise) software. The authors built a three-dimensional rapid-prototyping (3D RP) multi-position model of the chin area from a medical low-dose CT scan. The transfer of virtual information to the operating room consisted of two elements. First, the titanium plates were pre-bent on the 3D RP model. Second, a surgical guide was manufactured for transferring the osteotomy lines and the screw positions to the operating room. The authors present the first use of this model on a patient. The postoperative results are promising, and the technique is fast and easy to use. More patients are needed for a definitive clinical validation of this procedure. Copyright 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Three-dimensional (3D) printing and its applications for aortic diseases
Hangge, Patrick; Pershad, Yash; Witting, Avery A.; Albadawi, Hassan
2018-01-01
Three-dimensional (3D) printing is a process which generates prototypes from virtual objects in computer-aided design (CAD) software. Since 3D printing enables the creation of customized objects, it is a rapidly expanding field in an age of personalized medicine. We discuss the use of 3D printing in surgical planning, training, and creation of devices for the treatment of aortic diseases. 3D printing can provide operators with a hands-on model to interact with complex anatomy, enable prototyping of devices for implantation based upon anatomy, or even provide pre-procedural simulation. Potential exists to expand upon current uses of 3D printing to create personalized implantable devices such as grafts. Future studies should aim to demonstrate the impact of 3D printing on outcomes to make this technology more accessible to patients with complex aortic diseases. PMID:29850416
Infusion of a Gaming Paradigm into Computer-Aided Engineering Design Tools
2012-05-03
Virtual Test Bed (VTB), and the gaming tool, Unity3D. This hybrid gaming environment coupled a three-dimensional (3D) multibody vehicle system model... from Google Earth to the 3D visual front-end fabricated around Unity3D. The hybrid environment was sufficiently developed to support analyses of the... The VTB simulation of the vehicle dynamics ran concurrently with, and interacted with, the gaming engine, Unity3D, which...
Chin, Shih-Jan; Wilde, Frank; Neuhaus, Michael; Schramm, Alexander; Gellrich, Nils-Claudius; Rana, Majeed
2017-12-01
The benefit of computer-assisted planning in orthognathic surgery has been extensively documented over the last decade. This study aimed to evaluate the accuracy of virtual orthognathic surgical plans using a novel three-dimensional (3D) analysis method. Ten patients who required orthognathic surgery were included in this study. A virtual surgical plan was produced by combining a 3D skull model acquired from computed tomography (CT) with surface scans of the upper and lower dental arches and the final occlusal position. Osteotomies and movements of the maxilla and mandible were simulated in Dolphin Imaging 11.8 Premium® (Dolphin Imaging and Management Solutions, Chatsworth, CA). The surgical plan was transferred to surgical splints fabricated by computer-aided design/computer-aided manufacturing (CAD/CAM). Differences in three-dimensional measurements between the virtual surgical plan and the postoperative results were evaluated. The results for all parameters showed that the virtual surgical plans were successfully transferred with the assistance of the CAD/CAM-fabricated surgical splints. Wilcoxon's signed-rank test detected no statistically significant deviation between the surgical plan and the postoperative result. However, the deviations in the U1 axis-HP angle and the A-CP distance did not fulfill the clinical success criteria. Virtual surgical planning and CAD/CAM-fabricated surgical splints are proven to facilitate treatment planning and offer accurate surgical results in orthognathic surgery. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
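The Wilcoxon signed-rank test used above compares paired planned-versus-achieved measurements without assuming normality. A bare-bones version of its test statistic (the numbers are invented; real use would also need the p-value, e.g. from scipy.stats.wilcoxon):

```python
def wilcoxon_w(planned, measured):
    """Wilcoxon signed-rank statistic: rank |differences| (zeros dropped,
    ties receive average ranks) and return the smaller signed rank sum."""
    diffs = [m - p for p, m in zip(planned, measured) if m != p]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j):
            ranks[order[k]] = (i + j + 1) / 2  # average of 1-based ranks i+1..j
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical planned vs. achieved landmark values (arbitrary units).
w = wilcoxon_w([10, 20, 30, 40, 50], [11, 18, 33, 41, 46])
```

A W close to n(n+1)/4 (here 7 out of a maximum rank sum of 15) is what "no statistically significant deviation" looks like at this sample size.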
Codd, Anthony M; Choudhury, Bipasha
2011-01-01
The use of cadavers to teach anatomy is well established, but limitations of this approach have led to the introduction of alternative teaching methods. One such method is the use of three-dimensional virtual reality computer models. An interactive, three-dimensional computer model of the musculoskeletal anatomy of the anterior compartment of the human forearm was produced using the open-source 3D imaging program "Blender". The aim was to evaluate the use of 3D virtual reality compared with traditional anatomy teaching methods. Three groups were identified from the University of Manchester second-year Human Anatomy Research Skills Module class: a "control" group (no prior knowledge of forearm anatomy), a "traditional methods" group (taught using dissection and textbooks), and a "model" group (taught solely using the e-resource). The groups were assessed on the anatomy of the forearm by a ten-question practical examination. ANOVA analysis showed the model group's mean test score to be significantly higher than the control group's (mean 7.25 vs. 1.46, P < 0.001) and not significantly different from the traditional methods group's (mean 6.87, P > 0.5). Feedback from all users of the e-resource was positive. Virtual reality anatomy learning can be used to complement traditional teaching methods effectively. Copyright © 2011 American Association of Anatomists.
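The ANOVA comparison of the groups' test scores reduces to a ratio of between-group to within-group variance. A minimal one-way F statistic (the scores below are invented, not the study's data):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    scores = [x for g in groups for x in g]
    grand = sum(scores) / len(scores)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented exam scores for two groups with clearly different means.
f = one_way_anova_f([1, 2, 3], [7, 8, 9])
```

A large F, as here, indicates group means that differ far more than chance within-group scatter would explain; the study's pairwise conclusions would additionally need post-hoc comparisons.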
Agbetoba, Abib; Luong, Amber; Siow, Jin Keat; Senior, Brent; Callejas, Claudio; Szczygielski, Kornel; Citardi, Martin J
2017-02-01
Endoscopic sinus surgery represents a cornerstone in the professional development of otorhinolaryngology trainees. Mastery of these surgical skills requires an understanding of paranasal sinus and skull-base anatomy. The frontal sinus is associated with a wide range of variation and complex anatomical configuration, and thus represents an important challenge for all trainees performing endoscopic sinus surgery. Forty-five otorhinolaryngology trainees and 20 medical school students from 5 academic institutions were enrolled and randomized into 1 of 2 groups. Each subject underwent learning of frontal recess anatomy with both traditional 2-dimensional (2D) learning methods using a standard Digital Imaging and Communications in Medicine (DICOM) viewing software (RadiAnt Dicom Viewer Version 1.9.16) and 3-dimensional (3D) learning utilizing a novel preoperative virtual planning software (Scopis Building Blocks), with one half learning with the 2D method first and the other half learning with the 3D method first. Four questionnaires that included a total of 20 items were scored for subjects' self-assessment on knowledge of frontal recess and frontal sinus drainage pathway anatomy following each learned modality. A 2-sample Wilcoxon rank-sum test was used in the statistical analysis comparing the 2 groups. Most trainees (89%) believed that the virtual 3D planning software significantly improved their understanding of the spatial orientation of the frontal sinus drainage pathway. Incorporation of virtual 3D planning surgical software may help augment trainees' understanding and spatial orientation of the frontal recess and sinus anatomy. The potential increase in trainee proficiency and comprehension theoretically may translate to improved surgical skill and patient outcomes and in reduced surgical time. © 2016 ARS-AAOA, LLC.
Learner Interaction Management in an Avatar and Chat-Based Virtual World
ERIC Educational Resources Information Center
Peterson, Mark
2006-01-01
In this paper, I report on the findings of a study that investigated non-native speaker interaction in a three dimensional (3D) virtual world that incorporates avatars and text chat known as "Active Worlds." Analysis of the chat transcripts indicated that the 24 intermediate level EFL participants were able to undertake a variety of tasks through…
NASA Technical Reports Server (NTRS)
1992-01-01
Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
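The Convolvotron's core operation, filtering each source with head-related impulse responses (HRIRs) for the two ears, can be sketched as a direct FIR convolution (the two-tap HRIRs below are toy values, not measured responses):

```python
def convolve(signal, hrir):
    """Direct-form FIR convolution of a mono signal with one ear's
    head-related impulse response (HRIR)."""
    out = [0.0] * (len(signal) + len(hrir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(hrir):
            out[i + j] += s * h
    return out

def binauralize(signal, hrir_left, hrir_right):
    """Render a mono source as a left/right headphone pair."""
    return convolve(signal, hrir_left), convolve(signal, hrir_right)

# A unit impulse through toy two-tap HRIRs just reproduces each HRIR.
left, right = binauralize([1.0, 0.0, 0.0], [0.5, 0.25], [0.25, 0.125])
```

The hardware's contribution was doing this in real time for four sources with long, time-varying filters, cross-fading between HRIR pairs as the listener's head moves so the perceived source location stays fixed.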
Use of three-dimensional computer graphic animation to illustrate cleft lip and palate surgery.
Cutting, C; Oliker, A; Haring, J; Dayan, J; Smith, D
2002-01-01
Three-dimensional (3D) computer animation is not commonly used to illustrate surgical techniques. This article describes the surgery-specific processes that were required to produce animations to teach cleft lip and palate surgery. Three-dimensional models were created using CT scans of two Chinese children with unrepaired clefts (one unilateral and one bilateral). We programmed several custom software tools, including an incision tool, a forceps tool, and a fat tool. Three-dimensional animation was found to be particularly useful for illustrating surgical concepts. Positioning the virtual "camera" made it possible to view the anatomy from angles that are impossible to obtain with a real camera. Transparency allows the underlying anatomy to be seen during surgical repair while maintaining a view of the overlaying tissue relationships. Finally, the representation of motion allows modeling of anatomical mechanics that cannot be done with static illustrations. The animations presented in this article can be viewed on-line at http://www.smiletrain.org/programs/virtual_surgery2.htm. Sophisticated surgical procedures are clarified with the use of 3D animation software and customized software tools. The next step in the development of this technology is the creation of interactive simulators that recreate the experience of surgery in a safe, digital environment. Copyright 2003 Wiley-Liss, Inc.
An optical tracking system for virtual reality
NASA Astrophysics Data System (ADS)
Hrimech, Hamid; Merienne, Frederic
2009-03-01
In this paper we present a low-cost 3D tracking system which we have developed and tested in order to move away from traditional 2D interaction techniques (keyboard and mouse) and improve the user's experience in a collaborative virtual environment (CVE). The tracking system is used to implement 3D interaction techniques that augment the user experience, promote the user's sense of transportation into the virtual world, and heighten users' awareness of their partners. It is a passive optical tracking system based on stereoscopy, a technique that reconstructs three-dimensional information from a pair of images. We have currently deployed our 3D tracking system on a collaborative research platform for investigating 3D interaction techniques in CVEs.
ERIC Educational Resources Information Center
Jensen, Jens F.
This paper addresses some of the central questions currently related to 3-Dimensional Inhabited Virtual Worlds (3D-IVWs), their virtual interactions, and communication, drawing from the theory and methodology of sociology, interaction analysis, interpersonal communication, semiotics, cultural studies, and media studies. First, 3D-IVWs--seen as a…
Bolliger, Stephan A; Thali, Michael J; Ross, Steffen; Buck, Ursula; Naether, Silvio; Vock, Peter
2008-02-01
The transdisciplinary research project Virtopsy is dedicated to implementing modern imaging techniques into forensic medicine and pathology in order to augment current examination techniques or even to offer alternative methods. Our project relies on three pillars: three-dimensional (3D) surface scanning for the documentation of body surfaces, and both multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) to visualise the internal body. Three-dimensional surface scanning has delivered remarkable results in the past in the 3D documentation of patterned injuries and of objects of forensic interest as well as whole crime scenes. Imaging of the interior of corpses is performed using MSCT and/or MRI. MRI, in addition, is also well suited to the examination of surviving victims of assault, especially choking, and helps visualise internal injuries not seen at external examination of the victim. Apart from the accuracy and three-dimensionality that conventional documentation lacks, these techniques allow for the re-examination of the corpse and the crime scene even decades later, after burial of the corpse and release of the crime scene. We believe that this virtual, non-invasive or minimally invasive approach will improve forensic medicine in the near future.
Harris, Bryan T; Montero, Daniel; Grant, Gerald T; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao
2017-02-01
This clinical report proposes a digital workflow using 2-dimensional (2D) digital photographs, a 3D extraoral facial scan, and cone beam computed tomography (CBCT) volumetric data to create a 3D virtual patient with craniofacial hard tissue, remaining dentition (including surrounding intraoral soft tissue), and the realistic appearance of facial soft tissue at an exaggerated smile under static conditions. The 3D virtual patient was used to assist the virtual diagnostic tooth arrangement process, providing the patient with a pleasing preoperative virtual smile design that harmonized with facial features. The 3D virtual patient was also used to gain the patient's pretreatment approval (as a communication tool), design a prosthetically driven surgical plan for computer-guided implant surgery, and fabricate the computer-aided design and computer-aided manufacturing (CAD-CAM) interim prostheses. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
From experimental imaging techniques to virtual embryology.
Weninger, Wolfgang J; Tassy, Olivier; Darras, Sébastien; Geyer, Stefan H; Thieffry, Denis
2004-01-01
Modern embryology increasingly relies on descriptive and functional three-dimensional (3D) and four-dimensional (4D) analysis of physically, optically, or virtually sectioned specimens. To cope with the technical requirements, new methods for highly detailed in vivo imaging, as well as for the generation of high-resolution digital volume data sets for the accurate visualisation of transgene activity and gene product presence in the context of embryo morphology, have recently been developed, with others still under development. These methods profoundly change the scientific applicability, appearance and style of modern embryo representations. In this paper, we present an overview of the emerging techniques to create, visualise and administer embryo representations (databases, digital data sets, 3-4D embryo reconstructions, models, etc.), and discuss the implications of these new methods for the work of modern embryologists, including research, teaching, the selection of specific model organisms, and potential collaborators.
Oh, Hyun Jun; Yang, Il-Hyung
2016-01-01
Objectives: To propose a novel method for determining the three-dimensional (3D) root apex position of maxillary teeth using a two-dimensional (2D) panoramic radiograph image and a 3D virtual maxillary cast model. Methods: The subjects were 10 adult orthodontic patients treated with a non-extraction approach. Multiple camera matrices were used to define transformative relationships between tooth images of the 2D panoramic radiographs and the 3D virtual maxillary cast models. After construction of the root apex-specific projective (RASP) models, overdetermined equations were used to calculate the 3D root apex position with a direct linear transformation algorithm and the known 2D co-ordinates of the root apex in the panoramic radiograph. For verification of the estimated 3D root apex position, the RASP and 3D-CT models were superimposed using a best-fit method. Then, the values of estimation error (EE; mean, standard deviation, minimum error and maximum error) between the two models were calculated. Results: The intraclass correlation coefficient values exhibited good reliability for the landmark identification. The mean EE of all root apices of maxillary teeth was 1.88 mm. The EE values, in descending order, were as follows: canine, 2.30 mm; first premolar, 1.93 mm; second premolar, 1.91 mm; first molar, 1.83 mm; second molar, 1.82 mm; lateral incisor, 1.80 mm; and central incisor, 1.53 mm. Conclusions: Camera calibration technology allows reliable determination of the 3D root apex position of maxillary teeth without the need for 3D-CT scan or tooth templates. PMID:26317151
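The least-squares step described above, recovering a 3D point from known camera matrices and 2D image coordinates via a direct linear transformation, can be sketched as follows. The camera matrices and target point here are synthetic stand-ins, not the paper's panoramic-radiograph calibration:

```python
import numpy as np

def triangulate_dlt(Ps, uvs):
    """Recover a 3D point from its 2D projections in several views via
    the direct linear transformation (DLT): stack two linear equations
    per view and take the least-squares null vector of the system."""
    A = []
    for P, (u, v) in zip(Ps, uvs):
        A.append(u * P[2] - P[0])  # u * (row3 . X) - (row1 . X) = 0
        A.append(v * P[2] - P[1])  # v * (row3 . X) - (row2 . X) = 0
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]          # singular vector of the smallest singular value
    return X[:3] / X[3]  # dehomogenize

# Hypothetical example: two synthetic cameras viewing the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])  # shifted camera
X_true = np.array([1.0, 2.0, 10.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
X_hat = triangulate_dlt([P1, P2], [uv1, uv2])
```

With more than two views the same stacked system becomes overdetermined, which is the situation the abstract describes; the SVD then gives the least-squares solution.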
Speksnijder, L; Oom, D M J; Koning, A H J; Biesmeijer, C S; Steegers, E A P; Steensma, A B
2016-08-01
Imaging of the levator ani hiatus provides valuable information for the diagnosis and follow-up of patients with pelvic organ prolapse (POP). This study compared measurements of levator ani hiatal volume during rest and on maximum Valsalva, obtained using conventional three-dimensional (3D) translabial ultrasound and virtual reality imaging. Our objectives were to establish their agreement and reliability, and their relationship with prolapse symptoms and POP quantification (POP-Q) stage. One hundred women with an intact levator ani were selected from our tertiary clinic database. Information on clinical symptoms was obtained using standardized questionnaires. Ultrasound datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm, at the level of minimal hiatal dimensions, during rest and on maximum Valsalva. The levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatal volume (in cm³) on conventional 3D ultrasound. Levator ani hiatal volume (in cm³) was measured semi-automatically by virtual reality imaging using a segmentation algorithm. Twenty patients were chosen randomly to analyze intra- and interobserver agreement. The mean difference between levator hiatal volume measurements on 3D ultrasound and by virtual reality was 1.52 cm³ (95% CI, 1.00-2.04 cm³) at rest and 1.16 cm³ (95% CI, 0.56-1.76 cm³) during maximum Valsalva (P < 0.001). Both intra- and interobserver intraclass correlation coefficients were ≥ 0.96 for conventional 3D ultrasound and > 0.99 for virtual reality. Patients with prolapse symptoms or POP-Q Stage ≥ 2 had significantly larger hiatal measurements than those without symptoms or POP-Q Stage < 2. Levator ani hiatal volume at rest and on maximum Valsalva is significantly smaller when measured using virtual reality compared with conventional 3D ultrasound; however, this difference does not seem clinically important. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
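The conventional-ultrasound volume estimate above is simple arithmetic (levator area times the 1.5 cm rendered-slice thickness), and the agreement analysis reduces to a mean paired difference with a confidence interval. A minimal sketch, assuming normally distributed differences for the z-based interval; the function names are mine:

```python
import numpy as np

def hiatal_volume_cm3(levator_area_cm2, slice_thickness_cm=1.5):
    """Rendered-volume estimate used with conventional 3D ultrasound:
    levator area at the plane of minimal hiatal dimensions times the
    1.5 cm slice thickness."""
    return levator_area_cm2 * slice_thickness_cm

def mean_difference_ci(a, b, z=1.96):
    """Mean paired difference between two measurement methods with an
    approximate 95% confidence interval."""
    d = np.asarray(a, float) - np.asarray(b, float)
    mean = d.mean()
    half = z * d.std(ddof=1) / np.sqrt(len(d))
    return mean, (mean - half, mean + half)

volume = hiatal_volume_cm3(10.0)       # a 10 cm² area gives 15 cm³
m, ci = mean_difference_ci([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```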
Journey to the centre of the cell: Virtual reality immersion into scientific data.
Johnston, Angus P R; Rae, James; Ariotti, Nicholas; Bailey, Benjamin; Lilja, Andrew; Webb, Robyn; Ferguson, Charles; Maher, Sheryl; Davis, Thomas P; Webb, Richard I; McGhee, John; Parton, Robert G
2018-02-01
Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in two dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer-grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting-edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a "real" cell. Early testing of this immersive environment indicates a significant improvement in students' understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Three-dimensional planning in craniomaxillofacial surgery
Rubio-Palau, Josep; Prieto-Gundin, Alejandra; Cazalla, Asteria Albert; Serrano, Miguel Bejarano; Fructuoso, Gemma Garcia; Ferrandis, Francisco Parri; Baró, Alejandro Rivera
2016-01-01
Introduction: Three-dimensional (3D) planning in oral and maxillofacial surgery has become standard in planning a variety of procedures such as dental implant placement and orthognathic surgery. By using custom-made cutting and positioning guides, the virtual surgery is exported to the operating room, increasing precision and improving results. Materials and Methods: We present our experience in the treatment of craniofacial deformities with 3D planning. Software to plan the different procedures has been selected for each case, depending on the procedure (Nobel Clinician, Kodak 3DS, Simplant O&O, Dolphin 3D, Timeus, Mimics and 3-Matic). The treatment protocol is presented step by step, from virtual planning, design, and printing of the cutting and positioning guides to patients’ outcomes. Conclusions: 3D planning reduces surgical time and allows possible difficulties and complications to be anticipated. On the other hand, it increases preoperative planning time and requires a learning curve. The only drawback is the cost of the procedure. At present, the additional preoperative work can be justified by the reduction in surgical time and more predictable results. In the future, the cost and time investment will be reduced. 3D planning is here to stay. It is already a fact in craniofacial surgery, and the investment is completely justified by the risk reduction and precise results. PMID:28299272
Agarwal, Nitin; Schmitt, Paul J; Sukul, Vishad; Prestigiacomo, Charles J
2012-08-01
Virtual reality training for complex tasks has been shown to be of benefit in fields involving highly technical and demanding skill sets. The use of a stereoscopic three-dimensional (3D) virtual reality environment to teach a patient-specific analysis of the microsurgical treatment modalities of a complex basilar aneurysm is presented. Three different surgical approaches were evaluated in a virtual environment and then compared to elucidate the best surgical approach. These approaches were assessed with regard to the line-of-sight, skull base anatomy and visualisation of the relevant anatomy at the level of the basilar artery and surrounding structures. Overall, the stereoscopic 3D virtual reality environment with fusion of multimodality imaging affords an excellent teaching tool for residents and medical students to learn surgical approaches to vascular lesions. Future studies will assess the educational benefits of this modality and develop a series of metrics for student assessments.
Matta, Ragai-Edward; von Wilmowsky, Cornelius; Neuhuber, Winfried; Lell, Michael; Neukam, Friedrich W; Adler, Werner; Wichmann, Manfred; Bergauer, Bastian
2016-05-01
Multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT) are indispensable imaging techniques in advanced medicine. The possibility of creating virtual and corporal three-dimensional (3D) models enables detailed planning in craniofacial and oral surgery. The objective of this study was to evaluate the impact of different scan protocols for CBCT and MSCT on virtual 3D model accuracy using a software-based evaluation method that excludes human measurement errors. MSCT and CBCT scans with different manufacturers' predefined scan protocols were obtained from a human lower jaw and were superimposed with a master model generated by an optical scan of an industrial noncontact scanner. To determine the accuracy, the mean and standard deviations were calculated, and t-tests were used for comparisons between the different settings. Averaged over 10 repeated X-ray scans per method and 19 measurement points per scan (n = 190), it was found that the MSCT scan protocol 140 kV delivered the most accurate virtual 3D model, with a mean deviation of 0.106 mm compared to the master model. Only the CBCT scans with 0.2-voxel resolution delivered a similar accurate 3D model (mean deviation 0.119 mm). Within the limitations of this study, it was demonstrated that the accuracy of a 3D model of the lower jaw depends on the protocol used for MSCT and CBCT scans. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj
2008-03-01
The real-time requirement means that the simulation should be able to follow the actions of a user who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. A real-time virtual reality system then updates the 3D graphic visualization as the user moves, so that up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires development of appropriate hardware and software to connect the medical instrumentarium with the computer, and to operate the computer through the connected instrumentarium and sophisticated multimedia interfaces.
Psychophysical Evaluation of Three-Dimensional Auditory Displays
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.
1996-01-01
This report describes the progress made during the second year of a three-year Cooperative Research Agreement. The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on one of these topics, the localization of multiple sources, were reported in the most recent Semi-Annual Progress Report (Appendix A). That same progress report described work on two related topics, the influence of a listener's a-priori knowledge of source characteristics and the discriminability of real and virtual sources. In the period since the last Progress Report we have conducted several new studies to evaluate the effectiveness of a new and simpler method for measuring the HRTFs that are used to synthesize virtual sources, and have expanded our studies of multiple sources. The results of this research are described below.
Hu, Ben; Kuang, Zheng-Kun; Feng, Shi-Yu; Wang, Dong; He, Song-Bing; Kong, De-Xin
2016-11-17
The crystallized ligands in the Protein Data Bank (PDB) can be treated as the inverse shapes of the active sites of the corresponding proteins. Therefore, the shape similarity between a molecule and PDB ligands indicates the possibility of the molecule binding to those targets. In this paper, we propose a shape similarity profile that can be used as a molecular descriptor for ligand-based virtual screening. First, through three-dimensional (3D) structural clustering, 300 diverse ligands were extracted from the druggable protein-ligand database, sc-PDB. Then, each of the molecules under scrutiny was flexibly superimposed onto the 300 ligands. Superimpositions were scored by shape overlap and property similarity, producing a 300-dimensional similarity array termed the "Three-Dimensional Biologically Relevant Spectrum (BRS-3D)". Finally, quantitative or discriminant models were developed with the 300-dimensional descriptor using machine learning methods (support vector machines). The effectiveness of this approach was evaluated using 42 benchmark data sets from the G protein-coupled receptor (GPCR) ligand library and the GPCR decoy database (GLL/GDD). We compared the performance of BRS-3D with other 2D and 3D state-of-the-art molecular descriptors. The results showed that models built with BRS-3D performed best for most GLL/GDD data sets. We also applied BRS-3D to histone deacetylase 1 inhibitor screening and GPCR subtype selectivity prediction. The advantages and disadvantages of this approach are discussed.
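The descriptor construction can be illustrated with a toy stand-in: BRS-3D scores flexible 3D superimpositions by shape overlap and property similarity, but the same "similarity profile" idea works with any pairwise score. Here a Tanimoto coefficient on bit vectors plays that role (a deliberate simplification, not the paper's scoring function); the resulting fixed-length vector is what would be fed to a support vector machine.

```python
import numpy as np

def tanimoto(a, b):
    """Toy pairwise similarity: intersection over union of two bit
    vectors. BRS-3D itself scores flexible 3D superimpositions."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def similarity_profile(mol, references, sim=tanimoto):
    """Encode a molecule as its vector of similarities to a fixed panel
    of reference ligands (300 in the paper; 3 in this toy example)."""
    return np.array([sim(mol, ref) for ref in references])

refs = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]
profile = similarity_profile([1, 1, 1, 0], refs)
```

Because the panel of reference ligands is fixed, every molecule maps to a vector of the same length, which is what makes standard machine-learning models applicable.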
NASA Astrophysics Data System (ADS)
McIntire, John P.; Wright, Steve T.; Harrington, Lawrence K.; Havig, Paul R.; Watamaniuk, Scott N. J.; Heft, Eric L.
2014-06-01
Twelve participants were tested on a simple virtual object precision placement task while viewing a stereoscopic three-dimensional (S3-D) display. Inclusion criteria included uncorrected or best corrected vision of 20/20 or better in each eye and stereopsis of at least 40 arc sec using the Titmus stereotest. Additionally, binocular function was assessed, including measurements of distant and near phoria (horizontal and vertical) and distant and near horizontal fusion ranges using standard optometric clinical techniques. Before each of six 30 min experimental sessions, measurements of phoria and fusion ranges were repeated using a Keystone View Telebinocular and an S3-D display, respectively. All participants completed experimental sessions in which the task required the precision placement of a virtual object in depth at the same location as a target object. Subjective discomfort was assessed using the simulator sickness questionnaire. Individual placement accuracy in S3-D trials was significantly correlated with several of the binocular screening outcomes: viewers with larger convergent fusion ranges (measured at near distance), larger total fusion ranges (convergent plus divergent ranges, measured at near distance), and/or lower (better) stereoscopic acuity thresholds were more accurate on the placement task. No screening measures were predictive of subjective discomfort, perhaps due to the low levels of discomfort induced.
ERIC Educational Resources Information Center
Chen, Jian; Smith, Andrew D.; Khan, Majid A.; Sinning, Allan R.; Conway, Marianne L.; Cui, Dongmei
2017-01-01
Recent improvements in three-dimensional (3D) virtual modeling software allows anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal…
Passive lighting responsive three-dimensional integral imaging
NASA Astrophysics Data System (ADS)
Lou, Yimin; Hu, Juanmei
2017-11-01
A three-dimensional (3D) integral imaging (II) technique with real-time passive lighting-responsive ability and vivid 3D performance has been proposed and demonstrated. Several novel lighting-responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independently of the diffused illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting-responsive mechanism of the 3D II system is deduced analytically and verified experimentally. A flexible thin-film lighting-responsive II system with a 0.4 mm thickness was fabricated. This technique gives additional degrees of freedom in designing II systems and enables the virtual 3D image to interact with the real illumination environment in real time.
3D Image Display Courses for Information Media Students.
Yanaka, Kazuhisa; Yamanouchi, Toshiaki
2016-01-01
Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.
3D printing from cardiovascular CT: a practical guide and review
Birbara, Nicolette S.; Hussain, Tarique; Greil, Gerald; Foley, Thomas A.; Pather, Nalini
2017-01-01
Current cardiovascular imaging techniques allow anatomical relationships and pathological conditions to be captured in three dimensions. Three-dimensional (3D) printing, or rapid prototyping, has also become readily available and made it possible to transform virtual reconstructions into physical 3D models. This technology has been utilised to demonstrate cardiovascular anatomy and disease in clinical, research and educational settings. In particular, 3D models have been generated from cardiovascular computed tomography (CT) imaging data for purposes such as surgical planning and teaching. This review summarises applications, limitations and practical steps required to create a 3D printed model from cardiovascular CT. PMID:29255693
NASA Astrophysics Data System (ADS)
Juhnke, Bethany; Berron, Monica; Philip, Adriana; Williams, Jordan; Holub, Joseph; Winer, Eliot
2013-03-01
Advancements in medical image visualization in recent years have enabled three-dimensional (3D) medical images to be volume-rendered from magnetic resonance imaging (MRI) and computed tomography (CT) scans. Medical data is crucial for patient diagnosis and medical education, and working with these three-dimensional models rather than two-dimensional (2D) slices would enable more efficient analysis by surgeons and physicians, especially non-radiologists. An interaction device that is intuitive, robust, and easily learned is necessary to integrate 3D modeling software into the medical community. The keyboard and mouse configuration is ill-suited to manipulating 3D models because these traditional interface devices operate with two degrees of freedom, not the six degrees of freedom present in three dimensions. Using a familiar, commercial-off-the-shelf (COTS) device for interaction would minimize training time and enable maximum usability with 3D medical images. Multiple techniques are available to manipulate 3D medical images and provide doctors more innovative ways of visualizing patient data. One such example is windowing. Windowing is used to adjust the viewed tissue density of digital medical data. A software platform available at the Virtual Reality Applications Center (VRAC), named Isis, was used to visualize and interact with the 3D representations of medical data. In this paper, we present the methodology and results of a user study that examined the usability of windowing 3D medical imaging using a Kinect™ device compared to a traditional mouse.
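Windowing itself is a simple intensity mapping; the study's contribution is the gesture interface driving it. A minimal sketch of the mapping, assuming a typical soft-tissue center/width preset rather than values from the paper:

```python
import numpy as np

def apply_window(hu, center, width):
    """Map Hounsfield units to display gray levels [0, 255] using a
    window center and width: values below the window render black,
    values above render white, and the window spans the gray ramp."""
    lo, hi = center - width / 2, center + width / 2
    out = (np.clip(hu, lo, hi) - lo) / (hi - lo)
    return (out * 255).astype(np.uint8)

# Hypothetical soft-tissue window: center 40 HU, width 400 HU.
slice_hu = np.array([[-1000, 0], [40, 500]])  # air, water, tissue, bone
display = apply_window(slice_hu, center=40, width=400)
```

Interactively dragging the center and width, whether by mouse or by Kinect gesture, re-runs exactly this mapping on the rendered volume.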
Enciso, R; Memon, A; Mah, J
2003-01-01
The research goal at the Craniofacial Virtual Reality Laboratory of the School of Dentistry in conjunction with the Integrated Media Systems Center, School of Engineering, University of Southern California, is to develop computer methods to accurately visualize patients in three dimensions using advanced imaging and data acquisition devices such as cone-beam computerized tomography (CT) and mandibular motion capture. Data from these devices were integrated for three-dimensional (3D) patient-specific visualization, modeling and animation. Generic methods are in development that can be used with common CT image format (DICOM), mesh format (STL) and motion data (3D position over time). This paper presents preliminary descriptive studies on: 1) segmentation of the lower and upper jaws with two types of CT data--(a) traditional whole head CT data and (b) the new dental Newtom CT; 2) manual integration of accurate 3D tooth crowns with the segmented lower jaw 3D model; 3) realistic patient-specific 3D animation of the lower jaw.
2006-05-11
examined. These data were processed by the Automatic Real Time Ionogram Scaler with True Height (ARTIST) [Reinisch and Huang, 1983] program into electron...IDA3D. The data are locally available and previously quality checked. In addition, IDA3D maps using ARTIST-calculated profiles from hand-scaled...ionograms are available for comparison. The first test run of the IDA3D used only O-mode autoscaled virtual height profiles from five different digisondes
Three-dimensional simulation, surgical navigation and thoracoscopic lung resection
Kanzaki, Masato; Kikkawa, Takuma; Sakamoto, Kei; Maeda, Hideyuki; Wachi, Naoko; Komine, Hiroshi; Oyama, Kunihiro; Murasugi, Masahide; Onuki, Takamasa
2013-01-01
This report describes a 3-dimensional (3-D) video-assisted thoracoscopic lung resection guided by a 3-D video navigation system with a patient-specific 3-D reconstructed pulmonary model obtained by preoperative simulation. A 78-year-old man was found to have a small solitary pulmonary nodule in the left upper lobe on chest computed tomography. Using a virtual 3-D pulmonary model, the tumor was found to involve two subsegments (S1 + 2c and S3a). Complete video-assisted thoracoscopic surgery bi-subsegmentectomy was selected in simulation and was performed with lymph node dissection. A 3-D digital vision system was used for the 3-D thoracoscopic procedure. Wearing 3-D glasses, the surgeons observed the patient's reconstructed 3-D model on 3-D liquid-crystal displays and compared the 3-D intraoperative field with the picture of the 3-D reconstructed pulmonary model. PMID:24964426
Qian, Zeng-Hui; Feng, Xu; Li, Yang; Tang, Ke
2018-01-01
Studying the three-dimensional (3D) anatomy of the cavernous sinus is essential for treating lesions in this region with skull base surgeries. Cadaver dissection is a conventional method that has insurmountable flaws with regard to understanding spatial anatomy. The authors' research aimed to build an image model of the cavernous sinus region in a virtual reality system to precisely, individually and objectively elucidate the complete and local stereo-anatomy. Computed tomography and magnetic resonance imaging scans were performed on 5 adult cadaver heads. Latex mixed with contrast agent was injected into the arterial system and then into the venous system. Computed tomography scans were performed again following the 2 injections. Magnetic resonance imaging scans were performed again after the cranial nerves were exposed. Image data were input into a virtual reality system to establish a model of the cavernous sinus. Observation results of the image models were compared with those of the cadaver heads. Visualization of the cavernous sinus region models built using the virtual reality system was good for all the cadavers. High resolutions were achieved for the images of different tissues. The observed results were consistent with those of the cadaver heads. The spatial architecture and modality of the cavernous sinus were clearly displayed in the 3D model by rotating the model and conveniently changing its transparency. A 3D virtual reality model of the cavernous sinus region is helpful for globally and objectively understanding anatomy. The observation procedure was accurate, convenient, noninvasive, and time- and specimen-saving.
Feasibility of Clinician-Facilitated Three-Dimensional Printing of Synthetic Cranioplasty Flaps.
Panesar, Sandip S; Belo, Joao Tiago A; D'Souza, Rhett N
2018-05-01
Integration of three-dimensional (3D) printing and stereolithography into clinical practice is in its nascence, and concepts may be esoteric to the practicing neurosurgeon. Currently, creation of 3D printed implants involves recruitment of offsite third parties. We explored a range of 3D scanning and stereolithographic techniques to create patient-specific synthetic implants using an onsite, clinician-facilitated approach. We simulated bilateral craniectomies in a single cadaveric specimen. We devised 3 methods of creating stereolithographically viable virtual models from the removed bone. First, we used preoperative and postoperative computed tomography scanner-derived bony window models from which the flap was extracted. Second, we used an entry-level 3D light scanner to scan and render models of the individual bone pieces. Third, we used an arm-mounted 3D laser scanner to create virtual models using a real-time approach. Flaps were printed from the computed tomography scanner and laser scanner models only, in an ultraviolet-cured polymer. The light scanner did not produce suitable virtual models for printing. The computed tomography scanner-derived models required extensive postfabrication modification to fit the existing defects. The laser scanner models assumed good fit within the defects without any modification. The methods presented varying levels of complexity in acquisition and model rendering. Each technique required hardware at price points varying from $0 to approximately $100,000. The laser scanner models produced the best-quality parts, which had near-perfect fit with the original defects. Potential neurosurgical applications of this technology are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
Image interpolation used in three-dimensional range data compression.
Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian
2016-05-20
Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
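The encode/decode round trip described above can be sketched in a few lines. This is a toy illustration with a synthetic depth surface and plain bilinear interpolation, not the authors' actual fringe-projection codec:

```python
import numpy as np

def encode(depth, factor=2):
    """Downsample a depth image by keeping every `factor`-th pixel."""
    return depth[::factor, ::factor]

def decode(small, shape):
    """Scale the low-resolution image back up with bilinear interpolation."""
    h, w = shape
    sh, sw = small.shape
    # Coordinates of each output pixel in the low-resolution grid
    ys = np.linspace(0, sh - 1, h)
    xs = np.linspace(0, sw - 1, w)
    y0 = np.clip(np.floor(ys).astype(int), 0, sh - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, sw - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Four neighbouring samples for each output pixel
    a = small[np.ix_(y0, x0)]
    b = small[np.ix_(y0, x0 + 1)]
    c = small[np.ix_(y0 + 1, x0)]
    d = small[np.ix_(y0 + 1, x0 + 1)]
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)

# Toy "range image": a smooth synthetic depth surface
yy, xx = np.mgrid[0:64, 0:64]
depth = np.sin(xx / 10.0) + np.cos(yy / 12.0)

small = encode(depth)                  # 4x fewer samples to store
restored = decode(small, depth.shape)
rms = np.sqrt(np.mean((depth - restored) ** 2))
print(f"compressed {depth.size} -> {small.size} samples, RMS error {rms:.4f}")
```

For smooth range data the interpolation error stays small, which is why reducing resolution before encoding can pay off as a second compression stage.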
Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis
2016-01-01
Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis. PMID:27843356
Virtually fabricated guide for placement of the C-tube miniplate.
Paek, Janghyun; Jeong, Do-Min; Kim, Yong; Kim, Seong-Hun; Chung, Kyu-Rhim; Nelson, Gerald
2014-05-01
This paper introduces a virtually planned and stereolithographically fabricated guiding system that will allow the clinician to plan carefully for the best location of the device and to achieve an accurate position without complications. The scanned data from preoperative dental casts were edited to obtain preoperative 3-dimensional (3D) virtual models of the dentition. After the 3D virtual models were repositioned, the 3D virtual surgical guide was fabricated. A surgical guide was created onscreen, and then these virtual guides were materialized into real ones using the stereolithographic technique. Whereas the previously described guide required laboratory work to be performed by the orthodontist, our technique is more convenient because the laboratory work is done remotely by computer-aided design/computer-aided manufacturing technology. Because the miniplate is firmly held in place as the patient holds his or her mandibular teeth against the occlusal pad of the surgical guide, there is no risk that the miniscrews can slide on the bone surface during placement. The software program (2.5-dimensional software) in this study combines 2-dimensional cephalograms with 3D virtual dental models. This software is an effective and efficient alternative to 3D software when 3D computed tomography data are not available. To confidently and safely place a miniplate with screw fixation, a simple customized guide for an orthodontic miniplate was introduced. The use of a custom-made, rigid guide when placing miniplates will minimize complications such as vertical mislocation or slippage of the miniplate during placement. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
The cranial nerve skywalk: A 3D tutorial of cranial nerves in a virtual platform.
Richardson-Hatcher, April; Hazzard, Matthew; Ramirez-Yanez, German
2014-01-01
Visualization of the complex courses of the cranial nerves by students in the health-related professions is challenging through either diagrams in books or plastic models in the gross laboratory. Furthermore, dissection of the cranial nerves in the gross laboratory is an extremely meticulous task. Teaching and learning the cranial nerve pathways are difficult using two-dimensional (2D) illustrations alone. Three-dimensional (3D) models aid the teacher in describing intricate and complex anatomical structures and help students visualize them. The study of the cranial nerves can be supplemented with 3D models, which permit students to fully visualize their distribution within the craniofacial complex. This article describes the construction and usage of a virtual anatomy platform in Second Life™, which contains 3D models of the cranial nerves III, V, VII, and IX. The Cranial Nerve Skywalk features select cranial nerves and the associated autonomic pathways in an immersive online environment. This teaching supplement was introduced to groups of pre-healthcare professional students in gross anatomy courses at both institutions, and student feedback is included. © 2014 American Association of Anatomists.
Dixon, Melissa W; Proffitt, Dennis R
2002-01-01
One important aspect of the pictorial representation of a scene is the depiction of object proportions. Yang, Dixon, and Proffitt (1999 Perception 28 445-467) recently reported that the magnitude of the vertical-horizontal illusion was greater for vertical extents presented in three-dimensional (3-D) environments than in two-dimensional (2-D) displays. However, because all of the 3-D environments were large and all of the 2-D displays were small, the question remains whether the observed magnitude differences were due solely to the dimensionality of the displays (2-D versus 3-D) or to the perceived distal size of the extents (small versus large). We investigated this question by comparing observers' judgments of vertical relative to horizontal extents on a large but 2-D display with those on the large 3-D and the small 2-D displays used by Yang et al (1999). The results confirmed that the magnitude differences in vertical overestimation between display media are influenced more by the perceived distal object size than by the dimensionality of the display.
Hansen, Margaret M
2008-09-01
The author provides a critical overview of three-dimensional (3-D) virtual worlds and "serious gaming" that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debatable, and variables influencing its adoption, such as increased knowledge, self-directed learning, and peer collaboration, by academics, healthcare professionals, and business executives are examined while looking at various Web 2.0/3.0 applications. There is a need for more empirical research in order to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Roger's Diffusion of Innovations Theory and Siemens' Connectivism Theory for today's learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare. PMID:18762473
Virtual viewpoint generation for three-dimensional display based on the compressive light field
NASA Astrophysics Data System (ADS)
Meng, Qiao; Sang, Xinzhu; Chen, Duo; Guo, Nan; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan
2016-10-01
Virtual viewpoint generation is one of the key technologies for three-dimensional (3D) display: it renders new perspective images of a scene from the existing viewpoints, so that the 3D scene information can be recovered at different viewing angles and users can switch between views. However, when N free viewpoints are received, every pair of viewpoints must be matched, i.e., C(N,2) = N(N-1)/2 matchings, and errors can occur when matching across different baselines. To address the high complexity of traditional virtual viewpoint generation, a novel and fast algorithm is presented in this paper that uses the actual light-field information rather than geometric information. Moreover, to keep the data physically meaningful, nonnegative tensor factorization (NTF) is used. A tensor representation is introduced for virtual multilayer displays: the light field emitted by an N-layer, M-frame display is represented by a sparse set of non-zero elements restricted to a plane within an Nth-order, rank-M tensor. This representation allows a light field to be optimally decomposed into time-multiplexed, light-attenuating layers using NTF. Finally, virtual viewpoints are obtained from the compressive light field of the multilayer display by multiple multiplications. Experimental results show that the approach not only restores the original light field with high image quality (PSNR of 25.6 dB) but also overcomes the deficiencies of traditional matching, allowing any viewpoint to be generated from the N free viewpoints.
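The quadratic matching cost, and the factorization idea used to avoid it, can be illustrated with a minimal sketch. This uses matrix rather than tensor factorization, with random stand-in data, and the standard Lee-Seung multiplicative updates rather than the authors' exact NTF solver:

```python
import numpy as np

def pairwise_matchings(n):
    # Matching every pair of n free viewpoints costs C(n,2) = n(n-1)/2 runs.
    return n * (n - 1) // 2

print([pairwise_matchings(n) for n in (2, 4, 8, 16)])  # [1, 6, 28, 120]

# Factorization sketch: Lee-Seung multiplicative updates for nonnegative
# matrix factorization (the matrix case of NTF). V stands in for a
# (flattened) light field that is exactly representable by 2 layers.
rng = np.random.default_rng(0)
V = rng.random((6, 2)) @ rng.random((2, 5))
W = rng.random((6, 2)) + 0.1
H = rng.random((2, 5)) + 0.1
for _ in range(500):
    # Multiplicative updates keep W and H nonnegative by construction
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.3f}")
```

The quadratic growth of the pair count is what makes exhaustive viewpoint matching expensive, while the factorized representation reconstructs the data from a small number of nonnegative layers.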
How 3D immersive visualization is changing medical diagnostics
NASA Astrophysics Data System (ADS)
Koning, Anton H. J.
2011-03-01
Originally, the only way to look inside the human body without opening it up was by means of two-dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three-dimensional leads to ambiguities in interpretation and problems of occlusion. Three-dimensional (3D) imaging modalities such as CT, MRI, and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, the images are still being viewed on 2D screens. In this way, however, valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT, and 3D electron-microscopy data. In this talk we will address the advantages of such a system for medical diagnostics as well as for (bio)medical research.
Language-driven anticipatory eye movements in virtual reality.
Eichert, Nicole; Peeters, David; Hagoort, Peter
2018-06-01
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.
Research on 3D virtual campus scene modeling based on 3ds Max and VRML
NASA Astrophysics Data System (ADS)
Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue
2015-12-01
With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only express real-world objects naturally and vividly, but can also extend the campus across the dimensions of time and space, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land-use areas, and other objects. Dynamic interactive functions are then realized by programming the object models exported from 3ds Max with VRML. The research focuses on virtual campus scene-modeling technology and VRML scene design, and on optimization strategies for real-time processing in the scene-design workflow, which guarantee texture-map image quality while improving the running speed of texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD, and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.
Hu, Jian; Xu, Xiang-yang; Song, En-min; Tan, Hong-bao; Wang, Yi-ning
2009-09-01
To establish a new visual educational system of virtual reality for clinical dentistry based on World Wide Web (WWW) webpages, in order to provide more three-dimensional multimedia resources to dental students and an online three-dimensional consulting system for patients. Based on computer graphics and three-dimensional webpage technologies, the software packages 3ds Max and Webmax were adopted for system development. In the Windows environment, the architecture of the whole system was established step by step, including three-dimensional model construction, three-dimensional scene setup, transplanting the three-dimensional scene into the webpage, re-editing the virtual scene, realization of interactions within the webpage, initial testing, and necessary adjustment. Five cases of three-dimensional interactive webpages for clinical dentistry were completed. The three-dimensional interactive webpages could be accessed through a web browser on a personal computer, and users could interact with them by rotating, panning, and zooming the virtual scene. It is technically feasible to implement a visual educational system of virtual reality for clinical dentistry based on WWW webpages. Information related to clinical dentistry can be transmitted properly, visually, and interactively through three-dimensional webpages.
Virtual Environment for Surgical Room of the Future.
1995-10-01
[Garbled outline fragments from the original report; the recoverable headings include: Three-Dimensional Modeling (1. wire frame, 2. surface, 3. solid), Dynamic Interaction, Acoustic modeling based on radiosity, Rendering and Shadowing (1. ray tracing, 2. radiosity), Fluid Flow, Animation, infection control of people and equipment, Object Recognition, Communication.]
Bol Raap, Goris; Koning, Anton H J; Scohy, Thierry V; ten Harkel, A Derk-Jan; Meijboom, Folkert J; Kappetein, A Pieter; van der Spek, Peter J; Bogers, Ad J J C
2007-02-16
This study was done to investigate the potential additional role of virtual reality, using three-dimensional (3D) echocardiographic holograms, in the postoperative assessment of tricuspid valve function after surgical closure of ventricular septal defect (VSD). 12 data sets from intraoperative epicardial echocardiographic studies in 5 operations (patient age at operation 3 weeks to 4 years; body weight at operation 3.8 to 17.2 kg) after surgical closure of VSD were included in the study. The data sets were analysed as two-dimensional (2D) images on the screen of the ultrasound system as well as holograms in an I-Space virtual reality (VR) system. The 2D images were assessed for tricuspid valve function. In the I-Space, a 6-degrees-of-freedom controller was used to create the necessary projection positions and cutting planes in the hologram. The holograms were used for additional assessment of tricuspid valve leaflet mobility. All data sets could be used for 2D as well as holographic analysis. In all data sets the area of interest could be identified. The 2D analysis showed no tricuspid valve stenosis or regurgitation. Leaflet mobility was considered normal. In the virtual reality of the I-Space, all data sets allowed assessment of the tricuspid leaflet level in a single holographic representation. In 3 holograms the septal leaflet showed restricted mobility that was not appreciated in the 2D echocardiogram. In 4 data sets the posterior leaflet and the tricuspid papillary apparatus were not completely included. This report shows that dynamic holographic imaging of intraoperative postoperative echocardiographic data regarding tricuspid valve function after VSD closure is feasible. Holographic analysis allows for additional tricuspid valve leaflet mobility analysis. The large size of the probe, in relation to the small size of the patient, may preclude a complete data set. At the moment, the requirement of an I-Space VR system limits the applicability of virtual reality 3D echocardiography in clinical practice.
Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun
2005-07-01
This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using virtual reality modelling language (VRML) and JavaScript language, with a focus on how to realize the dynamic concision of 3D medical model with script node and sensor node in VRML. The 3D reconstruction and concision of body internal organs can be built with such high quality that they are better than those obtained from the traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for man-computer interaction in real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirement for the medical observation of 3D reconstruction and have a promising prospect in the fields of medical imaging.
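The script-node/sensor-node mechanism described above can be sketched as a minimal VRML97 scene emitted from Python. This is a hypothetical single-sphere "organ" whose transparency toggles on click; the authors' system builds far richer reconstructed anatomy:

```python
# Emit a VRML97 scene in which a TouchSensor and a Script node toggle the
# transparency of a model -- the same script/sensor mechanism used for
# dynamic concision of reconstructed organs. Node and field names follow
# the VRML97 standard; the scene content itself is a made-up example.

VRML_SCENE = """#VRML V2.0 utf8
DEF Tissue Transform {
  children [
    Shape {
      appearance Appearance { material DEF Mat Material { transparency 0.0 } }
      geometry Sphere { radius 1.0 }
    }
    DEF Touch TouchSensor {}
  ]
}
DEF Toggle Script {
  field SFNode mat USE Mat
  eventIn SFTime clicked
  url "javascript:
    function clicked(t) {
      mat.transparency = (mat.transparency > 0.4) ? 0.0 : 0.8;
    }"
}
ROUTE Touch.touchTime TO Toggle.clicked
"""

with open("concision_demo.wrl", "w") as f:
    f.write(VRML_SCENE)
print("wrote", len(VRML_SCENE), "bytes of VRML")
```

Opening the resulting `.wrl` file in a VRML browser should show a sphere that alternates between opaque and mostly transparent on each click, with the ROUTE carrying the sensor's `touchTime` event into the Script node.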
Encountered-Type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.
Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro
2017-01-01
This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.
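A minimal control sketch of the rigidity-rendering idea, under the assumption of a linear volume-to-stiffness plant model; all parameter values are hypothetical, not taken from the paper:

```python
# Render a target rigidity with a non-expandable balloon by servoing its
# air volume. Treating the balloon surface as a spring (Hooke's law,
# F = k * x), the controller adjusts the actuator until the measured force
# per unit indentation matches the stiffness of the virtual object.

def balloon_force(volume, indentation):
    """Toy plant model: stiffer when more air is packed in (assumed linear)."""
    stiffness = 50.0 * volume          # N/m per unit volume, hypothetical
    return stiffness * indentation     # Hooke's law

def servo_volume(target_stiffness, indentation=0.01, volume=1.0,
                 gain=0.002, steps=200):
    for _ in range(steps):
        measured_k = balloon_force(volume, indentation) / indentation
        volume += gain * (target_stiffness - measured_k)  # P-control step
        volume = max(volume, 0.0)      # cannot extract more air than present
    return volume

v = servo_volume(target_stiffness=120.0)
print(f"converged volume: {v:.3f}")    # ~2.4, since 50 * 2.4 = 120
```

In the real device the "measured_k" would come from the pressure sensor rather than a model, but the feedback structure is the same.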
A three-dimensional virtual environment for modeling mechanical cardiopulmonary interactions.
Kaye, J M; Primiano, F P; Metaxas, D N
1998-06-01
We have developed a real-time computer system for modeling mechanical physiological behavior in an interactive, 3-D virtual environment. Such an environment can be used to facilitate exploration of cardiopulmonary physiology, particularly in situations that are difficult to reproduce clinically. We integrate 3-D deformable body dynamics with new, formal models of (scalar) cardiorespiratory physiology, associating the scalar physiological variables and parameters with the corresponding 3-D anatomy. Our framework enables us to drive a high-dimensional system (the 3-D anatomical models) from one with fewer parameters (the scalar physiological models) because of the nature of the domain and our intended application. Our approach is amenable to modeling patient-specific circumstances in two ways. First, using CT scan data, we apply semi-automatic methods for extracting and reconstructing the anatomy to use in our simulations. Second, our scalar physiological models are defined in terms of clinically measurable, patient-specific parameters. This paper describes our approach, problems we have encountered and a sample of results showing normal breathing and acute effects of pneumothoraces.
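The scalar physiological side can be illustrated with the standard single-compartment equation of motion for breathing, P(t) = R·dV/dt + V/C, integrated with a forward-Euler step. The parameter values are illustrative, not the authors':

```python
import math

# Single-compartment respiratory mechanics: P(t) = R*dV/dt + V/C.
# Hypothetical parameter values in clinically familiar units.
R = 2.0     # airway resistance, cmH2O / (L/s)
C = 0.1     # respiratory compliance, L / cmH2O
f = 0.25    # breathing frequency, Hz (15 breaths/min)

dt = 0.001  # Euler time step, s
V = 0.0     # lung volume above FRC, L
trace = []
for i in range(int(8.0 / dt)):           # simulate 8 s (two breaths)
    t = i * dt
    # Half-sine driving pressure during inspiration, passive expiration
    P = 5.0 * max(0.0, math.sin(2 * math.pi * f * t))
    dV = (P - V / C) / R                 # flow from the equation of motion
    V += dV * dt
    trace.append(V)

tidal = max(trace) - min(trace)
print(f"tidal volume: {tidal:.2f} L")
```

A scalar model like this can then drive the deformation of the 3-D anatomical mesh, which is the coupling the paper describes; lowering C, as in a pneumothorax, visibly reduces the tidal volume.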
Design Virtual Reality Scene Roam for Tour Animations Base on VRML and Java
NASA Astrophysics Data System (ADS)
Cao, Zaihui; Hu, Zhongyan
Virtual reality has been involved in a wide range of academic and commercial applications. It can give users a natural feeling of the environment by creating realistic virtual worlds. Implementing a virtual tour through a model of a tourist area on the web has become fashionable. In this paper, we present a web-based application that allows a user to walk through, see, and interact with a fully three-dimensional model of the tourist area. Issues regarding navigation and disorientation are addressed, and we suggest a combination of the metro map and an intuitive navigation system. Finally, we present a prototype that implements our ideas. The application of VR techniques integrates the visualization and animation of three-dimensional modelling into landscape analysis. The use of the VRML format makes it possible to obtain views of the 3D model and to explore it in real time. This is an important goal for the spatial information sciences.
NASA Astrophysics Data System (ADS)
Canevese, E. P.; De Gottardo, T.
2017-05-01
Morphometric and photogrammetric knowledge, combined with historical research, are indispensable prerequisites for the protection and enhancement of historical, architectural, and cultural heritage. Nowadays the use of BIM (Building Information Modeling) as a supporting tool for restoration and conservation is becoming more and more popular. However, this tool is not fully adequate in this context because of its simplified representation of three-dimensional models, which results from solid-modelling techniques (mostly used in virtual reality) and causes the loss of important morphometric information. One solution to this problem is to devise new advanced tools and methods that enable the building of effective and efficient three-dimensional representations supporting correct geometric analysis of the built model. Twenty years of interdisciplinary research by Virtualgeo have focused on developing new methods and tools for 3D modelling that go beyond the simplified digital-virtual reconstruction used in standard solid modelling: methods and tools allowing the creation of informative and true-to-life three-dimensional representations that can then be used by academics or industry professionals to carry out diverse analysis, research, and design activities. Virtualgeo's applied research, in line with the European Commission's 2013 directives of Reflective 7 - Horizon 2020, gave birth to the GeomaticsCube Ecosystem, an ecosystem resulting from different technologies and drawing on experience from various fields, metrology in particular, a discipline used in the automotive and aviation industries and in mechanical engineering generally. The implementation of the metrological functionality is only possible if the 3D model is created with special modelling techniques based on surface modelling, which allow, as opposed to solid modelling, a 3D representation that is true to life. The advantages offered by metrological analysis are varied and important: it permits a precise and detailed overview of the 3D model's characteristics, and especially the monitoring of the model over time; this information is impossible to obtain from a three-dimensional representation produced with solid-modelling techniques. The applied research also focuses on obtaining a photogrammetric and informative 3D model. Two distinct applications have been developed for this purpose: the first allows the classification of each individual element and the association of its material characteristics during the 3D modelling phase, whilst the second allows segmentation of the photogrammetric 3D model in its diverse aspects (material, decay-related, chronological), with the possibility of using and populating the database associated with the 3D model with all types of multimedia content.
A second life for eHealth: prospects for the use of 3-D virtual worlds in clinical psychology.
Gorini, Alessandra; Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe
2008-08-05
The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed.
NASA Astrophysics Data System (ADS)
Wang, Hujun; Liu, Jinghua; Zheng, Xu; Rong, Xiaohui; Zheng, Xuwei; Peng, Hongyu; Silber-Li, Zhanghua; Li, Mujun; Liu, Liyu
2015-06-01
Percutaneous coronary intervention (PCI), especially coronary stent implantation, has been shown to be an effective treatment for coronary artery disease. However, in-stent restenosis is one of the longstanding unsolved problems following PCI. Although stents implanted inside narrowed vessels restore normal blood flow, they instantaneously change the wall shear stress (WSS) distribution on the vessel surface. Improper stent positioning raises the risk of restenosis, as it enlarges the low-WSS regions and subsequently stimulates more epithelial cell outgrowth on the vessel walls. To optimize the stent position for lowering the risk of restenosis, we successfully established a digital three-dimensional (3-D) model based on a real clinical coronary artery and analysed the optimal stenting strategies by computational simulation. Via microfabrication and 3-D printing technology, the digital model was also converted into in vitro microfluidic models with 3-D microchannels. Simultaneously, physicians placed real stents inside them; i.e., they performed "virtual surgeries". The hydrodynamic experimental results showed that the microfluidic models closely matched the simulations. Therefore, our study not only demonstrated that the half-cross stenting strategy could maximally reduce restenosis risks but also indicated that 3-D printing combined with clinical image reconstruction is a promising method for future angiocardiopathy research.
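The link between flow and wall shear stress can be illustrated with the textbook Poiseuille baseline. The study itself relied on full computational simulation of a patient-specific geometry, so the closed-form expression below is only a back-of-envelope sketch; the flow rate, radius and viscosity values are illustrative assumptions, not data from the paper:

```python
import math

def poiseuille_wss(flow_rate_m3s, radius_m, viscosity_pa_s=3.5e-3):
    """Wall shear stress for steady Poiseuille flow in a straight
    cylindrical vessel: tau = 4 * mu * Q / (pi * R^3).

    The default viscosity is a typical value for blood (~3.5 mPa*s);
    real coronary geometries require CFD, as in the study.
    """
    return 4.0 * viscosity_pa_s * flow_rate_m3s / (math.pi * radius_m ** 3)

# A hypothetical 1.5 mm radius coronary segment carrying 1 mL/s:
tau = poiseuille_wss(1e-6, 1.5e-3)  # ~1.3 Pa
```

The cubic dependence on radius shows why a stent that locally widens the lumen sharply lowers WSS there, enlarging the low-WSS regions the abstract associates with restenosis.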
Three-dimensional (3D) printed endovascular simulation models: a feasibility study.
Mafeld, Sebastian; Nesbitt, Craig; McCaslin, James; Bagnall, Alan; Davey, Philip; Bose, Pentop; Williams, Rob
2017-02-01
Three-dimensional (3D) printing is a manufacturing process in which an object is created by specialist printers designed to print in additive layers to create a 3D object. Whilst there are initial promising medical applications of 3D printing, a lack of evidence to support its use remains a barrier for larger scale adoption into clinical practice. Endovascular virtual reality (VR) simulation plays an important role in the safe training of future endovascular practitioners, but existing VR models have disadvantages including cost and accessibility which could be addressed with 3D printing. This study sought to evaluate the feasibility of 3D printing an anatomically accurate human aorta for the purposes of endovascular training. A 3D printed model was successfully designed and printed and used for endovascular simulation. The stages of development and practical applications are described. Feedback from 96 physicians who answered a series of questions using a 5 point Likert scale is presented. Initial data supports the value of 3D printed endovascular models although further educational validation is required.
Virtual planning in orthognathic surgery.
Stokbro, K; Aagaard, E; Torkov, P; Bell, R B; Thygesen, T
2014-08-01
Numerous publications regarding virtual surgical planning protocols have been published, most reporting only one or two case reports to emphasize the hands-on planning. None have systematically reviewed the data published from clinical trials. This systematic review analyzes the precision and accuracy of three-dimensional (3D) virtual surgical planning of orthognathic procedures compared with the actual surgical outcome following orthognathic surgery reported in clinical trials. A systematic search of the current literature was conducted to identify clinical trials with a sample size of more than five patients, comparing the virtual surgical plan with the actual surgical outcome. The search revealed a total of 428 titles, out of which only seven articles were included, with a combined sample size of 149 patients. Data were presented in three different ways: intra-class correlation coefficient, 3D surface area with a difference <2 mm, and linear and angular differences in three dimensions. Success criteria were set at a 2 mm mean difference in six articles; 125 of the 133 patients included in these articles were regarded as having had a successful outcome. Due to differences in the presentation of data, meta-analysis was not possible. Virtual planning appears to be an accurate and reproducible method for orthognathic treatment planning. A more uniform presentation of the data is necessary to allow the performance of a meta-analysis. Currently, the software system most often used for 3D virtual planning in clinical trials is SimPlant (Materialise). More independent clinical trials are needed to further validate the precision of virtual planning. Copyright © 2014 International Association of Oral and Maxillofacial Surgeons. All rights reserved.
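The 2 mm success criterion used by six of the reviewed articles can be made concrete with a small sketch. The landmark coordinates and helper names below are hypothetical, not taken from any of the reviewed trials:

```python
import math

def mean_landmark_error(planned, actual):
    """Mean Euclidean distance (mm) between corresponding 3D landmarks
    on the virtual surgical plan and the postoperative scan."""
    dists = [math.dist(p, a) for p, a in zip(planned, actual)]
    return sum(dists) / len(dists)

def plan_successful(planned, actual, threshold_mm=2.0):
    # Success criterion in six of the seven included articles:
    # mean planned-vs-actual difference below 2 mm.
    return mean_landmark_error(planned, actual) < threshold_mm

# Two hypothetical landmarks, coordinates in mm:
planned = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0)]
actual  = [(0.5, 0.0, 0.0), (10.0, 6.0, 2.0)]  # mean error 0.75 mm
```

This is one of the three reporting styles the review identifies (linear differences); the intra-class correlation coefficient and surface-area metrics aggregate the same comparison differently, which is why the authors could not pool them into a meta-analysis.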
Ferng, Alice S; Oliva, Isabel; Jokerst, Clinton; Avery, Ryan; Connell, Alana M; Tran, Phat L; Smith, Richard G; Khalpey, Zain
2017-08-01
Since the creation of SynCardia's 50 cc Total Artificial Hearts (TAHs), patients with irreversible biventricular failure now have two sizing options. Herein, a case series of three patients who have undergone successful 50 and 70 cc TAH implantation with complete closure of the chest cavity utilizing preoperative "virtual implantation" of different sized devices for surgical planning are presented. Computed tomography (CT) images were used for preoperative planning prior to TAH implantation. Three-dimensional (3D) reconstructions of preoperative chest CT images were generated and both 50 and 70 cc TAHs were virtually implanted into patients' thoracic cavities. During the simulation, the TAHs were projected over the native hearts in a similar position to the actual implantation, and the relationship between the devices and the atria, ventricles, chest wall, and diaphragm were assessed. The 3D reconstructed images and virtual modeling were used to simulate and determine for each patient if the 50 or 70 cc TAH would have a higher likelihood of successful implantation without complications. Subsequently, all three patients received clinical implants of the properly sized TAH based on virtual modeling, and their chest cavities were fully closed. This virtual implantation increases our confidence that the selected TAH will better fit within the thoracic cavity allowing for improved surgical outcome. Clinical implantation of the TAHs showed that our virtual modeling was an effective method for determining the correct fit and sizing of 50 and 70 cc TAHs. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Enhanced LOD Concepts for Virtual 3D City Models
NASA Astrophysics Data System (ADS)
Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.
2013-09-01
Virtual 3D city models contain digital three-dimensional representations of city objects like buildings, streets or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable: one that effectively supports partitioning a complete model into alternative models of different complexity and that provides metadata addressing the informational content, complexity and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the interior of a building and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.
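The proposed separation of geometric and semantic detail can be sketched as a small data structure. The field names and the 0-4 ranges below are illustrative assumptions modeled on CityGML's five classical LoD steps, not part of any published schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LodSpec:
    """One alternative representation of a city object, following the
    paper's idea of separating geometric detail (GLoD) from semantic
    detail (SLoD), for the exterior shell and interior independently."""
    glod_exterior: int  # geometric refinement of the outer shell (0..4)
    slod_exterior: int  # semantic richness of the outer shell (0..4)
    glod_interior: int
    slod_interior: int

def covers(a: LodSpec, b: LodSpec) -> bool:
    """True if representation `a` is at least as detailed as `b` in
    every dimension -- a simple partial order a viewer could use to
    pick the cheapest alternative model that still satisfies a query."""
    return (a.glod_exterior >= b.glod_exterior
            and a.slod_exterior >= b.slod_exterior
            and a.glod_interior >= b.glod_interior
            and a.slod_interior >= b.slod_interior)
```

Treating the four axes as independent is exactly what a single scalar LoD number (as in CityGML 2.0) cannot express, e.g. a geometrically coarse model with rich semantics.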
Three-dimensional displacement measurement of image point by point-diffraction interferometry
NASA Astrophysics Data System (ADS)
He, Xiao; Chen, Lingfeng; Meng, Xiaojie; Yu, Lei
2018-01-01
This paper presents a method for measuring the three-dimensional (3-D) displacement of an image point based on point-diffraction interferometry. An object point-light-source (PLS) interferes with a fixed PLS, and the interferograms are captured at the exit pupil. When the image point of the object PLS is slightly shifted to a new position, the wavefront of the image PLS changes, and so do its interferograms. By processing these interferograms (captured before and after the movement), the wavefront difference of the image PLS can be obtained; it contains the information of the 3-D displacement of the image PLS. However, the 3-D displacement cannot be calculated until the distance between the image PLS and the exit pupil is calibrated. Therefore, we use a plane-parallel plate with a known refractive index and thickness to determine this distance, based on Snell's law for small angles of incidence. Thus, since the distance between the exit pupil and the image PLS is a known quantity, the 3-D displacement of the image PLS can be calculated simultaneously from two interference measurements. Preliminary experimental results indicate that the relative error is below 0.3%. With the ability to accurately locate an image point (whether real or virtual), a fiber point-light-source can act as the reticle by itself in optical measurement.
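The plane-parallel-plate step rests on a standard paraxial result: inserting a plate of thickness t and refractive index n shifts the apparent position of a point source along the axis by t(1 - 1/n). The sketch below shows only this textbook formula; the plate parameters are illustrative, and the paper's actual calibration solves for the pupil-to-image distance from the measured wavefront change:

```python
def plate_axial_shift(thickness, n):
    """Apparent axial displacement of a point source when a
    plane-parallel plate (thickness t, index n) is inserted into the
    beam, in the small-angle (paraxial) approximation:
        delta = t * (1 - 1/n)
    """
    return thickness * (1.0 - 1.0 / n)

# A hypothetical 5 mm BK7-like plate (n ~ 1.5168):
delta = plate_axial_shift(5.0, 1.5168)  # ~1.70 mm
```

Because delta is known from t and n, comparing the interferograms with and without the plate pins down the unknown distance, after which the two displacement measurements become solvable.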
Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro
2013-01-01
Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Due to the ecological potential of techniques such as virtual reality (VR), it is important to inspect whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. Therefore, the focus was on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral) and visualization types (2D, 3D); main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affect-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions related to emotional processing, in addition to visual processing regions. This study has the potential to clarify the brain mechanisms involved in the processing of emotional stimuli (scenario valence) and their interaction with three-dimensionality.
Wu, Xin-Bao; Wang, Jun-Qiang; Zhao, Chun-Peng; Sun, Xu; Shi, Yin; Zhang, Zi-An; Li, Yu-Neng; Wang, Man-Yi
2015-02-20
Old pelvic fractures are among the most challenging fractures to treat because of their complex anatomy, difficult-to-access surgical sites, and the relatively low incidence of such cases. Proper evaluation and surgical planning are necessary to achieve pelvic ring symmetry and stable fixation of the fracture. The goal of this study was to assess the use of three-dimensional (3D) printing techniques for the surgical management of old pelvic fractures. First, 16 dried human cadaveric pelvises were used to confirm the anatomical accuracy of the 3D models printed based on radiographic data. Next, nine clinical cases between January 2009 and April 2013 were used to evaluate the surgical reconstruction based on the 3D printed models. The pelvic injuries were all type C, and the average time from injury to reconstruction was 11 weeks (range: 8-17 weeks). The workflow consisted of: (1) printing patient-specific bone models based on preoperative computed tomography (CT) scans, (2) virtual fracture reduction using the printed 3D anatomic template, (3) virtual fracture fixation using Kirschner wires, and (4) preoperative measurement of the osteotomy and implant position relative to landmarks using the virtually defined deformation. These models aided communication between surgical team members during the procedure. This technique was validated by comparing the preoperative planning to the intraoperative procedure. The accuracy of the 3D printed models was within specification. Production of a model from standard CT DICOM data took 7 hours (range: 6-9 hours). Preoperative planning using the 3D printed models was feasible in all cases. Good correlation was found between the preoperative planning and the postoperative follow-up X-rays in all nine cases. The patients were followed for 3-29 months (median: 5 months). The fracture healing time was 9-17 weeks (mean: 10 weeks). No delayed incision healing, wound infection, or nonunions occurred.
The results were excellent in two cases, good in five, and poor in two based on the Majeed score. The 3D printing planning technique for pelvic surgery was successfully integrated into a clinical workflow to improve patient-specific preoperative planning by providing a visual and haptic model of the injury and allowing patient-specific adaptation of each osteosynthesis implant to the virtually reduced pelvis.
Advanced Visualization of Experimental Data in Real Time Using LiveView3D
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2006-01-01
LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.
A collaborative virtual reality environment for neurosurgical planning and training.
Kockro, Ralf A; Stadie, Axel; Schwandt, Eike; Reisch, Robert; Charalampaki, Cleopatra; Ng, Ivan; Yeo, Tseng Tsai; Hwang, Peter; Serra, Luis; Perneczky, Axel
2007-11-01
We have developed a highly interactive virtual environment that enables collaborative examination of stereoscopic three-dimensional (3-D) medical imaging data for planning, discussing, or teaching neurosurgical approaches and strategies. The system consists of an interactive console with which the user manipulates 3-D data using hand-held and tracked devices within a 3-D virtual workspace and a stereoscopic projection system. The projection system displays the 3-D data on a large screen while the user is working with it. This setup allows users to interact intuitively with complex 3-D data while sharing this information with a larger audience. We have been using this system on a routine clinical basis and during neurosurgical training courses to collaboratively plan and discuss neurosurgical procedures with 3-D reconstructions of patient-specific magnetic resonance and computed tomographic imaging data or with a virtual model of the temporal bone. Working collaboratively with the 3-D information of a large, interactive, stereoscopic projection provides an unambiguous way to analyze and understand the anatomic spatial relationships of different surgical corridors. In our experience, the system creates a unique forum for open and precise discussion of neurosurgical approaches. We believe the system provides a highly effective way to work with 3-D data in a group, and it significantly enhances teaching of neurosurgical anatomy and operative strategies.
Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.
Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn
2016-12-21
The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. This tool is developed based on Java/Java3D/JOGL and provides a standalone application compatible with all relevant operating systems. However, it requires Java and local installation of the software. Here we present the prototype of an alternative web-based visualization approach, using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios, including networks mapped to 3D cell components, by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies (Three.js, D3.js and PHP) as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.
Liu, Kaijun; Fang, Binji; Wu, Yi; Li, Ying; Jin, Jun; Tan, Liwen; Zhang, Shaoxiang
2013-09-01
Anatomical knowledge of the larynx region is critical for understanding laryngeal disease and performing required interventions. Virtual reality is a useful method for surgical education and simulation. Here, we assembled segmented cross-section slices of the larynx region from the Chinese Visible Human dataset. The laryngeal structures were precisely segmented manually as 2D images, then reconstructed and displayed as 3D images in the virtual reality Dextrobeam system. Using visualization of and interaction with the virtual reality modeling language model, a digital laryngeal anatomy course was constructed using HTML and JavaScript. The volumetric larynx models can display an arbitrary section of the model and provide a virtual dissection function. This networked teaching system for digital laryngeal anatomy can be accessed remotely, displayed locally, and manipulated interactively.
Fujimoto, Koya; Shiinoki, Takehiro; Yuasa, Yuki; Hanazawa, Hideki; Shibuya, Keiko
2017-06-01
A commercially available bolus ("commercial-bolus") does not make complete contact with the irregularly shaped patient skin. This study aims to customise a patient-specific three-dimensional (3D) bolus using a 3D printing technique ("3D-bolus") and to evaluate its clinical feasibility for photon radiotherapy. The 3D-bolus was designed using a treatment planning system (TPS) in Digital Imaging and Communications in Medicine-Radiotherapy (DICOM-RT) format, and converted to stereolithographic format for printing. To evaluate its physical characteristics, treatment plans were created for water-equivalent phantoms that were bolus-free, or had a flat-form printed 3D-bolus, a TPS-designed bolus ("virtual-bolus"), or a commercial-bolus. These plans were compared based on the percentage depth dose (PDD) and target-volume dose volume histogram (DVH) measurements. To evaluate the clinical feasibility, treatment plans were created for head phantoms that were bolus-free or had a 3D-bolus, a virtual-bolus, or a commercial-bolus. These plans were compared based on the target volume DVH. In the physical evaluation, the 3D-bolus provided effective dose coverage in the build-up region, which was equivalent to the commercial-bolus. With regard to the clinical feasibility, the air gaps were smaller with the 3D-bolus than with the commercial-bolus. Furthermore, the prescription dose could be delivered appropriately to the target volume. The 3D-bolus has potential use for air-gap reduction compared to the commercial-bolus and facilitates target-volume dose coverage and homogeneity improvement. A 3D-bolus produced using a 3D printing technique is comparable to a commercial-bolus applied to an irregular-shaped skin surface. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Trang, Vu Thi Thu; Park, Jae Hyun; Bayome, Mohamed; Shastry, Shruti; Mellion, Alex; Kook, Yoon-Ah
2015-01-01
The purpose of this study was to investigate the three-dimensional (3D) morphologic differences in the mandibular arch of Vietnamese and North American White subjects. The sample included 113 Vietnamese subjects (41 Class I, 37 Class II and 35 Class III) and 96 White subjects (29 Class I, 30 Class II and 37 Class III). The samples were regrouped according to arch form type (tapered, ovoid, and square) to compare the frequency distribution of the three arch forms between ethnic groups in each angle classification. The facial axis point of each tooth was digitized on 3D virtual models. Four linear and two ratio variables were measured. In comparing arch dimensions, the intercanine and intermolar widths were wider in Vietnamese than in Whites (p < 0.001, p = 0.042, respectively). In the White group, there was an even frequency distribution of the three arch forms. However, in the Vietnamese group, the square arch form was the most frequent, followed by the tapered and ovoid forms. The arch forms of Whites were narrower than those of Vietnamese. In North American Whites, the three arch form types occurred with similar frequency. In Vietnamese, the square arch form was more frequent.
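The three arch-form types can be illustrated with a toy classifier based on the intercanine/intermolar width ratio, one of the two ratio variables the study measures. The cutoff values below are illustrative assumptions, not the criteria used in the study:

```python
def classify_arch_form(intercanine_mm, intermolar_mm,
                       square_cutoff=0.72, tapered_cutoff=0.62):
    """Toy classifier for the three arch-form types compared in the
    study. A relatively wide anterior segment reads as "square", a
    relatively narrow one as "tapered", anything between as "ovoid".
    The cutoffs are hypothetical placeholders."""
    ratio = intercanine_mm / intermolar_mm
    if ratio >= square_cutoff:
        return "square"
    if ratio <= tapered_cutoff:
        return "tapered"
    return "ovoid"

# Hypothetical width measurements in mm:
form = classify_arch_form(37.0, 50.0)  # ratio 0.74 -> "square"
```

In practice arch forms are usually assigned by superimposing digitized facial axis points on template curves; the ratio rule here only conveys the geometric intuition behind the three labels.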
[Preparation of simulate craniocerebral models via three dimensional printing technique].
Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T
2016-08-09
Three-dimensional (3D) printing was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of the skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and related neural tracts of the brain were extracted from thin-slice scans of computed tomography (CT, slice thickness 0.5 mm), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer, which produced life-sized craniocerebral models for surgical planning and simulation. 3D-printed craniocerebral models allowed neurosurgeons to perform complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons choose appropriate aneurysm clips, as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, which helped avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D-printed craniocerebral models could improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to the study of functional anatomy.
Do Haptic Representations Help Complex Molecular Learning?
ERIC Educational Resources Information Center
Bivall, Petter; Ainsworth, Shaaron; Tibell, Lena A. E.
2011-01-01
This study explored whether adding a haptic interface (that provides users with somatosensory information about virtual objects by force and tactile feedback) to a three-dimensional (3D) chemical model enhanced students' understanding of complex molecular interactions. Two modes of the model were compared in a between-groups pre- and posttest…
ERIC Educational Resources Information Center
Thornton, Bradley D.; Smalley, Robert A.
2008-01-01
Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…
Import and visualization of clinical medical imagery into multiuser VR environments
NASA Astrophysics Data System (ADS)
Mehrle, Andreas H.; Freysinger, Wolfgang; Kikinis, Ron; Gunkel, Andreas; Kral, Florian
2005-03-01
The graphical representation of three-dimensional data obtained from tomographic imaging has been the central problem since this technology became available. Neither the representation as a set of two-dimensional slices nor the 2D projection of three-dimensional models yields satisfactory results. In this paper a way is outlined which permits the investigation of volumetric clinical data obtained from standard CT, MR, PET, SPECT or experimental very high resolution CT scanners in a three-dimensional environment in a few work steps. Volumetric datasets are converted into surface data (segmentation process) using the 3D-Slicer software tool, saved as .vtk files, and exported as a collection of primitives in any common file format (.iv, .pfb). Subsequently, these files can be displayed and manipulated in the CAVE virtual reality center. The CAVE is a multiuser walkable virtual room consisting of several walls on which stereoscopic images are projected by rear-projection beamers. Adequate tracking of the head position and separate image calculation for each eye yield a vivid impression for one or several users. With a separately tracked 6D joystick, manipulations such as rotation, translation, zooming, decomposition or highlighting can be performed intuitively. The use of CAVE technology opens up new possibilities, especially in surgical training ("hands-on effect") and as an educational tool (availability of pathological data). Unlike competing technologies, the CAVE permits a walk-through of the virtual scene but preserves enough physical perception to allow interaction between multiple users, e.g. gestures and movements. Training in a virtual environment may, on the one hand, considerably improve students' learning of complex anatomic findings and, on the other hand, allow unaccustomed views, such as those through a microscope or endoscope, to be rehearsed in advance.
The availability of low-cost, PC-based CAVE-like systems and the rapidly decreasing price of high-performance video beamers make the CAVE an affordable alternative to conventional surgical training techniques, without the limitations of handling cadavers.
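The separate image calculation per eye mentioned above starts from a simple step: offsetting the tracked head position by half the interpupillary distance along the head's right vector. A minimal sketch of that step (a full CAVE renderer would additionally build off-axis projection matrices for each wall, which is omitted here):

```python
def eye_positions(head_pos, right_dir, ipd=0.064):
    """Per-eye camera positions for stereoscopic rendering, CAVE-style:
    shift the tracked head position by half the interpupillary distance
    (IPD, ~64 mm for an average adult) along the head's unit right
    vector. Coordinates are in metres."""
    half = ipd / 2.0
    left = tuple(h - half * r for h, r in zip(head_pos, right_dir))
    right = tuple(h + half * r for h, r in zip(head_pos, right_dir))
    return left, right

# A viewer standing at the origin, eyes 1.7 m up, facing down -z:
left_eye, right_eye = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
```

Rendering the scene once from each of these positions, and presenting the two images to the correct eyes (shutter glasses or polarization), is what produces the depth impression for the tracked user; untracked co-viewers see the images computed for the tracked head, which is why interaction still works for a group.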
Enhancing the Induction Skill of Deaf and Hard-of-Hearing Children with Virtual Reality Technology.
Passig, D; Eden, S
2000-01-01
Many researchers have found that deaf and hard-of-hearing children have unusual difficulty with reasoning and reaching a reasoned conclusion, particularly when the process of induction is required. The purpose of this study was to investigate whether practice in rotating virtual reality (VR) three-dimensional (3D) objects would have a positive effect on the ability of deaf and hard-of-hearing children to use inductive processes when dealing with shapes. Three groups were involved in the study: (1) an experimental group of 21 deaf and hard-of-hearing children, who played a VR 3D game; (2) control group I, 23 deaf and hard-of-hearing children, who played a similar two-dimensional (2D) game (not a VR game); and (3) control group II, 16 hearing children for whom no intervention was introduced. The results clearly indicate that practicing VR 3D spatial rotations significantly improved the experimental group's inductive thinking with shapes, whereas the first control group did not significantly improve their performance. Also, prior to the VR 3D experience, the deaf and hard-of-hearing children attained lower scores in inductive abilities than the children with normal hearing (control group II). The results for the experimental group after the VR 3D experience improved to the extent that there was no noticeable difference between them and the children with normal hearing.
Explore the virtual side of earth science
1998-01-01
Scientists have always struggled to find an appropriate technology that could represent three-dimensional (3-D) data, facilitate dynamic analysis, and encourage on-the-fly interactivity. In the recent past, scientific visualization has increased the scientist's ability to visualize information, but it has not provided the interactive environment necessary for rapidly changing the model or for viewing the model in ways not predetermined by the visualization specialist. Virtual Reality Modeling Language (VRML 2.0) is a new environment for visualizing 3-D information spaces and is accessible through the Internet with current browser technologies. Researchers from the U.S. Geological Survey (USGS) are using VRML as a scientific visualization tool to help convey complex scientific concepts to various audiences. Kevin W. Laurent, computer scientist, and Maura J. Hogan, technical information specialist, have created a collection of VRML models available through the Internet at Virtual Earth Science (virtual.er.usgs.gov).
Keklikoglou, Kleoniki; Faulwetter, Sarah; Chatzinikolaou, Eva; Michalakis, Nikitas; Filiopoulou, Irene; Minadakis, Nikos; Panteri, Emmanouela; Perantinos, George; Gougousis, Alexandros; Arvanitidis, Christos
2016-01-01
During recent years, X-ray microtomography (micro-CT) has seen an increasing use in biological research areas, such as functional morphology, taxonomy, evolutionary biology and developmental research. Micro-CT is a technology which uses X-rays to create sub-micron resolution images of external and internal features of specimens. These images can then be rendered in a three-dimensional space and used for qualitative and quantitative 3D analyses. However, the online exploration and dissemination of micro-CT datasets are rarely made available to the public due to their large size and a lack of dedicated online platforms for the interactive manipulation of 3D data. Here, the development of a virtual micro-CT laboratory (Micro-CTvlab) is described, which can be used by everyone who is interested in digitisation methods and biological collections and aims at making the micro-CT data exploration of natural history specimens freely available over the internet. The Micro-CTvlab offers the user virtual image galleries of various taxa which can be displayed and downloaded through a web application. With a few clicks, accurate, detailed and three-dimensional models of species can be studied and virtually dissected without destroying the actual specimen. The data and functions of the Micro-CTvlab can be accessed either on a normal computer or through a dedicated version for mobile devices. PMID:27956848
Student performance and appreciation using 3D vs. 2D vision in a virtual learning environment.
de Boer, I R; Wesselink, P R; Vervoorn, J M
2016-08-01
The aim of this study was to investigate the differences in the performance and appreciation of students working in a virtual learning environment with two-dimensional (2D) or three-dimensional (3D) vision. One hundred and twenty-four randomly divided first-year dental students performed a manual dexterity exercise on the Simodont dental trainer with an automatic assessment. Group 1 practised in 2D vision and Group 2 in 3D. All of the students practised five times for 45 min and then took a test using the vision they had practised in. After test 1, all of the students switched the type of vision to control for the learning curve: Group 1 practised in 3D and took a test in 3D, whilst Group 2 practised in 2D and took the test in 2D. To pass, three of five exercises had to be successfully completed within a time limit. The students filled out a questionnaire after completing test 2. The results show that students working with 3D vision achieved significantly better results than students who worked in 2D. Ninety-five per cent of the students filled out the questionnaire, and over 90 per cent preferred 3D vision. The use of 3D vision in a virtual learning environment has a significant positive effect on the performance of the students as well as on their appreciation of the environment. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan
2016-10-01
The army's combat training is very important, and simulation of the real battlefield environment is of great significance; two-dimensional information can no longer meet current demands. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment is possible. In simulating a 3D battlefield environment, in addition to the terrain, combat personnel and combat tools, the simulation of explosions, fire, smoke and other effects is also very important, since these effects enhance the sense of realism and immersion of the 3D scene. However, these special effects are irregular objects, which are difficult to model with general geometry; the simulation of irregular objects therefore remains a challenging research topic in computer graphics. Here, the particle system algorithm is used to simulate irregular objects. We design simulations of explosions, fire and smoke based on the particle system and apply them to the 3D battlefield scene. In addition, the battlefield 3D scene is presented on a glasses-free 3D display using a GPU-based 4K super-multiview real-time 3D video conversion method. Together with human-computer interaction functions, we ultimately realize a glasses-free 3D display of a more realistic and immersive simulated 3D battlefield environment.
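The particle-system approach to irregular effects mentioned above can be sketched minimally. The emitter parameters, lifetimes and buoyancy term below are illustrative assumptions for a smoke-like effect, not the paper's actual implementation:

```python
import random

class Particle:
    """A single particle with position, velocity, and remaining lifetime."""
    def __init__(self, pos, vel, life):
        self.pos = list(pos)
        self.vel = list(vel)
        self.life = life

class ParticleSystem:
    """Minimal emitter for irregular effects such as smoke, fire, or explosions."""
    def __init__(self, origin, spawn_rate=50):
        self.origin = origin
        self.spawn_rate = spawn_rate   # particles emitted per update step
        self.particles = []

    def update(self, dt):
        # Spawn new particles with randomized velocity and lifetime.
        for _ in range(self.spawn_rate):
            vel = [random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(1, 3)]
            self.particles.append(Particle(self.origin, vel, random.uniform(0.5, 2.0)))
        # Integrate motion (simple upward buoyancy on z) and age the particles.
        for p in self.particles:
            p.vel[2] += 0.5 * dt                      # buoyant drift for smoke/fire
            for i in range(3):
                p.pos[i] += p.vel[i] * dt
            p.life -= dt
        # Remove particles whose lifetime has expired.
        self.particles = [p for p in self.particles if p.life > 0]

ps = ParticleSystem(origin=(0.0, 0.0, 0.0))
for _ in range(10):
    ps.update(0.1)
```

In a full renderer each surviving particle would be drawn as a textured billboard whose size and opacity fade with `life`; the stochastic spawn/kill cycle is what gives the effect its irregular, non-geometric appearance.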
The virtual craniofacial patient: 3D jaw modeling and animation.
Enciso, Reyes; Memon, Ahmed; Fidaleo, Douglas A; Neumann, Ulrich; Mah, James
2003-01-01
In this paper, we present new developments in the area of 3D human jaw modeling and animation. CT (Computed Tomography) scans have traditionally been used to evaluate patients with dental implants, assess tumors, cysts, fractures and surgical procedures. More recently this data has been utilized to generate models. Researchers have reported semi-automatic techniques to segment and model the human jaw from CT images and manually segment the jaw from MRI images. Recently opto-electronic and ultrasonic-based systems (JMA from Zebris) have been developed to record mandibular position and movement. In this research project we introduce: (1) automatic patient-specific three-dimensional jaw modeling from CT data and (2) three-dimensional jaw motion simulation using jaw tracking data from the JMA system (Zebris).
Estimating Three-Dimensional Orientation of Human Body Parts by Inertial/Magnetic Sensing
Sabatini, Angelo Maria
2011-01-01
User-worn sensing units composed of inertial and magnetic sensors are becoming increasingly popular in various domains, including biomedical engineering, robotics and virtual reality, where they can be applied for real-time tracking of the orientation of human body parts in three-dimensional (3D) space. Although they are a promising choice as wearable sensors in many respects, the inertial and magnetic sensors currently in use offer measuring performance that is critical to achieving and maintaining accurate 3D-orientation estimates, anytime and anywhere. This paper reviews the main sensor fusion and filtering techniques proposed for accurate inertial/magnetic orientation tracking of human body parts; it also gives useful recipes for their actual implementation. PMID:22319365
Estimating three-dimensional orientation of human body parts by inertial/magnetic sensing.
Sabatini, Angelo Maria
2011-01-01
User-worn sensing units composed of inertial and magnetic sensors are becoming increasingly popular in various domains, including biomedical engineering, robotics and virtual reality, where they can be applied for real-time tracking of the orientation of human body parts in three-dimensional (3D) space. Although they are a promising choice as wearable sensors in many respects, the inertial and magnetic sensors currently in use offer measuring performance that is critical to achieving and maintaining accurate 3D-orientation estimates, anytime and anywhere. This paper reviews the main sensor fusion and filtering techniques proposed for accurate inertial/magnetic orientation tracking of human body parts; it also gives useful recipes for their actual implementation.
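As an illustration of the kind of sensor fusion this review covers, a complementary filter blends short-term gyroscope integration (accurate but drifting) with the long-term gravity reference from an accelerometer (noisy but drift-free). The single-axis (pitch) simplification and the gain value below are assumptions for the sketch, not any specific algorithm from the review:

```python
import math

def complementary_filter(gyro_rates, accels, dt, alpha=0.98):
    """Estimate pitch by blending integrated gyro rate with accelerometer tilt.

    gyro_rates: pitch angular rates (rad/s); accels: (ax, az) pairs (m/s^2).
    alpha weights the gyroscope path; (1 - alpha) slowly corrects gyro drift
    using the accelerometer's measurement of the gravity direction.
    Returns the sequence of pitch estimates (rad).
    """
    pitch = 0.0
    estimates = []
    for omega, (ax, az) in zip(gyro_rates, accels):
        pitch_gyro = pitch + omega * dt       # short-term: integrate angular rate
        pitch_acc = math.atan2(ax, az)        # long-term: tilt from gravity vector
        pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
        estimates.append(pitch)
    return estimates

# A static sensor held at a constant 0.1 rad tilt, with no rotation:
# the estimate converges toward the accelerometer-derived tilt.
tilt = 0.1
ax, az = 9.81 * math.sin(tilt), 9.81 * math.cos(tilt)
est = complementary_filter([0.0] * 200, [(ax, az)] * 200, dt=0.01)
```

A magnetometer plays the analogous drift-free reference role for heading (yaw), which gravity alone cannot observe; Kalman-filter formulations generalize the same blend with time-varying, noise-derived gains.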
Accuracy of open-source software segmentation and paper-based printed three-dimensional models.
Szymor, Piotr; Kozakiewicz, Marcin; Olszewski, Raphael
2016-02-01
In this study, we aimed to verify the accuracy of models created with the help of open-source Slicer 3.6.3 software (Surgical Planning Lab, Harvard Medical School, Harvard University, Boston, MA, USA) and the Mcor Matrix 300 paper-based 3D printer. Our study focused on the accuracy of recreating the walls of the right orbit of a cadaveric skull. Cone beam computed tomography (CBCT) of the skull was performed (0.25-mm pixel size, 0.5-mm slice thickness). Acquired DICOM data were imported into Slicer 3.6.3 software, where segmentation was performed. A virtual model was created and saved as an .STL file and imported into Netfabb Studio professional 4.9.5 software. Three different virtual models were created by cutting the original file along three different planes (coronal, sagittal, and axial). All models were printed with a Selective Deposition Lamination Technology Matrix 300 3D printer using 80 gsm A4 paper. The models were printed so that their cutting plane was parallel to the paper sheets creating the model. Each model (coronal, sagittal, and axial) consisted of three separate parts (∼200 sheets of paper each) that were glued together to form a final model. The skull and created models were scanned with a three-dimensional (3D) optical scanner (Breuckmann smart SCAN) and were saved as .STL files. Comparisons of the orbital walls of the skull, the virtual model, and each of the three paper models were carried out with GOM Inspect 7.5SR1 software. Deviations measured between the models analysed were presented in the form of a colour-labelled map and covered with an evenly distributed network of points automatically generated by the software. An average of 804.43 ± 19.39 points for each measurement was created. Differences measured in each point were exported as a .csv file. The results were statistically analysed using Statistica 10, with statistical significance set at p < 0.05. 
The average number of points created on the models for each measurement was 804.43 ± 19.39; however, the deviation at some of the generated points could not be calculated, and those points were excluded from further calculations. From 94% to 99% of the measured absolute deviations were <1 mm. The mean absolute deviation between the skull and virtual model was 0.15 ± 0.11 mm, between the virtual and printed models was 0.15 ± 0.12 mm, and between the skull and printed models was 0.24 ± 0.21 mm. Using the optical scanner and specialized inspection software for measurements of accuracy of the created parts is recommended, as it allows one not only to measure 2-dimensional distances between anatomical points but also to perform more clinically suitable comparisons of whole surfaces. However, it requires specialized software and a very accurate scanner in order to be useful. Threshold-based, manually corrected segmentation of orbital walls performed with 3D Slicer software is accurate enough to be used for creating a virtual model of the orbit. The accuracy of the paper-based Mcor Matrix 300 3D printer is comparable to those of other commonly used 3-dimensional printers and allows one to create precise anatomical models for clinical use. The method of dividing the model into smaller parts and sticking them together seems to be quite accurate, although we recommend it only for creating small, solid models with as few parts as possible to minimize shift associated with gluing. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
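The summary statistics above follow from a straightforward computation over the exported per-point deviations. A sketch with hypothetical values, excluding points for which no deviation could be computed (represented here as NaN, mirroring the points the authors excluded):

```python
import math

def summarize_deviations(deviations):
    """Mean and sample SD of absolute deviations, plus the fraction under
    1 mm, excluding points where no deviation could be computed (NaN)."""
    valid = [abs(d) for d in deviations if not math.isnan(d)]
    n = len(valid)
    mean = sum(valid) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in valid) / (n - 1))
    frac_under_1mm = sum(1 for v in valid if v < 1.0) / n
    return mean, sd, frac_under_1mm

# Hypothetical per-point deviations in mm, one unmeasurable point.
mean, sd, frac = summarize_deviations([0.1, -0.2, 0.15, float('nan'), 0.3])
```

In practice the per-point values would be read from the .csv exported by the inspection software; the mean ± SD pairs reported in the abstract are exactly this computation applied to each model comparison.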
Psychophysical Evaluation of Three-Dimensional Auditory Displays
NASA Technical Reports Server (NTRS)
Wightman, Frederic L. (Principal Investigator)
1995-01-01
This report describes the progress made during the first year of a three-year Cooperative Research Agreement (CRA NCC2-542). The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on two of these topics, the role of head movements and the role of echoes and reflections, were reported in the most recent Semi-Annual Progress Report (Appendix A). In the period since the last Progress Report we have been studying a third topic, the localizability of moving sources. The results of this research are described. The fidelity of a virtual auditory display is critically dependent on precise measurement of the listener's Head-Related Transfer Functions (HRTFs), which are used to produce the virtual auditory images. We continue to explore methods for improving our HRTF measurement technique. During this reporting period we compared HRTFs measured using our standard open-canal probe tube technique and HRTFs measured with the closed-canal insert microphones from the Crystal River Engineering Snapshot system.
Three-dimensional measurement system for crime scene documentation
NASA Astrophysics Data System (ADS)
Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert
2017-10-01
Three-dimensional measurement techniques (such as photogrammetry, time of flight, structure from motion or structured light) are becoming a standard in the crime scene documentation process. The usage of 3D measurement techniques provides an opportunity for a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects current standards in crime scene documentation: it is designed to perform measurement in two stages. The first stage of documentation, the most general, is prepared with a scanner with relatively low spatial resolution but a large measuring volume; it is used for documentation of the whole scene. The second stage is much more detailed: high resolution but a smaller measuring volume, for areas that require a more detailed approach. The documentation process is supervised by a specialised application, CrimeView3D, a software platform for measurement management (connecting with scanners, carrying out measurements, and automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, a virtual walk through the crime scene, and many others. We also provide the outcome of research on the metrological validation of the scanners, performed according to the VDI/VDE standard, and results from measurement sessions conducted at real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.
High resolution three-dimensional photoacoustic imaging of human finger joints in vivo
NASA Astrophysics Data System (ADS)
Xi, Lei; Jiang, Huabei
2015-08-01
We present a method for noninvasively imaging the hand joints using a three-dimensional (3D) photoacoustic imaging (PAI) system. This 3D PAI system utilizes cylindrical scanning in data collection and the virtual-detector concept in image reconstruction. The maximum lateral and axial resolutions of the PAI system are 70 μm and 240 μm, respectively. The cross-sectional photoacoustic images of a healthy joint clearly exhibited major internal structures, including phalanx and tendons, which are not resolvable with current photoacoustic imaging methods. The in vivo PAI results obtained are comparable with the corresponding 3.0 T MRI images of the finger joint. This study suggests that the proposed method has the potential to be used in early detection of joint diseases such as osteoarthritis.
Xie, Huiding; Chen, Lijun; Zhang, Jianqiang; Xie, Xiaoguang; Qiu, Kaixiong; Fu, Jijun
2015-01-01
B-Raf kinase is an important target in the treatment of cancers. In order to design and find potent B-Raf inhibitors (BRIs), 3D pharmacophore models were created using the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD). The best pharmacophore model obtained, which was used in the effective alignment of the data set, contains two acceptor atoms, three donor atoms and three hydrophobes. Subsequently, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on 39 imidazopyridine BRIs to build three-dimensional quantitative structure-activity relationship (3D QSAR) models based on both pharmacophore and docking alignments. The CoMSIA model based on the pharmacophore alignment shows the best result (q2 = 0.621, r2pred = 0.885). This 3D QSAR approach provides significant insights that are useful for designing potent BRIs. In addition, the best pharmacophore model was used for virtual screening against the NCI2000 database. The hit compounds were further filtered with molecular docking, their biological activities were predicted using the CoMSIA model, and three potential BRIs with new skeletons were obtained. PMID:26035757
Xie, Huiding; Chen, Lijun; Zhang, Jianqiang; Xie, Xiaoguang; Qiu, Kaixiong; Fu, Jijun
2015-05-29
B-Raf kinase is an important target in the treatment of cancers. In order to design and find potent B-Raf inhibitors (BRIs), 3D pharmacophore models were created using the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD). The best pharmacophore model obtained, which was used in the effective alignment of the data set, contains two acceptor atoms, three donor atoms and three hydrophobes. Subsequently, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on 39 imidazopyridine BRIs to build three-dimensional quantitative structure-activity relationship (3D QSAR) models based on both pharmacophore and docking alignments. The CoMSIA model based on the pharmacophore alignment shows the best result (q2 = 0.621, r2pred = 0.885). This 3D QSAR approach provides significant insights that are useful for designing potent BRIs. In addition, the best pharmacophore model was used for virtual screening against the NCI2000 database. The hit compounds were further filtered with molecular docking, their biological activities were predicted using the CoMSIA model, and three potential BRIs with new skeletons were obtained.
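The screening cascade described above (pharmacophore hits, then a docking filter, then activity prediction with the QSAR model) can be sketched generically. The compound names, docking-score cutoff and activity scale below are hypothetical illustrations, not values from the study:

```python
def rank_screening_hits(hits, dock_cutoff=-7.0, top_n=3):
    """Filter pharmacophore hits by docking score, then rank survivors by
    predicted activity (e.g., pIC50 from a QSAR model such as CoMSIA).

    hits: (name, docking_score, predicted_activity) tuples; more negative
    docking scores and higher predicted activities are better.
    Returns the names of the top_n candidates.
    """
    # Step 1: keep only compounds that dock at least as well as the cutoff.
    passed = [h for h in hits if h[1] <= dock_cutoff]
    # Step 2: rank the survivors by predicted activity, best first.
    ranked = sorted(passed, key=lambda h: h[2], reverse=True)
    return [h[0] for h in ranked[:top_n]]

hits = [
    ("cmpd_A", -8.2, 7.1),
    ("cmpd_B", -6.5, 8.0),   # good predicted activity, but fails docking cutoff
    ("cmpd_C", -7.9, 6.4),
    ("cmpd_D", -9.1, 7.8),
]
top = rank_screening_hits(hits)
```

The point of the two-stage filter is that each stage rejects candidates on independent grounds: the docking filter enforces geometric/energetic fit in the binding site, while the QSAR prediction ranks the remaining compounds by expected potency.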
Synfograms: a new generation of holographic applications
NASA Astrophysics Data System (ADS)
Meulien Öhlmann, Odile; Öhlmann, Dietmar; Zacharovas, Stanislovas J.
2008-04-01
The new synthetic four-dimensional printing technique (Syn4D), the Synfogram, introduces time (animation) into the spatial configuration of imprinted three-dimensional shapes. While lenticular solutions offer 2 to 9 stereoscopic images, Syn4D offers large-format, full-colour, true 3D visualization printing of 300 to 2500 frames imprinted as holographic dots. Over the past two years, Syn4D high-resolution displays have proved extremely efficient for museum presentations, engineering design, automobile prototyping and virtual presentations for advertising, as well as for portrait and fashion applications. The main advantage of Syn4D is that it offers a very easy way of using a variety of digital media, such as most 3D modelling programs, 3D scanning systems, video sequences, digital photography and tomography, as well as the Syn4D camera track system for live recording of spatial scenes changing in time. The use of a digital holographic printer in conjunction with Syn4D image acquisition and processing devices separates printing from image creation in such a way that four-dimensional printing becomes similar to conventional digital photography, where imaging and printing are usually separated in space and time. Besides making content easy to prepare, Syn4D has also developed new display and lighting solutions for trade shows, museums, POP, merchandising, etc. The introduction of Synfograms is opening new applications for real-life and virtual 4D displays. In this paper we analyse the 3D market, the properties of Synfograms and their specific applications, the problems we encountered and the solutions we found, and discuss customer demand and the need for new product development.
Pfaff, Miles J; Steinbacher, Derek M
2016-03-01
Three-dimensional analysis and planning is a powerful tool in plastic and reconstructive surgery, enabling improved diagnosis, patient education and communication, and intraoperative transfer to achieve the best possible results. Three-dimensional planning can increase efficiency and accuracy, and entails five core components: (1) analysis, (2) planning, (3) virtual surgery, (4) three-dimensional printing, and (5) comparison of planned to actual results. The purpose of this article is to provide an overview of three-dimensional virtual planning and to provide a framework for applying these systems to clinical practice. Therapeutic, V.
Web-based interactive 3D visualization as a tool for improved anatomy learning.
Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan
2009-01-01
Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain from its use in reaching their anatomical learning objectives. Several 3D vascular VR models were created using an interactive segmentation tool based on the "virtual contrast injection" method. This method allows users, with relative ease, to convert computed tomography or magnetic resonance images into vivid 3D VR movies using the OsiriX software equipped with the CMIV CTA plug-in. Once created using the segmentation tool, the image series were exported in QuickTime Virtual Reality (QTVR) format and integrated within a web framework of the Educational Virtual Anatomy (EVA) program. A total of nine QTVR movies were produced, encompassing most of the major arteries of the body. These movies were supplemented with associated information, color keys, and notes. The results indicate that, in general, students' attitudes towards the EVA program were positive when it was compared with anatomy textbooks, but not when it was compared with dissection. Additionally, knowledge tests suggest a potentially beneficial effect on learning.
Use of a Three-Dimensional Virtual Environment to Teach Drug-Receptor Interactions
Bracegirdle, Luke; McLachlan, Sarah I.H.; Chapman, Stephen R.
2013-01-01
Objective. To determine whether using 3-dimensional (3D) technology to teach pharmacy students about the molecular basis of the interactions between drugs and their targets is more effective than traditional lecture using 2-dimensional (2D) graphics. Design. Second-year students enrolled in a 4-year masters of pharmacy program in the United Kingdom were randomly assigned to attend either a 3D or 2D presentation on 3 drug targets, the β-adrenoceptor, the Na+-K+ ATPase, and the nicotinic acetylcholine receptor. Assessment. A test was administered to assess the ability of both groups of students to solve problems that required analysis of molecular interactions in 3D space. The group that participated in the 3D teaching presentation performed significantly better on the test than the group who attended the traditional lecture with 2D graphics. A questionnaire was also administered to solicit students’ perceptions about the 3D experience. The majority of students enjoyed the 3D session and agreed that the experience increased their enthusiasm for the course. Conclusions. Viewing a 3D presentation of drug-receptor interactions improved student learning compared to learning from a traditional lecture and 2D graphics. PMID:23459131
Use of a three-dimensional virtual environment to teach drug-receptor interactions.
Richardson, Alan; Bracegirdle, Luke; McLachlan, Sarah I H; Chapman, Stephen R
2013-02-12
Objective. To determine whether using 3-dimensional (3D) technology to teach pharmacy students about the molecular basis of the interactions between drugs and their targets is more effective than traditional lecture using 2-dimensional (2D) graphics. Design. Second-year students enrolled in a 4-year masters of pharmacy program in the United Kingdom were randomly assigned to attend either a 3D or 2D presentation on 3 drug targets, the β-adrenoceptor, the Na(+)-K(+) ATPase, and the nicotinic acetylcholine receptor. Assessment. A test was administered to assess the ability of both groups of students to solve problems that required analysis of molecular interactions in 3D space. The group that participated in the 3D teaching presentation performed significantly better on the test than the group who attended the traditional lecture with 2D graphics. A questionnaire was also administered to solicit students' perceptions about the 3D experience. The majority of students enjoyed the 3D session and agreed that the experience increased their enthusiasm for the course. Conclusions. Viewing a 3D presentation of drug-receptor interactions improved student learning compared to learning from a traditional lecture and 2D graphics.
Hara, Shingo; Mitsugi, Masaharu; Kanno, Takahiro; Nomachi, Akihiko; Wajima, Takehiko; Tatemoto, Yukihiro
2013-09-01
This article describes a case we experienced in which good postsurgical facial profiles were obtained for a patient with jaw deformities associated with facial asymmetry, by implementing surgical planning with SimPlant OMS. Using this method, we conducted LF1 osteotomy, intraoral vertical ramus osteotomy (IVRO), sagittal split ramus osteotomy (SSRO), mandibular constriction and mandibular border genioplasty. Not only did we obtain a class I occlusal relationship, but the complicated surgery also improved the asymmetry of the frontal view, as well as of the profile view, of the patient. The virtual operation using three-dimensional computed tomography (3D-CT) could be especially useful for the treatment of patients with jaw deformities associated with facial asymmetry.
Hara, Shingo; Mitsugi, Masaharu; Kanno, Takahiro; Nomachi, Akihiko; Wajima, Takehiko; Tatemoto, Yukihiro
2013-01-01
This article describes a case we experienced in which good postsurgical facial profiles were obtained for a patient with jaw deformities associated with facial asymmetry, by implementing surgical planning with SimPlant OMS. Using this method, we conducted LF1 osteotomy, intraoral vertical ramus osteotomy (IVRO), sagittal split ramus osteotomy (SSRO), mandibular constriction and mandibular border genioplasty. Not only did we obtain a class I occlusal relationship, but the complicated surgery also improved the asymmetry of the frontal view, as well as of the profile view, of the patient. The virtual operation using three-dimensional computed tomography (3D-CT) could be especially useful for the treatment of patients with jaw deformities associated with facial asymmetry. PMID:23907678
3D augmented reality with integral imaging display
NASA Astrophysics Data System (ADS)
Shen, Xin; Hua, Hong; Javidi, Bahram
2016-06-01
In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.
Virtual arthroscopy of the visible human female temporomandibular joint.
Ishimaru, T; Lew, D; Haller, J; Vannier, M W
1999-07-01
This study was designed to obtain views of the temporomandibular joint (TMJ) by means of computed arthroscopic simulation (virtual arthroscopy) using three-dimensional (3D) processing. Volume renderings of the TMJ from very thin cryosection slices of the Visible Human Female were taken off the Internet. Analyze(AVW) software (Biomedical Imaging Resource, Mayo Foundation, Rochester, MN) on a Silicon Graphics 02 workstation (Mountain View, CA) was then used to obtain 3D images and allow the navigation "fly-through" of the simulated joint. Good virtual arthroscopic views of the upper and lower joint spaces of both TMJs were obtained by fly-through simulation from the lateral and endaural sides. It was possible to observe the presence of a partial defect in the articular disc and an osteophyte on the condyle. Virtual arthroscopy provided visualization of regions not accessible to real arthroscopy. These results indicate that virtual arthroscopy will be a new technique to investigate the TMJ of the patient with TMJ disorders in the near future.
Lin, Wei-Shao; Harris, Bryan T; Phasuk, Kamolphob; Llop, Daniel R; Morton, Dean
2018-02-01
This clinical report describes a digital workflow using the virtual smile design approach augmented with a static 3-dimensional (3D) virtual patient with photorealistic appearance to restore maxillary central incisors by using computer-aided design and computer-aided manufacturing (CAD-CAM) monolithic lithium disilicate ceramic veneers. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Satarasinghe, Praveen; Hamilton, Kojo D; Tarver, Michael J; Buchanan, Robert J; Koltz, Michael T
2018-04-17
Utilization of pedicle screws (PS) for spine stabilization is common in spinal surgery. With its reliance on visual inspection of anatomical landmarks prior to screw placement, the free-hand technique requires a high level of surgeon skill and precision. Three-dimensional (3D), computer-assisted virtual neuronavigation improves the precision of PS placement and minimizes procedural steps. Twenty-three patients with degenerative, traumatic, or neoplastic pathologies received treatment via a novel three-step PS technique that utilizes a navigated power driver in combination with virtual screw technology. (1) Following visualization of the neuroanatomy using intraoperative CT, a navigated 3-mm matchstick drill bit was inserted at an anatomical entry point, with a screen projection showing a virtual screw. (2) A navigated Stryker Cordless Driver with an appropriate tap was used to access the vertebral body through a pedicle, with a screen projection again showing a virtual screw. (3) A navigated Stryker Cordless Driver with an actual screw was used, with a screen projection showing the same virtual screw. One hundred and forty-four consecutive screws were inserted using this three-step, navigated driver, virtual screw technique. Only 1 screw needed intraoperative revision after insertion, amounting to a 0.69% revision rate. One hundred percent of patients had intraoperative CT reconstructed images taken to confirm hardware placement. Pedicle screw placement utilizing Stryker-Ziehm neuronavigation virtual screw technology with a three-step, navigated power drill technique is safe and effective.
Virtual reality in radiology: virtual intervention
NASA Astrophysics Data System (ADS)
Harreld, Michael R.; Valentino, Daniel J.; Duckwiler, Gary R.; Lufkin, Robert B.; Karplus, Walter J.
1995-04-01
Intracranial aneurysms are the primary cause of non-traumatic subarachnoid hemorrhage. Morbidity and mortality remain high even with current endovascular intervention techniques. It is presently impossible to identify which aneurysms will grow and rupture; however, hemodynamics are thought to play an important role in aneurysm development. With this in mind, we have simulated blood flow in laboratory animals using three-dimensional computational fluid dynamics software. The data output from these simulations is three-dimensional, complex, and transient. Visualization of 3D flow structures with a standard 2D display is cumbersome and may be better performed using a virtual reality system. We are developing a VR-based system for visualization of the computed blood flow and stress fields. This paper presents the progress to date and future plans for our clinical VR-based intervention simulator. The ultimate goal is to develop a software system that will be able to accurately model an aneurysm detected on clinical angiography, visualize this model in virtual reality, predict its future behavior, and give insight into the type of treatment necessary. An associated database will give historical and outcome information on prior aneurysms (including dynamic, structural, and categorical data) that will be matched to any current case and assist in treatment planning (e.g., natural history vs. treatment risk, surgical vs. endovascular treatment risks, cure prediction, complication rates).
A Second Life for eHealth: Prospects for the Use of 3-D Virtual Worlds in Clinical Psychology
Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe
2008-01-01
The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed. PMID:18678557
Flügge, Tabea Viktoria; Nelson, Katja; Schmelzeisen, Rainer; Metzger, Marc Christian
2013-08-01
To present an efficient workflow for the production of implant drilling guides using virtual planning tools. For this purpose, laser surface scanning, cone beam computed tomography, computer-aided design and manufacturing, and 3-dimensional (3D) printing were combined. Intraoral optical impressions (iTero, Align Technologies, Santa Clara, CA) and digital 3D radiographs (cone beam computed tomography) were performed at the first consultation of 1 exemplary patient. With image processing techniques, the intraoral surface data, acquired using an intraoral scanner, and radiologic 3D data were fused. The virtual implant planning process (using virtual library teeth) and the in-office production of the implant drilling guide were performed after only 1 clinical consultation of the patient. Implant surgery with a drilling guide produced by computer-aided design and manufacturing was performed during the second consultation. The production of a scan prosthesis and multiple preoperative consultations of the patient were unnecessary. The presented procedure offers another step in facilitating the production of drilling guides in dental implantology. Four main advantages are realized with this procedure. First, no additional scan prosthesis is needed. Second, data acquisition can be performed during the first consultation. Third, the virtual planning is directly transferred to the drilling guide without a loss of accuracy. Finally, the treatment cost and time required are reduced with this facilitated production process. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Three-dimensional temporomandibular joint modeling and animation.
Cascone, Piero; Rinaldi, Fabrizio; Pagnoni, Mario; Marianetti, Tito Matteo; Tedaldi, Massimiliano
2008-11-01
The three-dimensional (3D) temporomandibular joint (TMJ) model derives from a study of the cranium by 3D virtual reality and mandibular function animation. The starting point of the project is high-fidelity digital acquisition of a human dry skull. The cooperation between the maxillofacial surgeon and the cartoonist enables the reconstruction of the fibroconnective components of the TMJ that are the keystone for comprehension of the anatomic and functional features of the mandible. The skeletal model is customized with the apposition of the temporomandibular ligament, the articular disk, the retrodiskal tissue, and the medial and the lateral ligament of the disk. The simulation of TMJ movement is the result of the integration of up-to-date data on the biomechanical restrictions. The 3D TMJ model is an easy-to-use application that may be run on a personal computer for the study of the TMJ and its biomechanics.
Dental impressions using 3D digital scanners: virtual becomes reality.
Birnbaum, Nathan S; Aaronson, Heidi B
2008-10-01
The technologies that have made the use of three-dimensional (3D) digital scanners an integral part of many industries for decades have been improved and refined for application to dentistry. Since the introduction of the first dental impressioning digital scanner in the 1980s, development engineers at a number of companies have enhanced the technologies and created in-office scanners that are increasingly user-friendly and able to produce precisely fitting dental restorations. These systems are capable of capturing 3D virtual images of tooth preparations, from which restorations may be fabricated directly (ie, CAD/CAM systems) or fabricated indirectly (ie, dedicated impression scanning systems for the creation of accurate master models). The use of these products is increasing rapidly around the world and presents a paradigm shift in the way in which dental impressions are made. Several of the leading 3D dental digital scanning systems are presented and discussed in this article.
Vectors in Use in a 3D Juggling Game Simulation
ERIC Educational Resources Information Center
Kynigos, Chronis; Latsi, Maria
2006-01-01
The new representations enabled by the educational computer game the "Juggler" can place vectors in a central role both for controlling and measuring the behaviours of objects in a virtual environment simulating motion in three-dimensional spaces. The mathematical meanings constructed by 13-year-old students in relation to vectors as…
Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images
NASA Astrophysics Data System (ADS)
Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka
2006-03-01
We have developed a novel system that provides total support for assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty of perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about resorption shape. However, there has been little work on assisting assessment of the disease by 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system that measures the three-dimensional shape and spread of resorption. It has the following functions: (1) measures the depth of resorption by virtually simulating probing in the 3-D CT images, an image processing approach that does not suffer obstruction by teeth on the inter-proximal sides and allows much smaller measurement intervals than the conventional examination; (2) visualizes the disposition of the depth by movies and graphs; (3) produces a quantitative index and intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) calculates the volume of resorption as another severity index in the inter-radicular region and the region outside it. Experimental results in two cases of 3-D dental CT images and a comparison of the results with the clinical examination results and experts' measurements of the corresponding patients confirmed that the proposed system gives satisfying results, including 0.1 to 0.6 mm of resorption measurement (probing) error and fairly intuitive presentation of measurement and calculation results.
NASA Astrophysics Data System (ADS)
Stark, David; Yin, Lin; Albright, Brian; Guo, Fan
2017-10-01
The often cost-prohibitive nature of three-dimensional (3D) kinetic simulations of laser-plasma interactions has resulted in heavy use of two-dimensional (2D) simulations to extract physics. However, depending on whether the polarization is modeled as 2D-S or 2D-P (laser polarization in and out of the simulation plane, respectively), different results arise. In laser-ion acceleration in the transparency regime, VPIC particle-in-cell simulations show that 2D-S and 2D-P capture different physics that appears in 3D simulations. The electron momentum distribution is virtually two-dimensional in 2D-P, unlike the more isotropic distributions in 2D-S and 3D, leading to greater heating in the simulation plane. As a result, target expansion time scales and density thresholds for the onset of relativistic transparency differ dramatically between 2D-S and 2D-P. The artificial electron heating in 2D-P exaggerates the effectiveness of target-normal sheath acceleration (TNSA) into its dominant acceleration mechanism, whereas 2D-S and 3D both have populations accelerated preferentially during transparency to higher energies than those of TNSA. Funded by the LANL Directed Research and Development Program.
[Subjective sensations indicating simulator sickness and fatigue after exposure to virtual reality].
Malińska, Marzena; Zuzewicz, Krystyna; Bugajska, Joanna; Grabowski, Andrzej
2014-01-01
The study assessed the incidence and intensity of subjective symptoms indicating simulator sickness among persons with no inclination to motion sickness, immersed in virtual reality (VR) by watching an hour-long movie in the stereoscopic (three-dimensional, 3D) and non-stereoscopic (two-dimensional, 2D) versions and after an hour-long training using virtual reality, called sVR. The sample comprised 20 healthy young men with no inclination to motion sickness. The participants' subjective sensations indicating symptoms of simulator sickness were assessed using a questionnaire completed by the participants immediately, 20 min and 24 h following the test. Grandjean's scale was used to assess fatigue and mood. The symptoms were observed immediately after the exposure to sVR. Their intensity was higher than after watching the 2D and 3D movies. A significant relationship was found between eye pain and the type of exposure (2D, 3D and sVR) (χ2(2) = 6.225, p ≤ 0.05); a relationship between excessive perspiration and the exposure to the 3D movie and sVR was also noted (χ2(1) = 9.173, p ≤ 0.01). Some symptoms were still observed 20 min after exposure to sVR. The comparison of Grandjean's scale results before and after the training using sVR showed significant differences in 11 out of 14 subscales. Before and after exposure to the 3D movie, the differences were significant only for the "tired-fatigued" subscale (Z = 2.501, p ≤ 0.012), in favor of "fatigued". Based on the subjective sensation of discomfort after watching 2D and 3D movies, it is impossible to predict symptoms of simulator sickness after training using sVR.
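The reported associations between symptoms and exposure type rest on Pearson chi-square tests of independence. A minimal sketch of such a test, with hypothetical eye-pain counts (the abstract does not give the raw contingency tables):

```python
# Pearson chi-square test of independence between symptom occurrence
# (eye pain: yes/no) and exposure type (2D, 3D, sVR). The counts below
# are hypothetical stand-ins for the study's data.

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    df = (len(rows) - 1) * (len(cols) - 1)
    return stat, df

# Hypothetical counts: row 0 = participants reporting eye pain,
# row 1 = not reporting, after 2D, 3D and sVR exposure (n = 20 each).
table = [[2, 5, 11],
         [18, 15, 9]]
stat, df = chi_square(table)
print(f"chi2({df}) = {stat:.3f}")
```

The statistic is then compared against the chi-square distribution with `df` degrees of freedom to obtain the p-value quoted in the abstract.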
Superimposition of 3-dimensional cone-beam computed tomography models of growing patients
Cevidanes, Lucia H. C.; Heymann, Gavin; Cornelis, Marie A.; DeClerck, Hugo J.; Tulloch, J. F. Camilla
2009-01-01
Introduction The objective of this study was to evaluate a new method for superimposition of 3-dimensional (3D) models of growing subjects. Methods Cone-beam computed tomography scans were taken before and after Class III malocclusion orthopedic treatment with miniplates. Three observers independently constructed 18 3D virtual surface models from cone-beam computed tomography scans of 3 patients. Separate 3D models were constructed for soft-tissue, cranial base, maxillary, and mandibular surfaces. The anterior cranial fossa was used to register the 3D models from before and after treatment (about 1 year of follow-up). Results Three-dimensional overlays of superimposed models and 3D color-coded displacement maps allowed visual and quantitative assessment of growth and treatment changes. The range of interobserver errors for each anatomic region was 0.4 mm for the zygomatic process of the maxilla, chin, condyles, posterior border of the rami, and lower border of the mandible, and 0.5 mm for the anterior maxilla soft-tissue upper lip. Conclusions Our results suggest that this method is a valid and reproducible assessment of treatment outcomes for growing subjects. This technique can be used to identify maxillary and mandibular positional changes and bone remodeling relative to the anterior cranial fossa. PMID:19577154
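The 3D color-coded displacement maps described above amount to per-vertex distances between registered surface models. A minimal sketch, using tiny synthetic point clouds in place of the real CBCT surface models:

```python
import math

# Hedged sketch of a color-coded displacement map: after both models are
# registered on the anterior cranial base, the displacement at each vertex
# of the "before" surface is taken as the distance to the closest point on
# the "after" surface. The point clouds below are synthetic stand-ins.

def displacement_map(before, after):
    """Closest-point distance from each 'before' vertex to the 'after' cloud."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return [min(dist(p, q) for q in after) for p in before]

before = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
after = [(0.0, 0.0, 0.4), (1.0, 0.3, 0.0)]
disp = displacement_map(before, after)
print(disp)   # per-vertex displacements, e.g. mapped to a color scale
```

In practice the distances would be computed over thousands of mesh vertices (with a spatial index for speed) and mapped to a color ramp for visual assessment.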
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
Exploring Approaches to Teaching in Three-Dimensional Virtual Worlds
ERIC Educational Resources Information Center
Englund, Claire
2017-01-01
Purpose: The purpose of this paper is to explore how teachers' approaches to teaching and conceptions of teaching and learning with educational technology influence the implementation of three-dimensional virtual worlds (3DVWs) in health care education. Design/methodology/approach: Data were collected through thematic interviews with eight…
Reduced Mental Load in Learning a Motor Visual Task with Virtual 3D Method
ERIC Educational Resources Information Center
Dan, A.; Reiner, M.
2018-01-01
Distance learning is expanding rapidly, fueled by the novel technologies for shared recorded teaching sessions on the Web. Here, we ask whether 3D stereoscopic (3DS) virtual learning environment teaching sessions are more compelling than typical two-dimensional (2D) video sessions and whether this type of teaching results in superior learning. The…
Three-Dimensional Modeling of Fracture Clusters in Geothermal Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghassemi, Ahmad
The objective of this project is to develop a 3-D numerical model for simulating mode I, II, and III (tensile, shear, and out-of-plane) propagation of multiple fractures and fracture clusters to accurately predict geothermal reservoir stimulation using the virtual multi-dimensional internal bond (VMIB). Effective development of enhanced geothermal systems can significantly benefit from improved modeling of hydraulic fracturing. In geothermal reservoirs, where the temperature can reach or exceed 350°C, thermal and poro-mechanical processes play an important role in fracture initiation and propagation. In this project, hydraulic fracturing of hot subsurface rock mass will be numerically modeled by extending the virtual multi-dimensional internal bond theory and implementing it in WARP3D, a three-dimensional finite element code for solid mechanics. The new constitutive model, along with the poro-thermoelastic computational algorithms, will allow modeling of the initiation and propagation of clusters of fractures, and extension of pre-existing fractures. The work will enable the industry to realistically model stimulation of geothermal reservoirs. The project addresses the Geothermal Technologies Office objective of accurately predicting geothermal reservoir stimulation (GTO technology priority item).
The project goal will be attained by: (i) development of the VMIB method for application to 3D analysis of fracture clusters; (ii) development of poro- and thermoelastic material subroutines for use in the 3D finite element code WARP3D; (iii) implementation of VMIB and the new material routines in WARP3D to enable simulation of clusters of fractures while accounting for the effects of pore pressure, thermal stress, and inelastic deformation; (iv) simulation of 3D fracture propagation, coalescence, and formation of clusters, and comparison with laboratory compression tests; and (v) application of the model to interpretation of injection experiments (planned by our industrial partner) with reference to the impact of variations in injection rate and temperature, rock properties, and in-situ stress.
Kim, J; Lee, C; Chong, Y
2009-01-01
Influenza endonucleases have emerged as an attractive target of antiviral therapy for influenza infection. With the purpose of designing a novel antiviral agent with enhanced biological activity against influenza endonuclease, a three-dimensional quantitative structure-activity relationship (3D-QSAR) model was generated based on 34 influenza endonuclease inhibitors. The comparative molecular similarity index analysis (CoMSIA) with a steric, electrostatic and hydrophobic (SEH) model showed the best correlative and predictive capability (q² = 0.763, r² = 0.969 and F = 174.785), which provided a pharmacophore composed of an electronegative moiety as well as a bulky hydrophobic group. The CoMSIA model was used as a pharmacophore query in a UNITY search of the ChemDiv compound library to give virtual active compounds. The 3D-QSAR model was then used to predict the activity of the selected compounds, which identified three compounds as the most likely inhibitor candidates.
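The q² value quoted for the CoMSIA model is a leave-one-out cross-validated statistic, q² = 1 − PRESS/SS, where PRESS sums squared leave-one-out prediction errors and SS sums squared deviations of the activities from their mean. A minimal sketch with a toy one-descriptor linear model standing in for the actual CoMSIA fields:

```python
# Hedged sketch of leave-one-out q^2. The simple least-squares line and
# the activity data below are illustrative only, not the paper's model.

def loo_q2(xs, ys):
    n = len(ys)
    press = 0.0
    for i in range(n):
        # Fit a least-squares line to all points except i ...
        tx = [x for j, x in enumerate(xs) if j != i]
        ty = [y for j, y in enumerate(ys) if j != i]
        mx, my = sum(tx) / len(tx), sum(ty) / len(ty)
        b = sum((x - mx) * (y - my) for x, y in zip(tx, ty)) / \
            sum((x - mx) ** 2 for x in tx)
        a = my - b * mx
        # ... then predict the held-out activity and accumulate the error.
        press += (ys[i] - (a + b * xs[i])) ** 2
    mean_y = sum(ys) / n
    ss = sum((y - mean_y) ** 2 for y in ys)
    return 1.0 - press / ss

xs = [0.0, 1.0, 2.0, 3.0, 4.0]   # hypothetical descriptor values
ys = [0.1, 1.1, 1.9, 3.2, 3.9]   # hypothetical activities
print(round(loo_q2(xs, ys), 3))
```

A q² near 1 indicates that each compound's activity is well predicted by a model trained without it; values above roughly 0.5 are conventionally taken as evidence of predictive ability.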
Resnick, C M; Dang, R R; Glick, S J; Padwa, B L
2017-03-01
Three-dimensional (3D) soft tissue prediction is replacing two-dimensional analysis in planning for orthognathic surgery. The accuracy of different computational models to predict soft tissue changes in 3D, however, is unclear. A retrospective pilot study was implemented to assess the accuracy of Dolphin 3D software in making these predictions. Seven patients who had a single-segment Le Fort I osteotomy and had preoperative (T0) and >6-month postoperative (T1) cone beam computed tomography (CBCT) scans and 3D photographs were included. The actual skeletal change was determined by subtracting the T0 from the T1 CBCT. 3D photographs were overlaid onto the T0 CBCT and virtual skeletal movements equivalent to the achieved repositioning were applied using the Dolphin 3D planner. A 3D soft tissue prediction (TP) was generated and differences between the TP and T1 images (error) were measured at 14 points and at the nasolabial angle. A mean linear prediction error of 2.91 ± 2.16 mm was found. The mean error at the nasolabial angle was 8.1 ± 5.6°. In conclusion, the ability to accurately predict 3D soft tissue changes after Le Fort I osteotomy using Dolphin 3D software is limited. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
3D geospatial visualizations: Animation and motion effects on spatial objects
NASA Astrophysics Data System (ADS)
Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos
2018-02-01
Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step beyond, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be attributed to 3D models. However, major GIS-based functionalities combined with the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) and motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.
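Motion of a 3D model along a dynamically defined georeferenced path, as described above, reduces to interpolating a pose between time-stamped waypoints. A minimal sketch (in Python rather than the JavaScript/WebGL stack the prototype uses; the waypoints are made up):

```python
# Hedged sketch of driving a 3D model along a georeferenced path, in the
# spirit of sampled-position animation in WebGL globe frameworks: given
# (time, lon, lat, height) waypoints, the model position at any time t is
# linearly interpolated between the bracketing samples.

def position_at(waypoints, t):
    """Linear interpolation over time-stamped (lon, lat, height) waypoints."""
    if t <= waypoints[0][0]:
        return waypoints[0][1:]
    for (t0, *p0), (t1, *p1) in zip(waypoints, waypoints[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return tuple(a + u * (b - a) for a, b in zip(p0, p1))
    return waypoints[-1][1:]          # clamp past the final waypoint

path = [(0.0, 23.0, 40.5, 100.0),    # (t, lon, lat, height) - illustrative
        (10.0, 23.1, 40.6, 150.0),
        (20.0, 23.3, 40.6, 120.0)]
print(position_at(path, 5.0))
```

A renderer would query this every frame and also derive an orientation from the path tangent, so the model faces its direction of travel.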
Bogers, Hein; Rifouna, Maria S; Koning, Anton H J; Husen-Ebbinge, Margreet; Go, Attie T J I; van der Spek, Peter J; Steegers-Theunissen, Régine P M; Steegers, Eric A P; Exalto, Niek
2018-05-01
Early detection of fetal sex is becoming more popular. The aim of this study was to evaluate the accuracy of fetal sex determination in the first trimester using 3D virtual reality. Three-dimensional (3D) US volumes were obtained in 112 pregnancies between 9 and 13 weeks of gestational age. They were projected offline as holograms in the BARCO I-Space, and subsequently the genital tubercle angle was measured. Separately, the 3D US aspect of the genitalia was examined for a male or female appearance. Although a significant difference in genital tubercle angles was found between male and female fetuses, it did not result in a reliable prediction of fetal sex. Correct sex prediction based on first-trimester genital appearance was at best 56%. Our results indicate that accurate determination of fetal sex in the first trimester of pregnancy is not possible, even using an advanced 3D US technique. © 2017 Wiley Periodicals, Inc.
An efficient 3D R-tree spatial index method for virtual geographic environments
NASA Astrophysics Data System (ADS)
Zhu, Qing; Gong, Jun; Zhang, Yeting
A three-dimensional (3D) spatial index is required for real-time applications of integrated organization and management in virtual geographic environments of above-ground, underground, indoor and outdoor objects. As one of the most promising methods, the R-tree spatial index has received increasing attention in 3D geospatial database management. Since existing R-tree methods are usually limited by low efficiency, owing to the critical overlap of sibling nodes and the uneven size of nodes, this paper introduces the k-means clustering method and employs the 3D overlap volume, 3D coverage volume and the minimum bounding box shape value of nodes as the integrative grouping criteria. A new spatial cluster grouping algorithm and R-tree insertion algorithm are then proposed. Experimental analysis of comparative spatial indexing performance shows that with the new method the overlap of R-tree sibling nodes is minimized drastically and a balance in the volumes of the nodes is maintained.
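Two of the grouping criteria named above can be sketched directly. The abstract does not give the paper's exact shape-value definition, so the cube-likeness measure below is an illustrative stand-in:

```python
# Hedged sketch of two grouping criteria for 3D R-tree nodes: the overlap
# volume between two minimum bounding boxes (MBBs), and an MBB shape value.
# The shape value here (volume over the cube of the longest edge, 1.0 for
# a perfect cube) is an assumed, illustrative definition.

def overlap_volume(a, b):
    """Overlap volume of two MBBs, each given as (min_xyz, max_xyz)."""
    vol = 1.0
    for lo_a, hi_a, lo_b, hi_b in zip(a[0], a[1], b[0], b[1]):
        side = min(hi_a, hi_b) - max(lo_a, lo_b)
        if side <= 0:
            return 0.0          # boxes are disjoint along this axis
        vol *= side
    return vol

def shape_value(box):
    """Cube-likeness of an MBB: 1.0 for a cube, smaller for elongated boxes."""
    edges = [hi - lo for lo, hi in zip(box[0], box[1])]
    return (edges[0] * edges[1] * edges[2]) / max(edges) ** 3

a = ((0, 0, 0), (2, 2, 2))
b = ((1, 1, 1), (3, 3, 3))
print(overlap_volume(a, b), shape_value(a))
```

In the proposed method, such per-node scores would feed a k-means-based grouping step that assigns entries to sibling nodes so as to minimize total overlap and keep node volumes balanced.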
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present methods to process "digital holograms" for Internet transmission, along with results.
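The phase-shift interferometry mentioned above recovers a hologram's phase from several exposures taken with known reference-phase shifts. A minimal sketch of the standard four-step variant for a single pixel (the amplitudes and phase are simulated values):

```python
import math

# Hedged sketch of four-step phase-shifting interferometry: four
# interferograms recorded with reference phase shifts of 0, pi/2, pi and
# 3*pi/2 recover the object phase at each pixel via
#   phi = atan2(I4 - I2, I1 - I3).

def object_phase(i1, i2, i3, i4):
    return math.atan2(i4 - i2, i1 - i3)

# Simulate one pixel: object amplitude a, reference amplitude r, phase phi.
a, r, phi = 1.0, 1.0, 0.7
frames = [a * a + r * r + 2 * a * r * math.cos(phi + delta)
          for delta in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
print(round(object_phase(*frames), 6))
```

Applied pixel-by-pixel over the four captured frames, this yields the complex object field ("digital hologram") that can then be stored, compressed and transmitted.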
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, M; Kida, S; Masutani, Y
2014-06-01
Purpose: In a previous study, we developed a time-ordered four-dimensional (4D) cone-beam CT (CBCT) technique to visualize non-periodic organ motion, such as the peristaltic motion of gastrointestinal organs and adjacent areas, using a half-scan reconstruction method. One important obstacle was truncation of projections caused by the asymmetric location of the flat-panel detector (FPD), positioned to cover the whole abdomen or pelvis in one rotation. In this study, we propose image mosaicing to extend the projection data and make it possible to reconstruct a full field-of-view (FOV) image using half-scan reconstruction. Methods: The projections of prostate cancer patients were acquired using the X-ray Volume Imaging system (XVI, version 4.5) on the Synergy linear accelerator system (Elekta, UK). The XVI system has three FOV options, S, M and L; the M FOV was chosen for pelvic CBCT acquisition, with the FPD panel offset by 11.5 cm. The method to produce extended projections consists of three main steps: First, a normal three-dimensional (3D) reconstruction containing the whole pelvis was implemented using the real projections. Second, virtual projections were produced by a reprojection process from the reconstructed 3D image. Third, the real and virtual projections at each angle were combined into one extended mosaic projection. Then, 4D CBCT images were reconstructed using our in-house reconstruction software based on the Feldkamp, Davis and Kress algorithm. The angular range of each reconstruction phase in the 4D reconstruction was 180 degrees, and the range moved as time progressed. Results: Projection data were successfully extended without a discontinuous boundary between real and virtual projections. Using mosaic projections, 4D CBCT image sets were reconstructed without artifacts caused by the truncation, and thus the whole pelvis was clearly visible. Conclusion: The present method provides extended projections which contain the whole pelvis.
The presented reconstruction method also enables time-ordered 4D CBCT reconstruction of organs with non-periodic motion, with a full FOV and without projection-truncation artifacts. This work was partly supported by the JSPS Core-to-Core Program (No. 23003) and by JSPS KAKENHI 24234567.
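The third mosaicing step above, combining a truncated real projection with a virtual reprojection, can be sketched as a simple array overlay (the array sizes and lateral offset are illustrative, not the XVI geometry):

```python
# Hedged sketch of mosaicing a truncated real projection with a virtual
# projection reprojected from the prior 3D reconstruction: the real data
# is overlaid onto the full-FOV virtual projection at its lateral offset,
# yielding one extended projection per gantry angle.

def mosaic(real, virtual, offset):
    """Overlay the real projection onto the virtual one at a column offset."""
    extended = [row[:] for row in virtual]
    for i, row in enumerate(real):
        extended[i][offset:offset + len(row)] = row   # real data wins where present
    return extended

virtual = [[0.0] * 8 for _ in range(4)]   # full-FOV reprojection (stand-in values)
real = [[1.0] * 5 for _ in range(4)]      # truncated detector readout (stand-in)
extended = mosaic(real, virtual, 3)
print(extended[0])
```

In the actual pipeline a blending step at the seam would typically be added so that the half-scan filtered backprojection sees no discontinuity between the two data sources.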
Three-Dimensional Tactical Display and Method for Visualizing Data with a Probability of Uncertainty
2009-08-03
replacing the more complex and less intuitive displays presently provided in such contexts as commercial aircraft, marine vehicles, and air traffic...free space-virtual reality, 3-D image display system which is enabled by using a unique form of Aerogel as the primary display media. A preferred...generates and displays a real 3-D image in the Aerogel matrix. [0014] U.S. Patent No. 6,285,317, issued September 4, 2001, to Ong, discloses a
Glasses-free large size high-resolution three-dimensional display based on the projector array
NASA Astrophysics Data System (ADS)
Sang, Xinzhu; Wang, Peng; Yu, Xunbo; Zhao, Tianqi; Gao, Xing; Xing, Shujun; Yu, Chongxiu; Xu, Daxiong
2014-11-01
To realize natural three-dimensional (3D) video display without eyewear, similar to real life, a huge amount of 3D spatial information is normally required to increase the number of views and provide smooth motion parallax. However, the minimum 3D information needed by the eyes should be used, to reduce the requirements on display devices and processing time. For 3D display with smooth motion parallax similar to a holographic stereogram, the size of the virtual viewing slit should be smaller than the pupil size of the eye at the largest viewing distance. To increase the resolution, two glasses-free 3D display systems, rear- and front-projection, are presented based on space multiplexing with a micro-projector array and specially designed 3D diffuse screens with sizes above 1.8 m × 1.2 m. The displayed clear depths are larger than 1.5 m. The flexibility in terms of digitized recording and reconstruction based on the 3D diffuse screen relieves the limitations of conventional 3D display technologies, enabling fully continuous, natural 3D display. In the display system, aberration is well suppressed and low crosstalk is achieved.
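The viewing-slit constraint stated above implies a lower bound on the number of projected views: slits no wider than the eye's pupil must tile the entire viewing zone. A minimal sketch with assumed dimensions (the actual system parameters are not given in the abstract):

```python
import math

# Hedged sketch of the design constraint: for smooth motion parallax, the
# virtual viewing slit at the largest viewing distance must be narrower
# than the eye pupil, which bounds from below the number of views (and
# hence micro-projectors) needed to cover the viewing zone.
# All numbers below are illustrative assumptions.

def min_views(viewing_zone_mm, pupil_mm):
    """Slits no wider than the pupil must tile the whole viewing zone."""
    return math.ceil(viewing_zone_mm / pupil_mm)

viewing_zone = 600.0   # assumed lateral extent of the viewing zone, mm
pupil = 4.0            # assumed pupil diameter, mm
print(min_views(viewing_zone, pupil))
```

This is why such systems rely on space multiplexing across a projector array: each projector contributes a narrow angular slice of the overall viewing zone.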
Investigating Various Application Areas of Three-Dimensional Virtual Worlds for Higher Education
ERIC Educational Resources Information Center
Ghanbarzadeh, Reza; Ghapanchi, Amir Hossein
2018-01-01
Three-dimensional virtual worlds (3DVWs) have been adopted extensively in the education sector worldwide, and there has been remarkable growth in the application of these environments for distance learning. A wide variety of universities and educational organizations across the world have utilized this technology for their regular learning and…
Jansen, Jesper; Schreurs, Ruud; Dubois, Leander; Maal, Thomas J J; Gooris, Peter J J; Becking, Alfred G
2018-04-01
Advanced three-dimensional (3D) diagnostics and preoperative planning are the first steps in computer-assisted surgery (CAS). They are an integral part of the workflow, and allow the surgeon to adequately assess the fracture and to perform virtual surgery to find the optimal implant position. The goal of this study was to evaluate the accuracy and predictability of 3D diagnostics and preoperative virtual planning without intraoperative navigation in orbital reconstruction. In 10 cadaveric heads, 19 complex orbital fractures were created. First, all fractures were reconstructed without preoperative planning (control group) and at a later stage the reconstructions were repeated with the help of preoperative planning. Preformed titanium mesh plates were used for the reconstructions by two experienced oral and maxillofacial surgeons. The preoperative virtual planning was easily accessible for the surgeon during the reconstruction. Computed tomographic scans were obtained before and after creation of the orbital fractures and postoperatively. Using a paired t-test, implant positioning accuracy (translation and rotations) of both groups were evaluated by comparing the planned implant position with the position of the implant on the postoperative scan. Implant position improved significantly (P < 0.05) for translation, yaw and roll in the group with preoperative planning (Table 1). Pitch did not improve significantly (P = 0.78). The use of 3D diagnostics and preoperative planning without navigation in complex orbital wall fractures has a positive effect on implant position. This is due to a better assessment of the fracture, the possibility of virtual surgery and because the planning can be used as a virtual guide intraoperatively. The surgeon has more control in positioning the implant in relation to the rim and other bony landmarks. Copyright © 2018 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Gaming in Second Life via Scratch4SL: Engaging High School Students in Programming Courses
ERIC Educational Resources Information Center
Pellas, Nikolaos; Peroutseas, Efstratios
2016-01-01
While pedagogical and technological affordances of three-dimensional (3D) multiuser virtual worlds in various educational disciplines are largely well-known, a study about their effect on high school students' engagement in introductory programming courses is still lacking. This case study presents students' opinions about their participation in a…
Re-Dimensional Thinking in Earth Science: From 3-D Virtual Reality Panoramas to 2-D Contour Maps
ERIC Educational Resources Information Center
Park, John; Carter, Glenda; Butler, Susan; Slykhuis, David; Reid-Griffin, Angelia
2008-01-01
This study examines the relationship of gender and spatial perception on student interactivity with contour maps and non-immersive virtual reality. Eighteen eighth-grade students elected to participate in a six-week activity-based course called "3-D GeoMapping." The course included nine days of activities related to topographic mapping.…
Qiu, L L; Li, S; Bai, Y X
2016-06-01
To develop surgical templates for orthodontic miniscrew implantation based on cone-beam CT (CBCT) three-dimensional (3D) images, and to evaluate the safety and stability of implantation guided by the templates. DICOM data obtained from patients who had undergone CBCT scans were processed using Mimics software, and 3D images of the teeth and maxillary bone were acquired. Meanwhile, 3D images of the miniscrews were acquired using Solidworks software and processed with Mimics software. The virtual position of the miniscrews was determined based on the 3D images of teeth, bone, and miniscrews. 3D virtual templates were designed according to the virtual implantation plans. STL files were output and the real templates were fabricated with a stereolithography apparatus (SLA). Postoperative CBCT scans were used to evaluate implantation safety, and the stability of the miniscrews was investigated. All the templates were positioned accurately and kept stable throughout the implantation process. No root damage was found. The deviations were (1.73±0.65) mm at the corona and (1.28±0.82) mm at the apex, respectively. The stability of the miniscrews was fairly good. Surgical templates for miniscrew implantation can be acquired based on 3D CBCT images and fabricated with SLA. Implantation guided by these templates was safe and stable.
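The coronal and apical deviations reported above are, in essence, point-to-point distances between the planned and achieved miniscrew positions. A minimal sketch (the coordinates are invented for illustration):

```python
import numpy as np

def deviation_mm(planned: np.ndarray, actual: np.ndarray) -> float:
    """Euclidean distance between planned and postoperative positions (mm)."""
    return float(np.linalg.norm(planned - actual))

# Hypothetical coordinates (mm) of one miniscrew head (corona) and tip (apex),
# taken from the virtual plan and the postoperative CBCT after registration.
planned_corona, actual_corona = np.array([10.0, 4.0, 2.0]), np.array([11.2, 4.5, 2.9])
planned_apex, actual_apex = np.array([10.0, 4.0, 10.0]), np.array([10.6, 4.8, 10.7])

print(f"coronal deviation: {deviation_mm(planned_corona, actual_corona):.2f} mm")
print(f"apical deviation:  {deviation_mm(planned_apex, actual_apex):.2f} mm")
```

Such distances are only meaningful after both scans have been registered into the same coordinate frame.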
Virtual reality system for planning minimally invasive neurosurgery. Technical note.
Stadie, Axel Thomas; Kockro, Ralf Alfons; Reisch, Robert; Tropine, Andrei; Boor, Stephan; Stoeter, Peter; Perneczky, Axel
2008-02-01
The authors report on their experience with a 3D virtual reality system for planning minimally invasive neurosurgical procedures. Between October 2002 and April 2006, the authors used the Dextroscope (Volume Interactions, Ltd.) to plan neurosurgical procedures in 106 patients, including 100 with intracranial and 6 with spinal lesions. The planning was performed 1 to 3 days preoperatively, and in 12 cases, 3D prints of the planning procedure were taken into the operating room. A questionnaire was completed by the neurosurgeon after the planning procedure. After a short period of acclimatization, the system proved easy to operate and is currently used routinely for preoperative planning of difficult cases at the authors' institution. It was felt that working with a virtual reality multimodal model of the patient significantly improved surgical planning. The pathoanatomy in individual patients could easily be understood in great detail, enabling the authors to determine the surgical trajectory precisely and in the most minimally invasive way. The authors found the preoperative 3D model to be in high concordance with intraoperative conditions; the resulting intraoperative "déjà-vu" feeling enhanced surgical confidence. In all procedures planned with the Dextroscope, the chosen surgical strategy proved to be the correct choice. Three-dimensional virtual reality models of a patient allow quick and easy understanding of complex intracranial lesions.
3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)
1996-01-01
The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is in use in widely disparate fields such as geology, botany, biology, and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report will describe the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.
Verwoerd-Dikkeboom, Christine M; van Heesch, Peter N A C M; Koning, Anton H J; Galjaard, Robert-Jan H; Exalto, Niek; Steegers, Eric A P
2008-11-01
To demonstrate the use of a novel three-dimensional (3D) virtual reality (VR) system in the visualization of first trimester growth and development in a case of confined placental trisomy 16 mosaicism (CPM+16). Case report. Prospective study on first trimester growth using a 3D VR system. A 34-year-old gravida 1, para 0 was seen weekly in the first trimester for 3D ultrasound examinations. Chorionic villus sampling was performed because of an enlarged nuchal translucency (NT) measurement and low pregnancy-associated plasma protein-A levels, followed by amniocentesis. Amniocentesis revealed a CPM+16. On two-dimensional (2D) and 3D ultrasound no structural anomalies were found with normal fetal Dopplers. Growth remained below the 2.3 percentile. At 37 weeks, a female child of 2010 g (<2.5 percentile) was born. After birth, growth climbed to the 50th percentile in the first 2 months. The I-Space VR system provided information about phenotypes not obtainable by standard 2D ultrasound. In this case, the delay in growth and development could be observed very early in pregnancy. Since first trimester screening programs are still improving and becoming even more important, systems such as the I-Space open a new era for in vivo studies on the physiologic and pathologic processes involved in embryogenesis.
Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation
Boulos, Maged N Kamel; Blanchard, Bryan J; Walker, Cory; Montero, Julio; Tripathy, Aalap; Gutierrez-Osuna, Ricardo
2011-07-26
This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, & 3) that contains a 'Kinoogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces. PMID:21791054
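As a rough illustration of how such an NUI can translate tracked joint positions into globe commands (this is a hypothetical sketch, not Kinoogle's actual gesture set or code):

```python
# Illustrative sketch (not the Kinoogle source): mapping tracked hand
# positions from a depth sensor to virtual-globe navigation commands.
# Joint coordinates are assumed to be (x, y, z) in metres in the sensor frame.

def interpret_gesture(left_hand, right_hand, dead_zone=0.05):
    """Return a (command, magnitude) pair for a virtual-globe controller."""
    # Two-handed spread/pinch along x controls zoom, like a map "stretch".
    spread = right_hand[0] - left_hand[0]
    if abs(spread - 0.4) > dead_zone:          # 0.4 m = neutral hand spacing
        return ("zoom_in" if spread > 0.4 else "zoom_out", abs(spread - 0.4))
    # Otherwise, vertical offset of the right hand pans the view.
    if abs(right_hand[1]) > dead_zone:
        return ("pan_up" if right_hand[1] > 0 else "pan_down", abs(right_hand[1]))
    return ("idle", 0.0)

print(interpret_gesture((-0.3, 0.0, 1.2), (0.3, 0.0, 1.2)))  # hands spread 0.6 m
```

A real driver stack would smooth the joint stream and debounce gestures; the dead zone above is the simplest form of that idea.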
Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan
2016-08-01
Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpture an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculptured autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Virtual Surgery for Conduit Reconstruction of the Right Ventricular Outflow Tract.
Ong, Chin Siang; Loke, Yue-Hin; Opfermann, Justin; Olivieri, Laura; Vricella, Luca; Krieger, Axel; Hibino, Narutoshi
2017-05-01
Virtual surgery involves the planning and simulation of surgical reconstruction using three-dimensional (3D) modeling based upon individual patient data, augmented by simulation of planned surgical alterations including implantation of devices or grafts. Here we describe a case in which virtual cardiac surgery aided us in determining the optimal conduit size to use for the reconstruction of the right ventricular outflow tract. The patient is a young adolescent male with a history of tetralogy of Fallot with pulmonary atresia, requiring right ventricle-to-pulmonary artery (RV-PA) conduit replacement. Utilizing preoperative magnetic resonance imaging data, virtual surgery was undertaken to construct his heart in 3D and to simulate the implantation of three different sizes of RV-PA conduit (18, 20, and 22 mm). Virtual cardiac surgery allowed us to predict the ability to implant a conduit of a size that would likely remain adequate in the face of continued somatic growth and also allow for the possibility of transcatheter pulmonary valve implantation at some time in the future. Subsequently, the patient underwent uneventful conduit change surgery with implantation of a 22-mm Hancock valved conduit. As predicted, the intrathoracic space was sufficient to accommodate the relatively large conduit size without geometric distortion or sternal compression. Virtual cardiac surgery gives surgeons the ability to simulate the implantation of prostheses of different sizes in relation to the dimensions of a specific patient's own heart and thoracic cavity in 3D prior to surgery. This can be very helpful in predicting optimal conduit size, determining appropriate timing of surgery, and patient education.
Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography.
Wojtkowski, Maciej; Srinivasan, Vivek; Fujimoto, James G; Ko, Tony; Schuman, Joel S; Kowalczyk, Andrzej; Duker, Jay S
2005-10-01
To demonstrate high-speed, ultrahigh-resolution, 3-dimensional optical coherence tomography (3D OCT) and new protocols for retinal imaging. Ultrahigh-resolution OCT using broadband light sources achieves axial image resolutions of approximately 2 microm, compared with the standard 10-microm resolution of current commercial OCT instruments. High-speed OCT using spectral/Fourier domain detection enables dramatic increases in imaging speeds. Three-dimensional OCT retinal imaging is performed in normal human subjects using high-speed ultrahigh-resolution OCT. Three-dimensional OCT data of the macula and optic disc are acquired using a dense raster scan pattern. New processing and display methods for generating virtual OCT fundus images; cross-sectional OCT images with arbitrary orientations; quantitative maps of retinal, nerve fiber layer, and other intraretinal layer thicknesses; and optic nerve head topographic parameters are demonstrated. Three-dimensional OCT imaging enables new imaging protocols that improve visualization and mapping of retinal microstructure. An OCT fundus image can be generated directly from the 3D OCT data, which enables precise and repeatable registration of cross-sectional OCT images and thickness maps with fundus features. Optical coherence tomography images with arbitrary orientations, such as circumpapillary scans, can be generated from 3D OCT data. Mapping of total retinal thickness and thicknesses of the nerve fiber layer, photoreceptor layer, and other intraretinal layers is demonstrated. Measurement of optic nerve head topography and disc parameters is also possible. Three-dimensional OCT enables measurements that are similar to those of standard instruments, including the StratusOCT, GDx, HRT, and RTA. Three-dimensional OCT imaging can be performed using high-speed ultrahigh-resolution OCT. Three-dimensional OCT provides comprehensive visualization and mapping of retinal microstructures.
The high data acquisition speeds enable high-density data sets with large numbers of transverse positions on the retina, which reduces the possibility of missing focal pathologies. In addition to providing image information such as OCT cross-sectional images, OCT fundus images, and 3D rendering, quantitative measurement and mapping of intraretinal layer thickness and topographic features of the optic disc are possible. We hope that 3D OCT imaging may help to elucidate the structural changes associated with retinal disease as well as improve early diagnosis and monitoring of disease progression and response to treatment.
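The en-face "OCT fundus image" described above is typically obtained by collapsing the 3D data set along depth; a minimal sketch with synthetic data:

```python
import numpy as np

# Sketch of generating an OCT fundus image from a 3D OCT data set: summing
# each axial scan collapses the volume along depth, yielding an en-face image
# that is inherently registered to the cross-sectional scans it came from.
# Axes assumed here: (y fast-scan, x slow-scan, z depth); values: reflectivity.
volume = np.random.default_rng(1).random((64, 64, 256))

fundus = volume.sum(axis=2)           # collapse depth -> en-face projection
fundus /= fundus.max()                # normalise for display

print(fundus.shape)
```

Because every en-face pixel comes from one axial scan of the same acquisition, cross-sectional images and thickness maps line up with the fundus image without any separate registration step.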
Kasaven, C P; McIntyre, G T; Mossey, P A
2017-01-01
Our objective was to assess the accuracy of virtual and printed 3-dimensional models derived from cone-beam computed tomographic (CT) scans to measure the volume of alveolar clefts before bone grafting. Fifteen subjects with unilateral cleft lip and palate had i-CAT cone-beam CT scans recorded at 0.2mm voxel and sectioned transversely into slices 0.2mm thick using i-CAT Vision. Volumes of alveolar clefts were calculated using first a validated algorithm; secondly, commercially-available virtual 3-dimensional model software; and finally 3-dimensional printed models, which were scanned with microCT and analysed using 3-dimensional software. For inter-observer reliability, a two-way mixed model intraclass correlation coefficient (ICC) was used to evaluate the reproducibility of identification of the cranial and caudal limits of the clefts among three observers. We used a Friedman test to assess the significance of differences among the methods, and probabilities of less than 0.05 were accepted as significant. Inter-observer reliability was almost perfect (ICC=0.987). There were no significant differences among the three methods. Virtual and printed 3-dimensional models were as precise as the validated computer algorithm in the calculation of volumes of the alveolar cleft before bone grafting, but virtual 3-dimensional models were the most accurate with the smallest 95% CI and, subject to further investigation, could be a useful adjunct in clinical practice. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
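The two-way mixed-model ICC used above for inter-observer reliability can be computed from a subjects-by-observers matrix; a sketch of the single-rater consistency form ICC(3,1), with hypothetical ratings:

```python
import numpy as np

def icc_3_1(ratings: np.ndarray) -> float:
    """Two-way mixed, single-rater, consistency ICC(3,1).

    ratings: (n subjects x k observers) matrix, e.g. landmark positions
    identified by three observers (the data below are hypothetical).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)
    ss_total = ((ratings - grand) ** 2).sum()
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_rater = n * ((rater_means - grand) ** 2).sum()
    ms_subj = ss_subj / (n - 1)
    ms_err = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Three observers rating 15 subjects, in close agreement:
rng = np.random.default_rng(2)
base = np.linspace(10, 24, 15)
ratings = np.column_stack([base + rng.normal(0, 0.05, 15) for _ in range(3)])
print(f"ICC(3,1) = {icc_3_1(ratings):.3f}")
```

Values near 1, as in the paper's ICC of 0.987, mean that almost all variance comes from genuine differences between subjects rather than from observer disagreement.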
Plot of virtual surgery based on CT medical images
NASA Astrophysics Data System (ADS)
Song, Limei; Zhang, Chunbo
2009-10-01
Although a CT device provides doctors with a series of 2D medical images, these make it difficult to gain a vivid view of the diseased region. To help doctors plan surgery, a virtual surgery system was developed based on three-dimensional visualization techniques. After the diseased part of the patient is scanned by the CT device, a full 3D view is built by the system's 3D reconstruction module. Cutting away a part is the function doctors use most often in real surgery. A curve is created in 3D space, and points can be added to the curve automatically or manually; the positions of these points change the shape of the cutting curve, so the curve can be adjusted by manipulating its control points. If the result of the cut is unsatisfactory, all operations can be cancelled and restarted. This flexibility makes the virtual surgery more convenient for preparing the real one. In contrast to existing medical image processing systems, a virtual surgery module is added, and the virtual surgery can be rehearsed as many times as needed, until the doctors are confident enough to begin the real operation. Because the virtual surgery system provides more 3D information about the diseased region, difficult operations can be discussed by expert doctors in different cities over the internet. This is a useful way to understand the character of the diseased region and thus reduce surgical risk.
Rashev, P Z; Mintchev, M P; Bowes, K L
2000-09-01
The aim of this study was to develop a novel three-dimensional (3-D) object-oriented modeling approach incorporating knowledge of the anatomy, electrophysiology, and mechanics of externally stimulated excitable gastrointestinal (GI) tissues and emphasizing the "stimulus-response" principle of extracting the modeling parameters. The modeling method used clusters of class hierarchies representing GI tissues from three perspectives: 1) anatomical; 2) electrophysiological; and 3) mechanical. We elaborated on the first four phases of the object-oriented system development life-cycle: 1) analysis; 2) design; 3) implementation; and 4) testing. Generalized cylinders were used for the implementation of 3-D tissue objects modeling the cecum, the descending colon, and the colonic circular smooth muscle tissue. The model was tested using external neural electrical tissue excitation of the descending colon with virtual implanted electrodes and the stimulating current density distributions over the modeled surfaces were calculated. Finally, the tissue deformations invoked by electrical stimulation were estimated and represented by a mesh-surface visualization technique.
Hybrid 3D printing: a game-changer in personalized cardiac medicine?
Kurup, Harikrishnan K N; Samuel, Bennett P; Vettukattil, Joseph J
2015-12-01
Three-dimensional (3D) printing in congenital heart disease has the potential to increase procedural efficiency and patient safety by improving interventional and surgical planning and reducing radiation exposure. Cardiac magnetic resonance imaging and computed tomography are usually the source datasets to derive 3D printing. More recently, 3D echocardiography has been demonstrated to derive 3D-printed models. The integration of multiple imaging modalities for hybrid 3D printing has also been shown to create accurate printed heart models, which may prove to be beneficial for interventional cardiologists, cardiothoracic surgeons, and as an educational tool. Further advancements in the integration of different imaging modalities into a single platform for hybrid 3D printing and virtual 3D models will drive the future of personalized cardiac medicine.
Using Virtual Reality Computer Models to Support Student Understanding of Astronomical Concepts
ERIC Educational Resources Information Center
Barnett, Michael; Yamagata-Lynch, Lisa; Keating, Tom; Barab, Sasha A.; Hay, Kenneth E.
2005-01-01
The purpose of this study was to examine how 3-dimensional (3-D) models of the Solar System supported student development of conceptual understandings of various astronomical phenomena that required a change in frame of reference. In the course described in this study, students worked in teams to design and construct 3-D virtual reality computer…
NASA Astrophysics Data System (ADS)
Venolia, Dan S.; Williams, Lance
1990-08-01
A range of stereoscopic display technologies exist which are no more intrusive, to the user, than a pair of spectacles. Combining such a display system with sensors for the position and orientation of the user's point-of-view results in a greatly enhanced depiction of three-dimensional data. As the point of view changes, the stereo display channels are updated in real time. The face of a monitor or display screen becomes a window on a three-dimensional scene. Motion parallax naturally conveys the placement and relative depth of objects in the field of view. Most of the advantages of "head-mounted display" technology are achieved with a less cumbersome system. To derive the full benefits of stereo combined with motion parallax, both stereo channels must be updated in real time. This may limit the size and complexity of data bases which can be viewed on processors of modest resources, and restrict the use of additional three-dimensional cues, such as texture mapping, depth cueing, and hidden surface elimination. Effective use of "full 3D" may still be undertaken in a non-interactive mode. Integral composite holograms have often been advanced as a powerful 3D visualization tool. Such a hologram is typically produced from a film recording of an object on a turntable, or a computer animation of an object rotating about one axis. The individual frames of film are multiplexed, in a composite hologram, in such a way as to be indexed by viewing angle. The composite may be produced as a cylinder transparency, which provides a stereo view of the object as if enclosed within the cylinder, which can be viewed from any angle. No vertical parallax is usually provided (this would require increasing the dimensionality of the multiplexing scheme), but the three dimensional image is highly resolved and easy to view and interpret. Even a modest processor can duplicate the effect of such a precomputed display, provided sufficient memory and bus bandwidth. 
This paper describes the components of a stereo display system with user point-of-view tracking for interactive 3D, and a digital realization of integral composite display which we term virtual integral holography. The primary drawbacks of holographic display (film processing turnaround time and the difficulty of displaying scenes in full color) are obviated, and motion parallax cues provide easy 3D interpretation even for users who cannot see in stereo.
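The "window on a three-dimensional scene" effect relies on rebuilding an asymmetric (off-axis) viewing frustum from the tracked point of view each frame; a geometric sketch, with arbitrary units:

```python
# Sketch of head-tracked rendering: the screen is a fixed rectangle in space,
# and each frame an asymmetric (off-axis) frustum is rebuilt from the tracked
# eye position, so the display behaves like a window rather than a picture.

def off_axis_frustum(eye, screen_half_w, screen_half_h, near):
    """Return (left, right, bottom, top) extents at the near plane.

    eye: (x, y, z) with the screen centred at the origin in its own plane
    and z the eye-to-screen distance (z > near > 0).
    """
    ex, ey, ez = eye
    scale = near / ez                    # project screen edges onto near plane
    left = (-screen_half_w - ex) * scale
    right = (screen_half_w - ex) * scale
    bottom = (-screen_half_h - ey) * scale
    top = (screen_half_h - ey) * scale
    return left, right, bottom, top

# Centred head: symmetric frustum. Head moved right: frustum skews left,
# which is exactly the motion-parallax cue described above.
print(off_axis_frustum((0.0, 0.0, 0.6), 0.3, 0.2, 0.1))
print(off_axis_frustum((0.1, 0.0, 0.6), 0.3, 0.2, 0.1))
```

Feeding the two eye positions of a stereo pair through the same function yields the two channels the paper says must both be updated in real time.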
Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes
Boulos, Maged N Kamel; Robinson, Larry R
2009-10-22
Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837
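The stereopsis geometry described above fixes the relationship between depth and on-screen disparity; a small worked sketch (similar-triangles approximation, distances in cm):

```python
# Sketch of the geometry behind stereoscopic displays: with an interpupillary
# distance (IPD) of ~6.5 cm, the on-screen separation between the left- and
# right-eye images of a point depends on the point's depth relative to the
# screen plane (similar triangles; all distances in cm).

IPD_CM = 6.5

def screen_disparity_cm(viewing_distance_cm, point_depth_cm):
    """Horizontal left/right image separation for a point at a given depth.

    point_depth_cm: viewer-to-point distance; equals viewing_distance_cm
    for a point lying exactly on the screen plane.
    """
    # d / IPD = (D_point - D_screen) / D_point
    return IPD_CM * (point_depth_cm - viewing_distance_cm) / point_depth_cm

print(screen_disparity_cm(60.0, 60.0))    # on-screen point: zero disparity
print(screen_disparity_cm(60.0, 120.0))   # behind screen: positive disparity
```

Points in front of the screen give negative (crossed) disparity by the same formula; stereoscopic renderers are essentially solving this relation in reverse when they offset the two camera views.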
Heuts, Samuel; Sardari Nia, Peyman; Maessen, Jos G
2016-01-01
Over the past decades, surgery has become more complex owing to the increasing age of the patient population referred for thoracic surgery, more complex pathology, and the emergence of minimally invasive thoracic surgery. Together with the early detection of thoracic disease as a result of innovations in diagnostic possibilities and the paradigm shift to personalized medicine, preoperative planning is becoming an indispensable and crucial aspect of surgery. Several new techniques facilitating this paradigm shift have emerged. Preoperative marking and staining of lesions are already a widely accepted method of preoperative planning in thoracic surgery. However, three-dimensional (3D) image reconstructions, virtual simulation, and rapid prototyping (RP) are still in the development phase. These new techniques are expected to become an important part of the standard work-up of patients undergoing thoracic surgery in the future. This review aims to graphically present and summarize these new diagnostic and therapeutic tools.
NASA Astrophysics Data System (ADS)
Thurmond, John B.; Drzewiecki, Peter A.; Xu, Xueming
2005-08-01
Geological data collected from outcrop are inherently three-dimensional (3D) and span a variety of scales, from the megascopic to the microscopic. This presents challenges in both interpreting and communicating observations. The Virtual Reality Modeling Language provides an easy way for geoscientists to construct complex visualizations that can be viewed with free software. Field data in tabular form can be used to generate hierarchical multi-scale visualizations of outcrops, which can convey the complex relationships between a variety of data types simultaneously. An example from carbonate mud-mounds in southeastern New Mexico illustrates the embedding of three orders of magnitude of observation into a single visualization, for the purpose of interpreting depositional facies relationships in three dimensions. This type of raw data visualization can be built without software tools, yet is incredibly useful for interpreting and communicating data. Even simple visualizations can aid in the interpretation of complex 3D relationships that are frequently encountered in the geosciences.
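Because VRML is plain text, tabular field data can be converted into a viewable scene with very little code; a minimal sketch (the marker styling is arbitrary):

```python
# Minimal sketch of the approach described above: turning tabular field data
# into VRML97 text that free viewers can display. Field names are made up.

def vrml_markers(points):
    """Emit a VRML97 scene with one small sphere per (x, y, z) sample."""
    nodes = []
    for x, y, z in points:
        nodes.append(
            f"Transform {{ translation {x} {y} {z}\n"
            f"  children Shape {{ geometry Sphere {{ radius 0.2 }} }} }}"
        )
    return "#VRML V2.0 utf8\n" + "\n".join(nodes)

samples = [(0.0, 0.0, 1.5), (2.0, 1.0, 1.8)]   # e.g. outcrop measurement sites
print(vrml_markers(samples))
```

Multi-scale hierarchy, as in the mud-mound example, comes almost for free: nesting Transform nodes (outcrop, bed, thin section) inside one another reproduces the orders-of-magnitude embedding the abstract describes.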
A desktop system of virtual morphometric globes for Mars and the Moon
NASA Astrophysics Data System (ADS)
Florinsky, I. V.; Filippov, S. V.
2017-03-01
Global morphometric models can be useful for earth and planetary studies. Virtual globes - programs implementing interactive three-dimensional (3D) models of planets - are increasingly used in geo- and planetary sciences. We describe the development of a desktop system of virtual morphometric globes for Mars and the Moon. As the initial data, we used 15'-gridded global digital elevation models (DEMs) extracted from the Mars Orbiter Laser Altimeter (MOLA) and the Lunar Orbiter Laser Altimeter (LOLA) gridded archives. For the two celestial bodies, we derived global digital models of several morphometric attributes, such as horizontal curvature, vertical curvature, minimal curvature, maximal curvature, and catchment area. To develop the system, we used Blender, the free open-source software for 3D modeling and visualization. First, a 3D sphere model was generated. Second, the global morphometric maps were mapped onto the sphere surface as textures. Finally, Blender's real-time 3D graphics engine was used to implement rotation and zooming of the globes. Testing of the developed system demonstrated its good performance. Morphometric globes clearly represent peculiarities of planetary topography, according to the physical and mathematical sense of a particular morphometric variable.
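The first step of such a pipeline, deriving a morphometric grid from a DEM, can be sketched with finite differences. The sketch below uses the discrete Laplacian as a simple curvature proxy on a toy grid; the paper's actual horizontal/vertical curvature formulas (Florinsky's method) are fuller and are not reproduced here.

```python
# Minimal sketch: a curvature-like morphometric attribute from a regular
# DEM grid with spacing w, via the 5-point discrete Laplacian.
# The toy 4x4 DEM is illustrative, not MOLA/LOLA data.

def laplacian(dem, w=1.0):
    n, m = len(dem), len(dem[0])
    out = [[0.0] * m for _ in range(n)]   # border cells left at 0
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = (dem[i - 1][j] + dem[i + 1][j] + dem[i][j - 1]
                         + dem[i][j + 1] - 4.0 * dem[i][j]) / (w * w)
    return out

dem = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
curv = laplacian(dem)   # negative over the central bump (convex relief)
```

The resulting grid would then be colour-mapped and applied to the sphere as a texture.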
Avalanche for shape and feature-based virtual screening with 3D alignment
NASA Astrophysics Data System (ADS)
Diller, David J.; Connell, Nancy D.; Welsh, William J.
2015-11-01
This report introduces a new ligand-based virtual screening tool called Avalanche that incorporates both shape- and feature-based comparison with three-dimensional (3D) alignment between the query molecule and test compounds residing in a chemical database. Avalanche proceeds in two steps. The first step is an extremely rapid shape/feature based comparison which is used to narrow the focus from potentially millions or billions of candidate molecules and conformations to a more manageable number that are then passed to the second step. The second step is a detailed yet still rapid 3D alignment of the remaining candidate conformations to the query conformation. Using the 3D alignment, these remaining candidate conformations are scored, re-ranked and presented to the user as the top hits for further visualization and evaluation. To provide further insight into the method, the results from two prospective virtual screens are presented which show the ability of Avalanche to identify hits from chemical databases that would likely be missed by common substructure-based or fingerprint-based search methods. The Avalanche method is extended to enable patent landscaping, i.e., structural refinements to improve the patentability of hits for deployment in drug discovery campaigns.
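The two-step structure described above (a cheap prescreen narrowing millions of candidates, then a costlier scoring pass on the survivors) can be sketched generically. Everything below is a placeholder: the descriptors and the stage-two score stand in for Avalanche's actual shape/feature terms and 3D alignment.

```python
# Hedged sketch of a two-stage ligand-based screen in the spirit of the
# method described: stage 1 is a fast descriptor filter, stage 2 a more
# detailed re-scoring of the survivors. Compound names/values are invented.

def prescreen(query_desc, library, keep=3):
    # Stage 1: rank by a cheap squared-distance on descriptors, keep few.
    dist = lambda entry: sum((a - b) ** 2
                             for a, b in zip(query_desc, entry[1]))
    return sorted(library, key=dist)[:keep]

def fine_score(query_desc, desc):
    # Stage 2 stand-in: higher is better (real code would 3D-align here).
    return -sum(abs(a - b) for a, b in zip(query_desc, desc))

def screen(query_desc, library, keep=3):
    hits = prescreen(query_desc, library, keep)
    return sorted(hits, key=lambda e: fine_score(query_desc, e[1]),
                  reverse=True)

library = [("cmpd_a", (1.0, 2.0)), ("cmpd_b", (9.0, 9.0)),
           ("cmpd_c", (1.1, 2.1)), ("cmpd_d", (5.0, 1.0))]
top = screen((1.0, 2.0), library, keep=2)   # cmpd_a ranks above cmpd_c
```

The design point the abstract makes is exactly this funnel: the prescreen must be cheap enough to touch every conformation, while the expensive alignment only sees the short list.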
Li, Wei Zhong; Zhang, Mei Chao; Li, Shao Ping; Zhang, Lei Tao; Huang, Yu
2009-06-01
With the advent of CAD/CAM and rapid prototyping (RP), a technical revolution in oral and maxillofacial trauma care has taken place, benefiting the treatment and repair of maxillofacial fractures and the reconstruction of maxillofacial defects. For a patient with zygomatico-facial collapse deformity resulting from a zygomatico-orbito-maxillary complex (ZOMC) fracture, CT scan data were processed using Mimics 10.0 for three-dimensional (3D) reconstruction. The reduction design was aided by 3D virtual imaging, and the 3D skull model was reproduced using the RP technique. In line with the design produced in Mimics, presurgery was performed on the 3D skull model, and a semi-coronal incision was used for reduction of the ZOMC fracture, based on the outcome of the presurgery. Postoperative CT images revealed well-corrected zygomatic collapse, a restored zygomatic arch and good facial symmetry. The CAD/CAM and RP technique is a relatively useful tool that can assist surgeons with reconstruction of the maxillofacial skeleton, especially in repairs of ZOMC fracture.
Reconstituted Three-Dimensional Interactive Imaging
NASA Technical Reports Server (NTRS)
Hamilton, Joseph; Foley, Theodore; Duncavage, Thomas; Mayes, Terrence
2010-01-01
A method combines two-dimensional images, enhancing the images as well as rendering a 3D, enhanced, interactive computer image or visual model. Any advanced compiler can be used in conjunction with any graphics library package for this method, which is intended to take digitized images and virtually stack them so that they can be interactively viewed as a set of slices. This innovation can take multiple image sources (film or digital) and create a "transparent" image, with higher densities in the image being less transparent. The images are then stacked such that an apparent 3D object is created in virtual space for interactive review of the set of images. This innovation can be used with any application where 3D images are taken as slices of a larger object. These could include machines, materials for inspection, geological objects, or human scanning. Luminance values are stacked into planes, with different transparency levels representing different tissues. These transparency levels can use multiple energy levels, such as density from CT scans or radioactive density. A desktop computer with enough video memory to produce the image is capable of this work. The memory required changes with the size and resolution of the desired images to be stacked and viewed.
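The "denser is less transparent" stacking idea can be sketched as simple back-to-front alpha compositing: each slice's pixel density is mapped to an opacity, and slices are blended along the viewing axis. The density values and the linear density-to-alpha mapping below are illustrative assumptions, not the NASA method's actual transfer function.

```python
# Sketch of the transparent-stack idea: per-slice density -> opacity,
# then back-to-front compositing along the viewing axis.

def density_to_alpha(d, gain=1.0):
    # Linear transfer function (assumption); clamp to [0, 1].
    return max(0.0, min(1.0, d * gain))

def composite(stack):
    # stack: list of 2D density slices (values in [0, 1]), back to front.
    n, m = len(stack[0]), len(stack[0][0])
    img = [[0.0] * m for _ in range(n)]
    for sl in stack:
        for i in range(n):
            for j in range(m):
                a = density_to_alpha(sl[i][j])
                # "Over" blend: dense pixels hide what lies behind them.
                img[i][j] = a * sl[i][j] + (1.0 - a) * img[i][j]
    return img

stack = [[[0.2, 0.0]],    # back slice, 1x2 pixels (toy data)
         [[0.9, 0.0]]]    # front slice: dense pixel dominates the blend
img = composite(stack)
```

A GPU would do this per fragment, but the arithmetic per pixel is the same.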
Data Visualization Using Immersive Virtual Reality Tools
NASA Astrophysics Data System (ADS)
Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.
2013-01-01
The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open-source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D game engine, coded using C#, JavaScript, and the Unity scripting language. This visualization tool can be used through a standard web browser, or through a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose XYZ positions, shapes, colors, and sizes encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added.
We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.
Ghanbarzadeh, Reza; Ghapanchi, Amir Hossein; Blumenstein, Michael; Talaei-Khoei, Amir
2014-02-18
A three-dimensional virtual world (3DVW) is a computer-simulated electronic 3D virtual environment that users can explore and inhabit, communicating and interacting via avatars, which are graphical representations of the users. Since the early 2000s, 3DVWs have emerged as a technology that has much to offer the health care sector. The purpose of this study was to characterize different application areas of various 3DVWs in the health and medical context and categorize them into meaningful categories. This study employs a systematic literature review on the application areas of 3DVWs in health care. Our search resulted in 62 papers from five top-ranking scientific databases published from 1990 to 2013 that describe the use of 3DVWs for health care specific purposes. We noted a growth in the number of academic studies on the topic since 2006. We found a wide range of application areas for 3DVWs in health care and classified them into the following six categories: academic education, professional education, treatment, evaluation, lifestyle, and modeling. The education category, including professional and academic education, contains the largest number of papers (n=34), of which 23 are related to the academic education category and 11 to the professional education category. Nine papers are allocated to the treatment category, and 8 papers have contents related to evaluation. In 4 of the papers, the authors used 3DVWs for modeling, and 3 papers targeted lifestyle purposes. The results indicate that most of the research to date has focused on education in health care. We also found that most studies were undertaken in just two countries, the United States and the United Kingdom. 3D virtual worlds present several innovative ways to carry out a wide variety of health-related activities. The big picture of application areas of 3DVWs presented in this review could be of value and offer insights to both the health care community and researchers.
Tetsworth, Kevin; Block, Steve; Glatt, Vaida
2017-01-01
3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case. PMID:28220752
Analytical 3D views and virtual globes — scientific results in a familiar spatial context
NASA Astrophysics Data System (ADS)
Tiede, Dirk; Lang, Stefan
In this paper we introduce analytical three-dimensional (3D) views as a means for effective and comprehensible information delivery, using virtual globes and the third dimension as an additional information carrier. Four case studies are presented, in which information extraction results from very high spatial resolution (VHSR) satellite images were conditioned and aggregated or disaggregated to regular spatial units. The case studies were embedded in the context of: (1) urban life quality assessment (Salzburg/Austria); (2) post-disaster assessment (Harare/Zimbabwe); (3) emergency response (Lukole/Tanzania); and (4) contingency planning (simulated crisis scenario/Germany). The results are made available in different virtual globe environments, using the implemented contextual data (such as satellite imagery, aerial photographs, and auxiliary geodata) as valuable additional context information. Both day-to-day users and high-level decision makers are addressees of this tailored information product. The degree of abstraction required for understanding a complex analytical content is balanced with the ease and appeal by which the context is conveyed.
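Using the third dimension as an information carrier on a virtual globe typically means extruding each spatial unit to a height proportional to its indicator value, e.g. as KML. The sketch below generates such a placemark; the coordinates, unit name and scaling factor are invented for illustration.

```python
# Illustrative sketch: one aggregated indicator per spatial unit, encoded
# as an extruded KML polygon whose height carries the value.

KML_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"><Document>
%s</Document></kml>"""

PLACEMARK = """<Placemark><name>%s</name><Polygon>
<extrude>1</extrude><altitudeMode>relativeToGround</altitudeMode>
<outerBoundaryIs><LinearRing><coordinates>%s</coordinates></LinearRing>
</outerBoundaryIs></Polygon></Placemark>
"""

def unit_placemark(name, ring, value, scale=100.0):
    h = value * scale   # third dimension carries the indicator value
    coords = " ".join("%f,%f,%f" % (lon, lat, h) for lon, lat in ring)
    return PLACEMARK % (name, coords)

# A toy grid cell near Salzburg (coordinates are made up).
ring = [(13.04, 47.80), (13.05, 47.80), (13.05, 47.81),
        (13.04, 47.81), (13.04, 47.80)]
kml = KML_DOC % unit_placemark("cell_01", ring, value=0.75)
```

The resulting file can be opened in any KML-capable virtual globe, where the prism heights make the indicator comparable at a glance.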
3D chromosome rendering from Hi-C data using virtual reality
NASA Astrophysics Data System (ADS)
Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing
2015-01-01
Most genome browsers display DNA linearly, using single-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or missed entirely if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, the folding of enhancers over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means of modeling chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
Kurz, Sascha; Pieroh, Philipp; Lenk, Maximilian; Josten, Christoph; Böhme, Jörg
2017-01-01
Abstract Rationale: Pelvic malunion is a rare complication and is technically challenging to correct owing to the complex three-dimensional (3D) geometry of the pelvic girdle. Hence, precise preoperative planning is required to ensure appropriate correction. Reconstructive surgery is generally a 2- or 3-stage procedure, with transiliac osteotomy serving as an alternative to address limb length discrepancy. Patient concerns: A 38-year-old female patient with a Mears type IV pelvic malunion and previous failed reconstructive surgery was admitted to our department due to progressive immobilization, increasing pain especially at the posterior pelvic arch, and a leg length discrepancy. The leg length discrepancy was approximately 4 cm, and rotation of the right hip joint was associated with pain. Diagnosis: Radiography and computed tomography (CT) revealed a hypertrophic malunion at the site of the previous posterior osteotomy (Mears type IV) involving the anterior and middle column, according to the 3-column concept, as well as malunion of the left anterior arch (Mears type IV). Interventions: The surgery was planned virtually via 3D reconstruction, using the patient's CT, and subsequently performed via transiliac osteotomy and symphysiotomy. The finite element method (FEM) was used to plan the osteotomy and osteosynthesis so as to include an estimation of the risk of implant failure. Outcomes: There was no incidence of neurological injury or infection, and the remaining leg length discrepancy was ≤ 2 cm. The patient recovered independent, pain-free mobility. Virtual 3D planning provided a more precise measurement of correction parameters than radiographic-based measurements. FEM analysis identified the highest risk of implant failure at the symphyseal plate osteosynthesis and the parasymphyseal screws. No implant failure was observed.
Lessons: Transiliac osteotomy, with additional osteotomy or symphysiotomy, was a suitable surgical procedure for the correction of pelvic malunion and provided adequate correction of leg length discrepancy. Virtual 3D planning enabled precise determination of correction parameters, with FEM analysis providing an appropriate method to predict areas of implant failure. PMID:29049196
Samothrakis, S; Arvanitis, T N; Plataniotis, A; McNeill, M D; Lister, P F
1997-11-01
Virtual Reality Modelling Language (VRML) is the start of a new era for medicine and the World Wide Web (WWW). Scientists can use VRML across the Internet to explore new three-dimensional (3D) worlds, share concepts and collaborate in a virtual environment. VRML enables the generation of virtual environments through the use of geometric, spatial and colour data structures to represent 3D objects and scenes. In medicine, researchers often want to interact with scientific data, which in several instances may also be dynamic (e.g. MRI data). These data are often very large and difficult to visualise. A 3D graphical representation can make the information contained in such large data sets more understandable and easier to interpret. Fast networks and satellites can reliably transfer large data sets from computer to computer. This has led to the adoption of remote tele-working in many applications, including medical applications. Radiology experts, for example, can view and inspect in near real-time a 3D data set acquired from a patient who is in another part of the world. Such technology is destined to improve the quality of life for many people. This paper introduces VRML (including some technical details) and discusses the advantages of VRML in application development.
Bimanual Interaction with Interscopic Multi-Touch Surfaces
NASA Astrophysics Data System (ADS)
Schöning, Johannes; Steinicke, Frank; Krüger, Antonio; Hinrichs, Klaus; Valkov, Dimitar
Multi-touch interaction has received considerable attention in the last few years, in particular for natural two-dimensional (2D) interaction. However, many application areas deal with three-dimensional (3D) data and therefore require intuitive 3D interaction techniques. Virtual reality (VR) systems provide sophisticated 3D user interfaces, but lack efficient 2D interaction, and are therefore rarely adopted by ordinary users or even by experts. Since multi-touch interfaces represent a good trade-off between intuitive, constrained interaction on a touch surface providing tangible feedback, and unrestricted natural interaction without any instrumentation, they have the potential to form the foundation of the next generation of user interfaces for 2D as well as 3D interaction. In particular, stereoscopic display of 3D data provides an additional depth cue, but until now the challenges and limitations of multi-touch interaction in this context have not been considered. In this paper we present new multi-touch paradigms and interactions that combine both traditional 2D interaction and novel 3D interaction on a touch surface to form a new class of multi-touch systems, which we refer to as interscopic multi-touch surfaces (iMUTS). We discuss iMUTS-based user interfaces that support interaction with 2D content displayed in monoscopic mode and 3D content usually displayed stereoscopically. In order to underline the potential of the proposed iMUTS setup, we have developed and evaluated two example interaction metaphors for different domains. First, we present intuitive navigation techniques for virtual 3D city models, and then we describe a natural metaphor for deforming volumetric datasets in a medical context.
Philip, Armelle; Meyssonnier, Jacques; Kluender, Rafael T.; Baruchel, José
2013-01-01
Rocking curve imaging (RCI) is a quantitative version of monochromatic beam diffraction topography that involves using a two-dimensional detector, each pixel of which records its own ‘local’ rocking curve. From these local rocking curves one can reconstruct maps of particularly relevant quantities (e.g. integrated intensity, angular position of the centre of gravity, FWHM). Up to now RCI images have been exploited in the reflection case, giving a quantitative picture of the features present in a several-micrometre-thick subsurface layer. Recently, a three-dimensional Bragg diffraction imaging technique, which combines RCI with ‘pinhole’ and ‘section’ diffraction topography in the transmission case, was implemented. It allows three-dimensional images of defects to be obtained and measurement of three-dimensional distortions within a 50 × 50 × 50 µm elementary volume inside the crystal with angular misorientations down to 10−5–10−6 rad. In the present paper, this three-dimensional-RCI (3D-RCI) technique is used to study one of the grains of a three-grained ice polycrystal. The inception of the deformation process is followed by reconstructing virtual slices in the crystal bulk. 3D-RCI capabilities allow the effective distortion in the bulk of the crystal to be investigated, and the predictions of diffraction theories to be checked, well beyond what has been possible up to now. PMID:24046486
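The per-pixel reduction RCI performs (turning each local rocking curve into maps of integrated intensity, centre of gravity, and FWHM) can be sketched directly. The toy curve below is illustrative; real data would be one such curve per detector pixel.

```python
# Sketch of the per-pixel RCI reduction: from a local rocking curve
# (intensity vs. rocking angle) derive integrated intensity, centre of
# gravity, and a coarse grid-limited FWHM. Toy data, arbitrary units.

def reduce_curve(angles, intensity):
    total = sum(intensity)                                   # integrated
    cog = sum(a * i for a, i in zip(angles, intensity)) / total
    half = max(intensity) / 2.0
    above = [a for a, i in zip(angles, intensity) if i >= half]
    fwhm = max(above) - min(above)   # coarse estimate, grid-resolution limited
    return total, cog, fwhm

angles = [-2, -1, 0, 1, 2]        # rocking angle steps (illustrative)
intensity = [1, 6, 10, 6, 1]      # symmetric local rocking curve (toy)
total, cog, fwhm = reduce_curve(angles, intensity)
```

Applying `reduce_curve` to every pixel yields exactly the quantity maps the abstract mentions; interpolating around the half-maximum crossings would refine the FWHM beyond the angular step size.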
AR Based App for Tourist Attraction in ESKİ ÇARŞI (Safranbolu)
NASA Astrophysics Data System (ADS)
Polat, Merve; Rakıp Karaş, İsmail; Kahraman, İdris; Alizadehashrafi, Behnam
2016-10-01
This research deals with 3D modeling of the historical and heritage landmarks of Safranbolu that are registered by UNESCO. It is an Augmented Reality (AR) based project designed to trigger virtual three-dimensional (3D) models, cultural music, historical photos, artistic features and animated text information. The aim is to propose a GIS-based approach with these features and add them to the system as attribute data in a relational database. The database will be available in an AR-based application to provide information for tourists.
gWEGA: GPU-accelerated WEGA for molecular superposition and shape comparison.
Yan, Xin; Li, Jiabo; Gu, Qiong; Xu, Jun
2014-06-05
Virtual screening of a large chemical library for drug lead identification requires searching/superimposing a large number of three-dimensional (3D) chemical structures. This article reports a graphic processing unit (GPU)-accelerated weighted Gaussian algorithm (gWEGA) that expedites shape or shape-feature similarity score-based virtual screening. With 86 GPU nodes (each node has one GPU card), gWEGA can screen 110 million conformations derived from an entire ZINC drug-like database with diverse antidiabetic agents as query structures within 2 s (i.e., screening more than 55 million conformations per second). The rapid screening speed was accomplished through the massive parallelization on multiple GPU nodes and rapid prescreening of 3D structures (based on their shape descriptors and pharmacophore feature compositions). Copyright © 2014 Wiley Periodicals, Inc.
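The Gaussian-overlap scoring that this class of methods parallelizes can be illustrated in plain Python: each atom is a spherical Gaussian and molecular similarity is a Tanimoto on overlap volumes. The uniform width `ALPHA`, the prefactor `p`, and the toy coordinates are assumptions for illustration; the actual gWEGA algorithm additionally weights the Gaussians and runs these pair sums on GPUs.

```python
import math

# First-order Gaussian shape overlap in the spirit of weighted-Gaussian
# shape comparison: sum pairwise atom-Gaussian overlap integrals, then
# compute a shape Tanimoto. Constants and coordinates are illustrative.

ALPHA = 0.81        # common Gaussian width for all atoms (assumption)

def overlap(mol_a, mol_b, p=2.7):
    v = 0.0
    for ax, ay, az in mol_a:
        for bx, by, bz in mol_b:
            r2 = (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
            k = ALPHA * ALPHA / (ALPHA + ALPHA)
            v += p * p * (math.pi / (ALPHA + ALPHA)) ** 1.5 * math.exp(-k * r2)
    return v

def shape_tanimoto(a, b):
    vab = overlap(a, b)
    return vab / (overlap(a, a) + overlap(b, b) - vab)

mol = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]        # toy two-atom "molecule"
shifted = [(0.0, 0.0, 0.3), (1.5, 0.0, 0.3)]    # same shape, displaced
sim = shape_tanimoto(mol, shifted)              # < 1 for the displaced copy
```

The double loop over atom pairs is what makes the computation embarrassingly parallel and hence a natural fit for GPU acceleration.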
Zhang, Sheng; Zhang, Kairui; Wang, Yimin; Feng, Wei; Wang, Bowei; Yu, Bin
2013-01-01
The aim of this study was to use three-dimensional (3D) computational modeling to compare the geometric fitness of two kinds of proximal femoral intramedullary nails, PFNA-II and InterTan, in Chinese femurs. Computed tomography (CT) scans of a total of 120 normal adult Chinese cadaveric femurs were collected for analysis. With 3D computational technology, the anatomical fitness between nail and bone was quantified according to the impingement incidence and the maximum thicknesses and lengths by which the nail protruded into the cortex in the virtual bone model, respectively, at the proximal, middle, and distal portions of the implant in the femur. The results showed that PFNA-II may fit the Chinese proximal femur better than InterTan, and that the distal portion of InterTan may perform better than that of PFNA-II; the anatomic fitness of both nails for Chinese patients may not be very satisfactory. As a result, both implants need further modifications to meet the needs of the Chinese population.
Virtual Reality Simulation of the Effects of Microgravity in Gastrointestinal Physiology
NASA Technical Reports Server (NTRS)
Compadre, Cesar M.
1998-01-01
The ultimate goal of this research is to create an anatomically accurate three-dimensional (3D) simulation model of the effects of microgravity on gastrointestinal physiology and to explore the role that such changes may have in the pharmacokinetics of drugs given to space crews for prevention or therapy. To accomplish this goal the specific aims of this research are: 1) To generate complete 3-D reconstructions of the human GastroIntestinal (GI) tract of the male and female Visible Humans. 2) To develop and implement time-dependent computer algorithms to simulate GI motility using the above 3-D reconstructions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujimoto, K; Yuasa, Y; Shiinoki, T
Purpose: A commercially available bolus (commercial bolus) does not completely conform to the irregular shape of a patient's skin. The purposes of this study were to customize a patient-specific three-dimensional (3D) bolus using a 3D printer (3D-bolus) and to evaluate its clinical feasibility for photon radiotherapy. Methods: The 3D-bolus was designed using a treatment planning system (TPS) in DICOM-RT format. To print the 3D-bolus, the file was converted into stereolithography format. To evaluate its physical characteristics, plans were created for water-equivalent phantoms without a bolus, with the 3D-bolus printed in a flat form, and with a virtual bolus that simulated a commercial bolus. These plans were compared with the percent depth dose (PDD) obtained from the TPS. Furthermore, to evaluate its clinical feasibility, treatment plans were created for RANDO phantoms without a bolus and with the 3D-bolus, which was customized to conform to the surface of the phantom. Both plans were compared using the dose volume histogram (DVH) of the target volume. Results: In the physical evaluation, dmax of the plan without the bolus, with the 3D-bolus, and with the virtual bolus was 2.2 cm, 1.6 cm, and 1.7 cm, respectively. In the evaluation of clinical feasibility, for the plan without the bolus, Dmax, Dmin, Dmean, D90%, and V90% of the target volume were 102.6 %, 1.6 %, 88.8 %, 57.2 %, and 69.3 %, respectively. With the 3D-bolus, the prescription dose could be delivered to at least 90 % of the target volume: Dmax, Dmin, Dmean, D90%, and V90% of the target volume were 104.3 %, 91.6 %, 92.1 %, 91.7 %, and 98.0 %, respectively. The 3D-bolus has the potential to be useful for providing effective dose coverage in the buildup region. Conclusion: A 3D-bolus produced using a 3D printing technique is comparable to a commercially available bolus.
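The "converted into stereolithography format" step can be sketched as writing triangles to ASCII STL, the de facto input format for 3D printers. The single triangle below is illustrative; a real bolus surface would be triangulated from the DICOM-RT structure contours.

```python
# Minimal sketch of writing a triangle mesh as ASCII STL for 3D printing.
# The geometry is a placeholder, not a bolus surface.

def facet(v1, v2, v3):
    # Facet normal from the right-hand rule (un-normalized vectors are
    # tolerated by most slicers, which recompute normals anyway).
    ux, uy, uz = (v2[i] - v1[i] for i in range(3))
    vx, vy, vz = (v3[i] - v1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    lines = ["  facet normal %g %g %g" % n, "    outer loop"]
    lines += ["      vertex %g %g %g" % v for v in (v1, v2, v3)]
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def ascii_stl(name, triangles):
    body = "\n".join(facet(*t) for t in triangles)
    return "solid %s\n%s\nendsolid %s\n" % (name, body, name)

tri = ((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0))
stl = ascii_stl("bolus", [tri])
```

In practice the contour-to-mesh step (e.g. marching cubes over the structure mask) produces thousands of such facets; the writer is unchanged.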
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Fang, Ting Yun; Finocchi, Rodolfo; Boctor, Emad M.
2017-03-01
Three-dimensional (3D) ultrasound imaging is becoming a standard mode for medical ultrasound diagnosis. Conventional 3D ultrasound imaging is mostly scanned either by using a two-dimensional matrix array or by motorizing a one-dimensional array in the elevation direction. However, the former system is not widely accessible due to its cost, and the latter has limited resolution and field-of-view in the elevation axis. Here, we propose a 3D ultrasound imaging system based on the synthetic tracked aperture approach, in which a robotic arm is used to provide accurate tracking and motion. While the ultrasound probe is moved by the robotic arm, each probe position is tracked and can be used to reconstruct a wider field-of-view, as there are no physical barriers that restrict the elevational scanning. At the same time, synthetic aperture beamforming provides better resolution in the elevation axis. To synthesize the elevational information, the single focal point is regarded as a virtual element, and forward and backward delay-and-sum are applied to the radio-frequency (RF) data collected through the volume. The concept is experimentally validated using a general ultrasound phantom, and elevational resolution improvements of 2.54 and 2.13 times were measured at target depths of 20 mm and 110 mm, respectively.
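The core delay-and-sum idea (each tracked pose contributes one RF trace from a virtual element, and focusing at a point sums samples at the two-way time of flight) can be shown on a 1D toy. The geometry, sampling rate, and the single point scatterer below are fabricated for illustration; the paper's system also applies forward and backward apodization and uses real tracked poses.

```python
import math

# Toy delay-and-sum over a tracked synthetic aperture (illustrative only).
C = 1540.0          # speed of sound in tissue, m/s
FS = 40e6           # RF sampling rate, Hz (assumption)
poses = [(x * 1e-3, 0.0) for x in range(-5, 6)]   # virtual elements, 1 mm apart
target = (0.0, 0.03)                              # scatterer at 30 mm depth

def tof_samples(elem, pt):
    # Round-trip time of flight from element to point, in samples.
    d = math.hypot(pt[0] - elem[0], pt[1] - elem[1])
    return int(round(2.0 * d / C * FS))

# Simulate RF traces: a unit echo at each element's round-trip delay.
traces = []
for e in poses:
    rf = [0.0] * 4000
    rf[tof_samples(e, target)] = 1.0
    traces.append(rf)

def das(pt):
    # Delay-and-sum: pick each trace's sample at the focus delay and sum.
    return sum(rf[tof_samples(e, pt)] for e, rf in zip(poses, traces))

on_target = das(target)          # coherent sum across all 11 poses
off_target = das((0.0, 0.035))   # delays no longer line up with the echoes
```

The coherent sum at the true scatterer versus the collapse off-target is exactly the mechanism that sharpens elevational resolution as the tracked aperture grows.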
NASA Astrophysics Data System (ADS)
Valencia, J.; Muñoz-Nieto, A.; Rodriguez-Gonzalvez, P.
2015-02-01
3D virtual modeling, visualization, dissemination and management of urban areas is one of the most exciting challenges that geomatics must face in the coming years. This paper aims to review, compare and analyze the new technologies, policies and software tools that are in progress to manage urban 3D information. It is assumed that the third dimension increases the quality of the model provided, allowing new approaches to urban planning and to the conservation and management of architectural and archaeological areas. Although displaying 3D urban environments is an issue solved nowadays, there are some challenges that geomatics must face in the near future. Displaying georeferenced linked information can be considered the first challenge. Another challenge is to improve the technical requirements if this georeferenced information must be shown in real time. Are there software tools ready for this challenge? Are they useful for providing the services required in smart cities? Throughout this paper, many practical examples that require 3D georeferenced information and linked data are shown. Computer advances related to 3D spatial databases, and software being developed to convert a rendered virtual environment into a new environment enriched with linked information, are also analyzed. Finally, the different standards that the Open Geospatial Consortium has adopted and developed regarding three-dimensional geographic information are reviewed. Particular emphasis is devoted to KML, LandXML, CityGML and the new IndoorGML.
Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.
Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F
2013-09-01
The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where they exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to be an accurate, realistic and widespread tool, of great benefit to virtual face modeling.
Virtual Sonography Through the Internet: Volume Compression Issues
Vilarchao-Cavia, Joseba; Troyano-Luque, Juan-Mario; Clavijo, Matilde
2001-01-01
Background Three-dimensional ultrasound images allow virtual sonography even at a distance. However, the size of the final 3-D files limits their transmission through slow networks such as the Internet. Objective To analyze compression techniques that transform ultrasound images into small 3-D volumes that can be transmitted through the Internet without loss of relevant medical information. Methods Samples were selected from ultrasound examinations performed during 1999-2000 in the Obstetrics and Gynecology Department at the University Hospital in La Laguna, Canary Islands, Spain. The conventional ultrasound video output was recorded at 25 fps (frames per second) on a PC, producing 100- to 120-MB files (for 500 to 550 frames). Processing to obtain 3-D images progressively reduced file size. Results The original frames passed through different compression stages: selection of the region of interest, rendering techniques, and compression for storage. Final 3-D volumes reached 1:25 compression rates (1.5- to 2-MB files). Those volumes need 7 to 8 minutes to be transmitted through the Internet at a mean data throughput of 6.6 Kbytes per second. At the receiving site, virtual sonography is possible using orthogonal projections or oblique cuts. Conclusions Modern volume-rendering techniques allowed distant virtual sonography through the Internet. This is the result of efficient data compression that maintains the relevant medical information, a main criterion for distant diagnosis. PMID:11720963
The development, assessment and validation of virtual reality for human anatomy instruction
NASA Technical Reports Server (NTRS)
Marshall, Karen Benn
1996-01-01
This research project seeks to meet the objectives of science training by developing, assessing, validating and utilizing VR as a human anatomy training medium. Current anatomy instruction is primarily in the form of lectures and textbooks. In ideal situations, anatomic models, computer-based instruction, and cadaver dissection are used to augment traditional methods of instruction. At many institutions, a lack of financial resources limits anatomy instruction to textbooks and lectures. However, human anatomy is three-dimensional, unlike the flat depictions found in textbooks and on the computer screen. Virtual reality allows one to step through the computer screen into a 3-D artificial world. The primary objective of this project is to produce a virtual reality application of the abdominopelvic region of a human cadaver that can be taken back to the classroom. The hypothesis is that an immersive learning environment affords quicker anatomic recognition and orientation and a greater level of retention in human anatomy instruction. The goal is to augment, not replace, traditional modes of instruction.
ZIP3D: An elastic and elastic-plastic finite-element analysis program for cracked bodies
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Newman, J. C., Jr.
1990-01-01
ZIP3D is an elastic and an elastic-plastic finite element program to analyze cracks in three dimensional solids. The program may also be used to analyze uncracked bodies or multi-body problems involving contacting surfaces. For crack problems, the program has several unique features including the calculation of mixed-mode strain energy release rates using the three dimensional virtual crack closure technique, the calculation of the J integral using the equivalent domain integral method, the capability to extend the crack front under monotonic or cyclic loading, and the capability to close or open the crack surfaces during cyclic loading. The theories behind the various aspects of the program are explained briefly. Line-by-line data preparation is presented. Input data and results for an elastic analysis of a surface crack in a plate and for an elastic-plastic analysis of a single-edge-crack-tension specimen are also presented.
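The virtual crack closure technique named in this abstract estimates the strain energy release rate from the work needed to close the crack over one element length. The standard two-dimensional mode-I form (which ZIP3D generalises to three dimensions) can be sketched as follows; the formula is the textbook VCCT expression, and the input values are hypothetical, not taken from the program.

```python
# Mode-I strain energy release rate by the virtual crack closure technique:
#   G_I = F * dw / (2 * da * b)
# where F is the nodal force at the crack tip, dw is the crack-opening
# displacement one node behind the tip, da is the element length along the
# crack advance direction, and b is the out-of-plane thickness.

def vcct_mode_I(F_tip, dw_behind, da, b):
    """Return G_I in J/m^2 given SI inputs (N, m, m, m)."""
    return F_tip * dw_behind / (2.0 * da * b)

# Hypothetical nodal results from a cracked-plate finite element analysis:
G_I = vcct_mode_I(F_tip=120.0,      # crack-tip nodal force (N)
                  dw_behind=2.0e-6, # opening displacement behind tip (m)
                  da=0.5e-3,        # element length (m)
                  b=1.0e-3)         # element thickness (m)
print(G_I)  # 240.0 J/m^2
```

The mixed-mode rates G_II and G_III follow the same pattern, pairing the sliding and tearing force components with their corresponding relative displacements behind the crack front.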
Wang, Jing; Qiao, Chunxia; Xiao, He; Lin, Zhou; Li, Yan; Zhang, Jiyan; Shen, Beifen; Fu, Tinghuan; Feng, Jiannan
2016-01-01
According to the three-dimensional (3D) structure of the (hIL-6·hIL-6R·gp130)2 complex and the binding orientation of hIL-6, three compounds with high affinity for hIL-6R and bioactivity to block hIL-6 in vitro were screened theoretically from chemical databases, including the 3D-Available Chemicals Directory (ACD) and the MDL Drug Data Report (MDDR), by means of a computer-guided virtual screening method. Using distance geometry, molecular modeling and molecular dynamics trajectory analysis methods, the binding mode and binding energy of the three compounds were evaluated theoretically. Enzyme-linked immunosorbent assay analysis demonstrated that all three compounds could specifically block IL-6 binding to IL-6R. However, only compound 1 could effectively antagonize the function of hIL-6 and inhibit the proliferation of XG-7 cells in a dose-dependent manner, while showing no cytotoxicity to SP2/0 or L929 cells. These data demonstrate that compound 1 could be a promising hIL-6 antagonist candidate.
Ray-based approach to integrated 3D visual communication
NASA Astrophysics Data System (ADS)
Naemura, Takeshi; Harashima, Hiroshi
2001-02-01
For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, the concept of a virtual object surface for the compression of the tremendous amount of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2015-03-01
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s for each pose estimation), which can be improved by implementation in C++. Error analysis produced 3 mm distance error and 2.5 degrees of orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
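Pose errors like the "3 mm / 2.5 degrees" quoted above are conventionally obtained by comparing an estimated rigid pose (R1, t1) against a reference pose (R2, t2): the translational error is the Euclidean distance between positions, and the rotational error is the angle of the relative rotation. This is a generic sketch of that comparison, not the authors' evaluation code.

```python
import numpy as np

def pose_error(R1, t1, R2, t2):
    """Return (translation error, rotation error in degrees) between
    two rigid poses given as 3x3 rotation matrices and 3-vectors."""
    d_trans = np.linalg.norm(t1 - t2)   # Euclidean position error
    R_rel = R1.T @ R2                   # rotation taking pose 1 to pose 2
    cos_a = (np.trace(R_rel) - 1.0) / 2.0
    d_rot = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return d_trans, d_rot

# Example: the measured pose is the calculated pose shifted 3 mm along x
# and rotated 2.5 degrees about z.
a = np.radians(2.5)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
t_err, r_err = pose_error(np.eye(3), np.zeros(3),
                          Rz, np.array([3.0, 0.0, 0.0]))
print(round(t_err, 3), round(r_err, 3))  # 3.0 2.5
```

Averaging these two scalars over many frames along the tool's approach gives the distance and orientation error figures reported in the abstract.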
Crossingham, Jodi L; Jenkinson, Jodie; Woolridge, Nick; Gallinger, Steven; Tait, Gordon A; Moulton, Carol-Anne E
2009-01-01
Background: Given the increasing number of indications for liver surgery and the growing complexity of operations, many trainees in surgical, imaging and related subspecialties require a good working knowledge of the complex intrahepatic anatomy. Computed tomography (CT), the most commonly used liver imaging modality, enhances our understanding of liver anatomy, but comprises a two-dimensional (2D) representation of a complex 3D organ. It is challenging for trainees to acquire the necessary skills for converting these 2D images into 3D mental reconstructions because learning opportunities are limited and internal hepatic anatomy is complicated, asymmetrical and variable. We have created a website that uses interactive 3D models of the liver to assist trainees in understanding the complex spatial anatomy of the liver and to help them create a 3D mental interpretation of this anatomy when viewing CT scans. Methods: Computed tomography scans were imported into DICOM imaging software (OsiriX™) to obtain 3D surface renderings of the liver and its internal structures. Using these 3D renderings as a reference, 3D models of the liver surface and the intrahepatic structures, portal veins, hepatic veins, hepatic arteries and the biliary system were created using 3D modelling software (Cinema 4D™). Results: Using current best practices for creating multimedia tools, a unique, freely available, online learning resource has been developed, entitled Visual Interactive Resource for Teaching, Understanding And Learning Liver Anatomy (VIRTUAL Liver) (http://pie.med.utoronto.ca/VLiver). This website uses interactive 3D models to provide trainees with a constructive resource for learning common liver anatomy and liver segmentation, and facilitates the development of the skills required to mentally reconstruct a 3D version of this anatomy from 2D CT scans. 
Discussion: Although the intended audience for VIRTUAL Liver consists of residents in various medical and surgical specialties, the website will also be useful for other health care professionals (e.g. radiologists, nurses, hepatologists, radiation oncologists, family doctors) and educators because it provides a comprehensive resource for teaching liver anatomy. PMID:19816618
3D Nanofabrication Using AFM-Based Ultrasonic Vibration Assisted Nanomachining
NASA Astrophysics Data System (ADS)
Deng, Jia
Nanolithography and nanofabrication processes have had a significant impact on the recent development of fundamental research areas such as physics, chemistry and biology, as well as on modern electronic devices that have reached the nanoscale domain, such as optoelectronic devices. Many advanced nanofabrication techniques, such as electron-beam lithography, have been developed to satisfy different requirements in both research and applications; however, the equipment is expensive to use and maintain. Atomic force microscope (AFM) based nanolithography processes provide an alternative approach to nanopatterning at significantly lower cost. Recently, three-dimensional nanostructures have attracted much attention, motivated by applications in various fields including optics, plasmonics and nanoelectromechanical systems. AFM nanolithography processes can create not only two-dimensional nanopatterns but also have great potential to fabricate three-dimensional nanostructures. The objectives of this research are to investigate the capability of AFM-based three-dimensional nanofabrication processes, to transfer the three-dimensional nanostructures from resists to silicon surfaces, and to use the three-dimensional nanostructures on silicon in applications. Based on an understanding of the literature, a novel AFM-based ultrasonic vibration assisted nanomachining system is utilized to develop three-dimensional nanofabrication processes. In the system, a high-frequency in-plane circular xy-vibration is introduced to create a virtual tool, whose diameter is controlled by the amplitude of the xy-vibration and is larger than that of a regular AFM tip. Therefore, the feature width of a single trench is tunable. Ultrasonic vibration of the sample in the z-direction is introduced to control the depth of single trenches, creating a high-rate 3D nanomachining process.
Complicated 3D nanostructures on PMMA are fabricated under both the setpoint-force and z-height control modes. Complex contours and both discrete and continuous height changes can be fabricated by the novel 3D nanofabrication processes. Results are imaged clearly after cleaning the debris covering the 3D nanostructures left by the nanomachining process. The process is validated by fabricating various 3D nanostructures, and the advantages and disadvantages of the two control modes are compared. Furthermore, the 3D nanostructures were transferred from PMMA surfaces onto silicon surfaces using a reactive ion etching (RIE) process. Recipes are developed based on the functionality of the etching gas in the transfer process. Tunable selectivity and controllable surface finishes are achieved by varying the flow rate of oxygen. The developed 3D nanofabrication process is used as a novel technique in two applications: master fabrication for soft lithography and SERS substrate fabrication. 3D nanostructures were reverse-molded in PDMS and then duplicated on new PMMA substrates. The fabricated 3D nanostructures can be either directly used or transferred onto silicon as SERS substrates after coating with an 80 nm gold layer. They greatly enhanced the intensity of Raman scattering, with an enhancement factor of 3.11×10^3. These applications demonstrate the capability of the novel AFM-based 3D nanomachining process.
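A SERS enhancement factor such as the 3.11×10^3 quoted above is conventionally computed as the SERS signal per molecule divided by the normal Raman signal per molecule. The sketch below shows that standard ratio; the intensity and molecule-count values are made up for illustration, not taken from the thesis.

```python
# Standard SERS enhancement factor:
#   EF = (I_SERS / N_SERS) / (I_ref / N_ref)
# I_SERS, I_ref: measured Raman intensities on the SERS substrate and on a
#                plain reference substrate
# N_SERS, N_ref: estimated numbers of probed molecules in each measurement

def enhancement_factor(I_sers, N_sers, I_ref, N_ref):
    return (I_sers / N_sers) / (I_ref / N_ref)

# Hypothetical measurement: fewer molecules on the substrate but a much
# stronger signal per molecule.
ef = enhancement_factor(I_sers=622.0, N_sers=1e6,
                        I_ref=2.0, N_ref=1e7)
print(ef)  # 3110.0
```

Estimating N_SERS and N_ref (from surface coverage and laser spot size) usually dominates the uncertainty of such a figure, which is why reported enhancement factors are order-of-magnitude quantities.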
Virtual surgical planning in endoscopic skull base surgery.
Haerle, Stephan K; Daly, Michael J; Chan, Harley H L; Vescan, Allan; Kucharczyk, Walter; Irish, Jonathan C
2013-12-01
Skull base surgery (SBS) involves operative tasks in close proximity to critical structures in a complex three-dimensional (3D) anatomy. The aim was to investigate the value of virtual planning (VP) based on preoperative magnetic resonance imaging (MRI) for surgical planning in SBS and to compare the effects of virtual planning with 3D contours between the expert and the surgeon in training. Retrospective analysis. Twelve patients with manually segmented anatomical structures based on preoperative MRI were evaluated by eight surgeons in a randomized order using a validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire. Multivariate analysis revealed significant reduction of workload when using VP (P<.0001) compared to standard planning. Further, it showed that the experience level of the surgeon had a significant effect on the NASA-TLX differences (P<.05). Additional subanalysis did not reveal any significant findings regarding which type of surgeon benefits the most (P>.05). Preoperative anatomical segmentation with virtual surgical planning using contours in endoscopic SBS significantly reduces the workload for the expert and the surgeon in training. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
3D Printing of Biomolecular Models for Research and Pedagogy
Da Veiga Beltrame, Eduardo; Tyrwhitt-Drake, James; Roy, Ian; Shalaby, Raed; Suckale, Jakob; Pomeranz Krummel, Daniel
2017-01-01
The construction of physical three-dimensional (3D) models of biomolecules can uniquely contribute to the study of the structure-function relationship. 3D structures are most often perceived using the two-dimensional and exclusively visual medium of the computer screen. Converting digital 3D molecular data into real objects enables information to be perceived through an expanded range of human senses, including direct stereoscopic vision, touch, and interaction. Such tangible models facilitate new insights, enable hypothesis testing, and serve as psychological or sensory anchors for conceptual information about the functions of biomolecules. Recent advances in consumer 3D printing technology enable, for the first time, the cost-effective fabrication of high-quality and scientifically accurate models of biomolecules in a variety of molecular representations. However, the optimization of the virtual model and its printing parameters is difficult and time consuming without detailed guidance. Here, we provide a guide on the digital design and physical fabrication of biomolecule models for research and pedagogy using open source or low-cost software and low-cost 3D printers that use fused filament fabrication technology. PMID:28362403
An Interactive Augmented Reality Implementation of Hijaiyah Alphabet for Children Education
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Akbar, F.; Syahputra, M. F.; Budiman, M. A.; Hizriadi, A.
2018-03-01
The Hijaiyah alphabet comprises the letters used in the Qur'an. An attractive and exciting learning process for the Hijaiyah alphabet is necessary for children. One alternative for creating such a learning process is to develop it into a mobile application using augmented reality technology. Augmented reality is a technology that combines two-dimensional or three-dimensional virtual objects with the actual three-dimensional environment and projects them in real time. The application aims to foster children's interest in learning the Hijaiyah alphabet, using a smartphone and a marker as the medium. It was built using Unity and the augmented reality library Vuforia, with Blender as the 3D object modeling software. The output of this research is a learning application for the Hijaiyah letters using augmented reality. It is used as follows: first, place a marker that has been registered and printed; second, the smartphone camera tracks the marker. If the marker is invalid, the user repeats the tracking process. If the marker is valid and identified, the application projects the Hijaiyah alphabet objects onto the marker in three-dimensional form. Lastly, the user can learn and understand the shape and pronunciation of the Hijaiyah alphabet by touching the virtual button on the marker.
Reverse engineering--rapid prototyping of the skull in forensic trauma analysis.
Kettner, Mattias; Schmidt, Peter; Potente, Stefan; Ramsthaler, Frank; Schrodt, Michael
2011-07-01
Rapid prototyping (RP) comprises a variety of automated manufacturing techniques such as selective laser sintering (SLS), stereolithography, and three-dimensional printing (3DP), which use virtual 3D data sets to fabricate solid forms in a layer-by-layer technique. Despite a growing demand for (virtual) reconstruction models in daily forensic casework, maceration of the skull is frequently assigned to ensure haptic evidence presentation in the courtroom. Owing to the progress in the field of forensic radiology, 3D data sets of relevant cases are usually available to the forensic expert. Here, we present a first application of RP in forensic medicine using computed tomography scans for the fabrication of an SLS skull model in a case of fatal hammer impacts to the head. The report is intended to show that this method fully respects the dignity of the deceased and is consistent with medical ethics but nevertheless provides an excellent 3D impression of anatomical structures and injuries. © 2011 American Academy of Forensic Sciences.
3D imaging, 3D printing and 3D virtual planning in endodontics.
Shah, Pratik; Chong, B S
2018-03-01
The adoption and adaptation of recent advances in digital technology, such as three-dimensional (3D) printed objects and haptic simulators, in dentistry have influenced the teaching and/or management of cases involving implant, craniofacial, maxillofacial, orthognathic and periodontal treatments. 3D printed models and guides may help operators plan and tackle complicated non-surgical and surgical endodontic treatment and may aid skill acquisition. Haptic simulators may assist in the development of competency in endodontic procedures through the acquisition of psycho-motor skills. This review explores and discusses the potential applications of 3D printed models and guides, and of haptic simulators, in the teaching and management of endodontic procedures. An understanding of the pertinent technology related to the production of 3D printed objects and the operation of haptic simulators is also presented.
Jiang, Yizhou; Li, Sijie; Li, You; Zeng, Hang; Chen, Qi
2016-07-01
It has been documented that, due to limited attentional resources, the size of the attentional focus is inversely correlated with processing efficiency. Moreover, by adopting a variety of two-dimensional size illusions induced by pictorial depth cues (e.g., the Ponzo illusion), previous studies have revealed that the perceived, rather than the retinal, size of an object determines its detection. It remains unclear, however, whether and how the retinal versus perceived size of a cue influences the process of attentional orienting to subsequent targets, and whether the corresponding processes differ between two-dimensional (2-D) and three-dimensional (3-D) space. In the present study, we combined the dot probe paradigm with either a 2-D Ponzo illusion, induced by pictorial depth cues, or a virtual 3-D world in which the Ponzo illusion turned into visual reality. By varying the retinal size of the cue while keeping its perceived size constant (Exp. 1), we found that a cue with smaller retinal size significantly facilitated attentional orienting as compared to a cue with larger retinal size, and that the effects were comparable between 2-D and 3-D displays. Furthermore, when the pictorial background was removed and the cue display was positioned in either the farther or the closer depth plane (Exp. 2), or when both the depth and the background were removed (Exp. 3), the retinal size, rather than the depth, of the cue still affected attentional orienting. Taken together, our results suggest that the retinal size of a cue plays a crucial role in the visuospatial orienting of attention in both 2-D and 3-D space.
The use of virtual reality to reimagine two-dimensional representations of three-dimensional spaces
NASA Astrophysics Data System (ADS)
Fath, Elaine
2015-03-01
A familiar realm in the world of two-dimensional art is the craft of taking a flat canvas and creating, through color, size, and perspective, the illusion of a three-dimensional space. Using well-explored tricks of logic and sight, impossible landscapes such as those by surrealists de Chirico or Salvador Dalí seem to be windows into new and incredible spaces which appear to be simultaneously feasible and utterly nonsensical. As real-time 3D imaging becomes increasingly prevalent as an artistic medium, this process takes on an additional layer of depth: no longer is two-dimensional space restricted to strategies of light, color, line and geometry to create the impression of a three-dimensional space. A digital interactive environment is a space laid out in three dimensions, allowing the user to explore impossible environments in a way that feels very real. In this project, surrealist two-dimensional art was researched and reimagined: what would stepping into a de Chirico or a Magritte look and feel like, if the depth and distance created by light and geometry were not simply single-perspective illusions, but fully formed and explorable spaces? 3D environment-building software is allowing us to step into these impossible spaces in ways that 2D representations leave us yearning for. This art project explores what we gain--and what gets left behind--when these impossible spaces become doors, rather than windows. Using sketching, Maya 3D rendering software, and the Unity Engine, surrealist art was reimagined as a fully navigable real-time digital environment. The surrealist movement and its key artists were researched for their use of color, geometry, texture, and space and how these elements contributed to their work as a whole, which often conveys feelings of unexpectedness or uneasiness. The end goal was to preserve these feelings while allowing the viewer to actively engage with the space.
Sharaf, Basel; Sabbagh, M Diya; Vijayasekaran, Aparna; Allen, Mark; Matsumoto, Jane
2018-04-30
Primary sarcomas of the sternum are extremely rare and present the surgical teams involved with unique challenges. Historically, local muscle flaps have been utilized to reconstruct the resulting defect. However, when the oncologic defect is larger than anticipated, when local tissues have been radiated, or when preservation of the chest wall muscles is necessary to optimize function, local reconstructive options are unsuitable. Virtual surgical planning (VSP) and in-house three-dimensional (3D) printing provide a platform for improved understanding of the anatomy of complex tumours, communication amongst surgeons, and meticulous preoperative planning. We present the novel use of this technology in the multidisciplinary surgical care of a 35-year-old male with a primary sarcoma of the sternum. Emphasis on minimizing morbidity, maintaining the function of the chest wall muscles, and preserving the internal mammary vessels for microvascular anastomosis is discussed. While the majority of patients at our institution receive local or regional flaps for reconstruction of thoracic defects, advances in microvascular surgery give the reconstructive surgeon the latitude to choose other flap options if necessary. VSP and 3D printing allowed the surgical team to reconstruct the defect with free tissue transfer from the thigh. Preservation of the internal mammary vessels was paramount during tumor extirpation. Virtual surgical planning and rapid prototyping are a useful adjunct to standard imaging in complex chest wall resection and reconstruction. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Virtual Reality Enhanced Instructional Learning
ERIC Educational Resources Information Center
Nachimuthu, K.; Vijayakumari, G.
2009-01-01
Virtual Reality (VR) is a creation of virtual 3D world in which one can feel and sense the world as if it is real. It is allowing engineers to design machines and Educationists to design AV [audiovisual] equipment in real time but in 3-dimensional hologram as if the actual material is being made and worked upon. VR allows a least-cost (energy…
Reverse Engineering and 3D Modelling for Digital Documentation of Maritime Heritage
NASA Astrophysics Data System (ADS)
Menna, F.; Nocerino, E.; Scamardella, A.
2011-09-01
…heritage in general. Although this has been stressed with emphasis, three-dimensional modelling of maritime cultural heritage is still not as common as it is for archaeology and architecture. Three-dimensional modelling of maritime heritage has particular requirements. Objects to be recorded range from small replicas in maritime museums up to full-scale vessels still in operation. High geometric accuracy, photorealism of the final model and faithful rendering of salient details are usually needed, together with the classical requisites characterising the 3D modelling-from-reality process, i.e. automation, low cost, reliability and flexibility of the modelling technique. In this paper, a hybrid multi-technique approach is proposed for maritime heritage preservation and, as a case study, the 3D modelling of a 3-meter-long scale model of a historic warship, the "Indomito", is presented. The survey is part of a wider project aiming to realize the virtual maritime museum of Parthenope University of Naples, making its cultural heritage available to a wider public and preserving it. Preliminary results are presented and discussed, highlighting relevant aspects that emerged during the experiment.
Intra-operative 3D imaging system for robot-assisted fracture manipulation.
Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S
2015-01-01
Reduction is a crucial step in the treatment of broken bones. Achieving precise anatomical alignment of the bone fragments is essential for a good, fast healing process. Percutaneous techniques are associated with faster recovery times and lower infection risk. However, deducing the desired reduction position intra-operatively is quite challenging with the currently available technology. The 2D nature of this technology (i.e. the image intensifier) does not provide enough information to the surgeon regarding fracture alignment and rotation, which is actually a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system is able to receive and segment CT scan data of the fracture, generate the 3D models of the bone fragments, and display them on a GUI. A commercial optical tracker was included in the system to track the actual pose of the bone fragments in physical space and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reduction accordingly. The system is evaluated here through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71° (rotational RMSE).
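Accuracy figures like "1.04 ± 0.69 mm (translational RMSE)" summarise repeated reduction trials by a root-mean-square error. The sketch below shows that standard summary over a set of hypothetical per-trial errors; it is a generic illustration of the metric, not the authors' evaluation code.

```python
import numpy as np

def rmse(errors):
    """Root-mean-square error over a sequence of per-trial errors."""
    errors = np.asarray(errors, dtype=float)
    return np.sqrt(np.mean(errors ** 2))

# Hypothetical residual translational errors (mm) from five reduction
# trials, e.g. distances between target and achieved fragment positions.
trials_mm = [0.6, 1.2, 0.9, 1.5, 0.8]
print(round(rmse(trials_mm), 3))
```

The rotational figure is computed the same way, with each trial's error taken as the angle of the residual rotation between the planned and achieved fragment orientations; the ± value then reports the spread across fractures or trials.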
Systems and Methods for Data Visualization Using Three-Dimensional Displays
NASA Technical Reports Server (NTRS)
Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)
2017-01-01
Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the 3D objects to reflect its visibility based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
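The claimed pipeline — load points, map data dimensions to visualization attributes, update a per-object visibility dimension — can be sketched in a few lines. This is an illustration of the idea only; `apply_mappings` and its arguments are invented names, not the patent's actual interface:

```python
def apply_mappings(points, mappings, visible_if):
    """Derive per-object visualization attributes from raw data dimensions.

    points:     list of dicts, one per multidimensional data point.
    mappings:   visualization attribute -> data dimension name.
    visible_if: predicate on a point that sets its visibility dimension.
    """
    objects = []
    for p in points:
        obj = {attr: p[dim] for attr, dim in mappings.items()}
        obj["visible"] = visible_if(p)  # the per-object visibility dimension
        objects.append(obj)
    return objects

points = [
    {"mass": 2.0, "temp": 300.0, "flux": 0.1},
    {"mass": 5.0, "temp": 900.0, "flux": 0.7},
]
# Map data dimensions onto spatial position and size attributes:
mappings = {"x": "mass", "y": "temp", "size": "flux"}
objects = apply_mappings(points, mappings, visible_if=lambda p: p["temp"] > 500)
print(objects)
```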
Zhang, Nan; Liu, Shuguang; Hu, Zhiai; Hu, Jing; Zhu, Songsong; Li, Yunfeng
2016-08-01
This study aims to evaluate the accuracy of virtual surgical planning in two-jaw orthognathic surgery via quantitative comparison of preoperatively planned and postoperative actual skull models. Thirty consecutive patients who required two-jaw orthognathic surgery were included. A composite skull model was reconstructed by using Digital Imaging and Communications in Medicine (DICOM) data from spiral computed tomography (CT) and stereolithography (STL) data from surface scanning of the dental arch. LeFort I osteotomy of the maxilla and bilateral sagittal split ramus osteotomy of the mandible were simulated by using Dolphin Imaging 11.7 Premium (Dolphin Imaging and Management Solutions, Chatsworth, CA). Genioplasty was performed, if indicated. The virtual plan was then transferred to the operating room by using three-dimensional (3-D)-printed surgical templates. Linear and angular differences between the virtually simulated and postoperative skull models were evaluated. The virtual surgical planning was successfully transferred to actual surgery with the help of the 3-D-printed surgical templates. All patients were satisfied with the postoperative facial profile and occlusion. The overall mean linear difference was 0.81 mm (0.71 mm for the maxilla and 0.91 mm for the mandible), and the overall mean angular difference was 0.95 degrees. Virtual surgical planning and 3-D-printed surgical templates facilitated the diagnosis, treatment planning, and accurate repositioning of bony segments in two-jaw orthognathic surgery.
Algorithms for Haptic Rendering of 3D Objects
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay; Ho, Chih-Hao; Srinivasan, Mandayam
2003-01-01
Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
Virtual Reality as an Educational and Training Tool for Medicine.
Izard, Santiago González; Juanes, Juan A; García Peñalvo, Francisco J; Estella, Jesús Mª Gonçalvez; Ledesma, Mª José Sánchez; Ruisoto, Pablo
2018-02-01
Until very recently, we thought of Virtual Reality as something very close, yet still science fiction. Today, however, Virtual Reality is being integrated into many different areas of our lives, from videogames to industrial use cases and, of course, it is starting to be used in medicine. There are two broad classifications of Virtual Reality. In the first, we visualize a world created entirely by computer, three-dimensional, and we can tell that the world we are visualizing is not real, at least for the moment, as rendered images are improving very fast. The second basically consists of a reflection of our reality: it is created using spherical or 360° images and videos, so we lose three-dimensional visualization capacity (until 3D cameras are more developed), but on the other hand we gain in the realism of the images. We could also mention a third classification that merges the previous two, where virtual elements created by computer coexist with 360° images and videos. In this article we present two systems we have developed, each of which can be framed within one of the previous classifications, identifying the technologies used for their implementation as well as the advantages of each one. We also analyze how these systems can improve the current methodologies used for medical training. The implications of these developments as tools for teaching, learning and training are discussed.
Fully Three-Dimensional Virtual-Reality System
NASA Technical Reports Server (NTRS)
Beckman, Brian C.
1994-01-01
Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.
The Virtual Pelvic Floor, a tele-immersive educational environment.
Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.
1999-01-01
This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting table format virtual reality displays, are networked together providing an environment where teacher and students share a high quality three-dimensional anatomical model, and are able to converse, see each other, and to point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378
Nakata, Norio; Suzuki, Naoki; Hattori, Asaki; Hirai, Naoya; Miyamoto, Yukio; Fukuda, Kunihiko
2012-01-01
Although widely used as a pointing device on personal computers (PCs), the mouse was originally designed for control of two-dimensional (2D) cursor movement and is not suited to complex three-dimensional (3D) image manipulation. Augmented reality (AR) is a field of computer science that involves combining the physical world and an interactive 3D virtual world; it represents a new 3D user interface (UI) paradigm. A system for 3D and four-dimensional (4D) image manipulation has been developed that uses optical tracking AR integrated with a smartphone remote control. The smartphone is placed in a hard case (jacket) with a 2D printed fiducial marker for AR on the back. It is connected to a conventional PC with an embedded Web camera by means of WiFi. The touch screen UI of the smartphone is then used as a remote control for 3D and 4D image manipulation. Using this system, the radiologist can easily manipulate 3D and 4D images from computed tomography and magnetic resonance imaging in an AR environment with high-quality image resolution. Pilot assessment of this system suggests that radiologists will be able to manipulate 3D and 4D images in the reading room in the near future. Supplemental material available at http://radiographics.rsna.org/lookup/suppl/doi:10.1148/rg.324115086/-/DC1.
Augmented reality 3D display based on integral imaging
NASA Astrophysics Data System (ADS)
Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua
2017-02-01
Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light, and as transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens array HOE, giving the display a double-sided 3D capability.
Dobbe, J G G; Vroemen, J C; Strackee, S D; Streekstra, G J
2014-11-01
Preoperative three-dimensional planning methods have been described extensively. However, transferring the virtual plan to the patient is often challenging. In this report, we describe the management of a severely malunited distal radius fracture using a patient-specific plate for accurate spatial positioning and fixation. Twenty months postoperatively the patient shows almost painless reconstruction and a nearly normal range of motion.
Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L
2016-04-01
Evidence suggests high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would avoid the potential misinterpretations of typical histology-based approaches to validation, whose errors are estimated at 1 mm. The method can be used to create annotated datasets and to develop automated plaque classification methods, and can be extended to other intravascular imaging modalities.
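The key optimization step above — adjusting virtual-catheter parameters to maximize lumen overlap — can be illustrated in miniature. The toy below grid-searches a 2-D shift that maximizes the Dice overlap of two binary lumen masks; the actual method optimizes an 11-parameter model in 3-D, so treat this as a didactic sketch only:

```python
def dice(a, b):
    """Dice overlap of two same-sized binary masks (lists of lists of 0/1)."""
    inter = sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    total = sum(sum(r) for r in a) + sum(sum(r) for r in b)
    return 2.0 * inter / total if total else 1.0

def shift(mask, dy, dx):
    """Shift a binary mask by (dy, dx), filling exposed cells with 0."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            yy, xx = y - dy, x - dx
            if 0 <= yy < h and 0 <= xx < w:
                out[y][x] = mask[yy][xx]
    return out

def register(fixed, moving, search=2):
    """Grid search for the shift of `moving` maximizing Dice with `fixed`."""
    return max((dice(fixed, shift(moving, dy, dx)), dy, dx)
               for dy in range(-search, search + 1)
               for dx in range(-search, search + 1))

fixed = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
moving = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(register(fixed, moving))  # best (dice, dy, dx) found
```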
Nerves of Steel: a Low-Cost Method for 3D Printing the Cranial Nerves.
Javan, Ramin; Davidson, Duncan; Javan, Afshin
2017-10-01
Steady-state free precession (SSFP) magnetic resonance imaging (MRI) can demonstrate details down to the cranial nerve (CN) level. High-resolution three-dimensional (3D) visualization can now be performed quickly at the workstation. However, we are still limited by visualization on flat screens. The emerging technologies in rapid prototyping, or 3D printing, overcome this limitation. 3D printing comprises a variety of automated manufacturing techniques, which use virtual 3D data sets to fabricate solid forms in a layer-by-layer technique. The complex neuroanatomy of the CNs may be better understood and depicted by the use of highly customizable, advanced 3D-printed models. In this technical note, after manually perfecting the segmentation of each CN and the brain stem on each SSFP-MRI image, initial 3D reconstruction was performed. The bony skull base was also reconstructed from computed tomography (CT) data. Autodesk 3D Studio Max, available through a freeware student/educator license, was used to three-dimensionally trace the 3D-reconstructed CNs in order to create smooth, graphically designed CNs and to assure proper fitting of the CNs into their respective neural foramina and fissures. The model was then 3D printed in polyamide through a commercial online service. Two different methods are discussed for the key segmentation and 3D reconstruction steps: either using professional commercial software, i.e., Materialise Mimics, or using a combination of the widely available software Adobe Photoshop and the freeware software OsiriX Lite.
Direct-Write 3D Nanoprinting of Plasmonic Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkler, Robert; Schmidt, Franz-Philipp; Karl-Franzens Univ.
2016-11-23
During the past decade, significant progress has been made in the field of resonant optics, ranging from fundamental aspects to concrete applications. And while several techniques have been introduced for the fabrication of highly defined metallic nanostructures, the synthesis of complex, free-standing three-dimensional (3D) structures is still an intriguing, but so far intractable, challenge. Here, we demonstrate a 3D direct-write synthesis approach that addresses this challenge. Specifically, we succeeded in the direct-write fabrication of 3D nanoarchitectures via electron-stimulated reactions, which are applicable on virtually any material and surface morphology. Furthermore, complex 3D nanostructures composed of highly compact, pure gold can be fabricated, which reveal strong plasmonic activity and pave the way for a new generation of 3D nanoplasmonic architectures that can be printed on demand.
NASA Astrophysics Data System (ADS)
Saito, A.; Takahashi, M.; Tsugawa, T.; Nishi, N.; Odagi, Y.; Yoshida, D.
2009-12-01
Three-dimensional display of the Earth is one of the most effective ways to show audiences how the Earth looks and to convey that the Earth is one system. There are several projects that display global data on 3D globes, such as Science on a Sphere by NOAA and Geo Cosmos by Miraikan, Japan. They have been very successful in giving audiences opportunities to learn about geoscience results through the feeling of standing in front of the "real" Earth. However, those systems are too large, complicated, and expensive to be used in classrooms and local science museums. We developed an easy method to display global geoscience data in three dimensions without any complex or expensive systems. The method uses a normal PC projector, a PC, and a hemispheric screen. To display the geoscience data, virtual globe software such as Google Earth and NASA World Wind is used. The virtual globe software performs the geometry conversion: the fringe areas are shrunk, as if the globe were viewed from space. Thus, when the image made by the virtual globe is projected onto the hemispheric screen, the distortion is reversed and the image recovers its original shape on the Earth. This method does not require any specific software, projectors, or polarizing glasses to make a 3D presentation of the Earth; only a hemispheric screen, which can be purchased for about $50 for a 60 cm diameter, is necessary. Dagik Earth is the project that develops and demonstrates educational programs in geoscience for classrooms and science museums using this 3D Earth presentation method. We have developed programs on the aurora and the weather system, and have demonstrated them in undergraduate-level classes and science museums, such as the National Museum of Nature and Science, Tokyo, the Shizuoka Science Center, and the Kyoto University Museum, since 2007. A package of hardware, geoscience data plots, and a textbook has been developed for short-term rental to schools and science museums.
Portability, low cost, and ease of developing new content are advantages of Dagik Earth compared with other similar 3D systems.
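The geometry conversion described above can be made concrete under one assumption (mine, not the project's published specification): if the globe is rendered as seen from far away, a surface point at angular distance theta from the disc centre lands at normalized image radius rho = sin(theta). Equal angular bands therefore occupy shrinking radial widths toward the rim, and projecting the flat image onto a hemispherical screen stretches them back to equal size:

```python
import math

def image_radius(theta_deg):
    """Normalized radius in the flat 'view from space' image of a surface
    point at angular distance theta_deg from the disc centre (orthographic
    projection assumption)."""
    return math.sin(math.radians(theta_deg))

# Radial width of each 30-degree angular band in the flat image: the bands
# shrink toward the rim, which is the fringe compression the hemispheric
# screen undoes.
prev = 0.0
for theta in (30, 60, 90):
    rho = image_radius(theta)
    print(f"band up to {theta:2d} deg: radial width {rho - prev:.3f}")
    prev = rho
```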
A specification of 3D manipulation in virtual environments
NASA Technical Reports Server (NTRS)
Su, S. Augustine; Furuta, Richard
1994-01-01
In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.
Chen, Jian; Smith, Andrew D; Khan, Majid A; Sinning, Allan R; Conway, Marianne L; Cui, Dongmei
2017-11-01
Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models, including the facial skull, nasal cavity, septum, turbinates, paranasal sinuses, optic nerve, pituitary gland, carotid artery, cervical vertebrae, atlanto-axial joint, cervical spinal cord, cervical nerve roots, and vertebral artery, that can be used to teach clinical trainees (students, residents, and fellows) approaches for trans-sphenoidal pituitary surgery and the cervical spine injection procedure. Volume rendering, surface rendering, and a new rendering technique, semi-auto-combined, were applied in the study. These models enable visualization, manipulation, and interaction on a computer and can be presented in a stereoscopic 3D virtual environment, which makes users feel as if they are inside the model. Anat Sci Educ 10: 598-606. © 2017 American Association of Anatomists.
Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target
NASA Astrophysics Data System (ADS)
Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.
2016-06-01
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. In particular, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.
Fang, C H; LauWan, Y Y; Cai, W
2017-01-01
It has been almost 10 years since digital medical technology started to be commonly used in general surgery in China. Led by advances in three-dimensional (3D) visualization technology, virtual reality, simulation surgery, and 3D printing, digital medical technology has played an important role in making the current practice of general surgery in China more effective: diagnostic accuracy has improved, therapeutic procedures are better chosen, surgical success rates have increased, and surgical risks have decreased. Furthermore, the education of medical students and young doctors has become better and easier.
NASA Astrophysics Data System (ADS)
Dawson, P.; Gage, J.; Takatsuka, M.; Goyette, S.
2009-02-01
To compete with other digital images, holograms must go beyond the current range of source-image types, such as sequences of photographs, laser scans, and 3D computer graphics (CG) scenes made with software designed for other applications. This project develops a set of innovative techniques for creating 3D digital content specifically for digital holograms, with virtual tools which enable the direct hand-crafting of subjects, mark by mark, analogous to Michelangelo's practice in drawing, painting and sculpture. The haptic device, the Phantom Premium 1.5, is used to draw against three-dimensional laser-scan templates of Michelangelo's sculpture placed within the holographic viewing volume.
ERIC Educational Resources Information Center
D'Alba, Adriana
2012-01-01
The main purpose of this mixed methods research was to explore and analyze visitors' overall experience while they attended a museum exhibition, and examine how this experience was affected by previously using a virtual three-dimensional representation of the museum itself. The research measured knowledge acquisition in a virtual museum, and compared…
SutraPrep, a pre-processor for SUTRA, a model for ground-water flow with solute or energy transport
Provost, Alden M.
2002-01-01
SutraPrep facilitates the creation of three-dimensional (3D) input datasets for the USGS ground-water flow and transport model SUTRA Version 2D3D.1. It is most useful for applications in which the geometry of the 3D model domain and the spatial distribution of physical properties and boundary conditions are relatively simple. SutraPrep can be used to create a SUTRA main input (".inp") file, an initial conditions (".ics") file, and a 3D plot of the finite-element mesh in Virtual Reality Modeling Language (VRML) format. Input and output are text-based. The code can be run on any platform that has a standard FORTRAN-90 compiler. Executable code is available for Microsoft Windows.
Application of 3D Zernike descriptors to shape-based ligand similarity searching.
Venkatraman, Vishwesh; Chakravarthy, Padmasini Ramji; Kihara, Daisuke
2009-12-17
The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. In this study, we have examined the efficacy of the alignment-independent three-dimensional Zernike descriptor (3DZD) for fast shape-based similarity searching. Performance of this approach was compared with several other methods, including the statistical-moments-based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets are used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance in a situation that simulates actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than, or comparably to, the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and failure of the shape-based methods in specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. The 3DZD has a unique ability for fast comparison of the three-dimensional shape of compounds. Examples analyzed illustrate the advantages of, and the room for improvement in, the 3DZD.
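Of the comparison methods named above, USR is compact enough to sketch in full: it summarizes each atom's distances to four reference points (the centroid, the atom closest to it, the atom farthest from it, and the atom farthest from that one) by three statistical moments each, giving a 12-number signature, and scores similarity as the inverse of one plus the mean absolute descriptor difference. A minimal pure-Python version, following the commonly described formulation (details such as the cube-rooted skewness are my reading, so verify against the original USR description before relying on it):

```python
import math

def _dists(atoms, ref):
    return [math.dist(a, ref) for a in atoms]

def _moments(d):
    """Mean, standard deviation, and cube-rooted third central moment."""
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in d) / n
    cbrt_skew = math.copysign(abs(skew) ** (1 / 3), skew) if sd else 0.0
    return [mean, sd, cbrt_skew]

def usr_descriptor(atoms):
    """12-element USR shape descriptor from a list of (x, y, z) coordinates."""
    n = len(atoms)
    ctd = tuple(sum(a[i] for a in atoms) / n for i in range(3))  # centroid
    cst = min(atoms, key=lambda a: math.dist(a, ctd))  # closest to centroid
    fct = max(atoms, key=lambda a: math.dist(a, ctd))  # farthest from centroid
    ftf = max(atoms, key=lambda a: math.dist(a, fct))  # farthest from fct
    desc = []
    for ref in (ctd, cst, fct, ftf):
        desc.extend(_moments(_dists(atoms, ref)))
    return desc

def usr_similarity(d1, d2):
    """USR score in (0, 1]; 1 means identical descriptors."""
    return 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(d1, d2)) / len(d1))

mol = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0), (0.0, 0.0, 1.5)]
print(usr_similarity(usr_descriptor(mol), usr_descriptor(mol)))  # → 1.0
```

Because the reference points are internal to the molecule, the descriptor is invariant to rigid translation, which is what makes USR alignment-free.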
Application of 3D Zernike descriptors to shape-based ligand similarity searching
2009-01-01
Background: The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. Results: In this study, we have examined the efficacy of the alignment-independent three-dimensional Zernike descriptor (3DZD) for fast shape-based similarity searching. Performance of this approach was compared with several other methods, including the statistical-moments-based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets are used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance in a situation that simulates actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than, or comparably to, the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and failure of the shape-based methods in specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. Conclusion: The 3DZD has a unique ability for fast comparison of the three-dimensional shape of compounds. Examples analyzed illustrate the advantages of, and the room for improvement in, the 3DZD. PMID:20150998
Multiple object, three-dimensional motion tracking using the Xbox Kinect sensor
NASA Astrophysics Data System (ADS)
Rosi, T.; Onorato, P.; Oss, S.
2017-11-01
In this article we discuss the capability of the Xbox Kinect sensor to acquire three-dimensional motion data of multiple objects. Two experiments regarding fundamental features of Newtonian mechanics are performed to test the tracking abilities of our setup. Particular attention is paid to checking and visualising the conservation of linear momentum, angular momentum and energy. In both experiments, two objects are tracked while falling in the gravitational field. The obtained data are visualised in a 3D virtual environment to help students understand the physics behind the performed experiments. The proposed experiments were analysed with a group of university students who are aspiring physics and mathematics teachers. Their comments are presented in this paper.
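The conservation check described can be sketched numerically: estimate velocities from tracked positions by finite differences, then compare total linear momentum before and after an interaction. The tracks below are synthetic stand-ins for Kinect data (an elastic collision of a 2 kg body at 1.5 m/s with a 1 kg body at rest), not measurements from the paper:

```python
def velocities(track, dt):
    """Finite-difference velocity samples for a list of (x, y, z) positions."""
    return [tuple((b[i] - a[i]) / dt for i in range(3))
            for a, b in zip(track, track[1:])]

def total_momentum(m1, v1, m2, v2):
    """Total linear momentum of two bodies with masses m1 and m2."""
    return tuple(m1 * a + m2 * b for a, b in zip(v1, v2))

dt = 1 / 30  # Kinect-like frame interval, in seconds
m1, m2 = 2.0, 1.0

# Synthetic tracks along x: before the collision, body 1 moves at 1.5 m/s and
# body 2 is at rest; afterwards they move at 0.5 m/s and 2.0 m/s respectively.
track1_pre = [(1.5 * k * dt, 0.0, 0.0) for k in range(3)]
track2_pre = [(2.0, 0.0, 0.0)] * 3
track1_post = [(2.0 + 0.5 * k * dt, 0.0, 0.0) for k in range(3)]
track2_post = [(2.0 + 2.0 * k * dt, 0.0, 0.0) for k in range(3)]

p_before = total_momentum(m1, velocities(track1_pre, dt)[0],
                          m2, velocities(track2_pre, dt)[0])
p_after = total_momentum(m1, velocities(track1_post, dt)[0],
                         m2, velocities(track2_post, dt)[0])
print(p_before[0], p_after[0])  # x-momentum before and after should agree
```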
Visual selective attention with virtual barriers.
Schneider, Darryl W
2017-07-01
Previous studies have shown that interference effects in the flanker task are reduced when physical barriers (e.g., hands) are placed around rather than below a target flanked by distractors. One explanation of this finding is the referential coding hypothesis, whereby the barriers serve as reference objects for allocating attention. In five experiments, the generality of the referential coding hypothesis was tested by investigating whether interference effects are modulated by the placement of virtual barriers (e.g., parentheses). Modulation of flanker interference was found only when target and distractors differed in size and the virtual barriers were beveled wood-grain objects. Under these conditions and those of previous studies, the author conjectures that an impression of depth was produced when the barriers were around the target, such that the target was perceived to be on a different depth plane than the distractors. Perception of depth in the stimulus display might have led to referential coding of the stimuli in three-dimensional (3-D) space, influencing the allocation of attention beyond the horizontal and vertical dimensions. This 3-D referential coding hypothesis is consistent with research on selective attention in 3-D space that shows flanker interference is reduced when target and distractors are separated in depth.
Memory and visual search in naturalistic 2D and 3D environments
Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.
2016-01-01
The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769
3D Tracking Based Augmented Reality for Cultural Heritage Data Management
NASA Astrophysics Data System (ADS)
Battini, C.; Landi, G.
2015-02-01
The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations, augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about the status of cultural heritage is crucial for its interpretation and conservation, and during restoration processes. The application of digital-imaging solutions for feature extraction, image data analysis, and three-dimensional reconstruction of ancient artworks allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware are rapidly evolving, and complex three-dimensional models can be interactively visualised and explored in applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that allows interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.
Heuts, Samuel; Maessen, Jos G.
2016-01-01
Over the past decades, surgeries have become more complex due to the increasing age of the patient population referred for thoracic surgery, more complex pathology, and the emergence of minimally invasive thoracic surgery. Together with the early detection of thoracic disease made possible by innovations in diagnostics and the paradigm shift to personalized medicine, preoperative planning is becoming an indispensable and crucial aspect of surgery. Several new techniques facilitating this paradigm shift have emerged. Preoperative marking and staining of lesions are already widely accepted methods of preoperative planning in thoracic surgery. However, three-dimensional (3D) image reconstruction, virtual simulation and rapid prototyping (RP) are still in the development phase. These new techniques are expected to become an important part of the standard work-up of patients undergoing thoracic surgery in the future. This review aims at graphically presenting and summarizing these new diagnostic and therapeutic tools. PMID:29078505
The use of 3D-printed titanium mesh tray in treating complex comminuted mandibular fractures
Ma, Junli; Ma, Limin; Wang, Zhifa; Zhu, Xiongjie; Wang, Weijian
2017-01-01
Abstract Rationale: Precise bony reduction and reconstruction of optimal contour in treating comminuted mandibular fractures is very difficult using traditional techniques and devices. The aim of this report is to introduce our experience in using virtual surgery and three-dimensional (3D) printing techniques in treating this clinical challenge. Patient concerns: A 26-year-old man presented with severe trauma in the maxillofacial area due to a fall from height. Diagnosis: Computed tomography images revealed midface fractures and a comminuted mandibular fracture including bilateral condyles. Interventions and outcomes: The computed tomography data were used to construct 3D cranio-maxillofacial models; the displaced bone fragments were then virtually reduced. On the basis of the finalized model, a customized titanium mesh tray was designed and fabricated using selective laser melting technology. During the surgery, a submandibular approach was adopted to repair the mandibular fracture. The reduction and fixation were performed according to the preoperative plan, and the bone defects in the mental area were reconstructed with an iliac bone graft. The 3D-printed mesh tray served as an intraoperative template and carrier of the bone graft. The healing process was uneventful, and the patient was satisfied with the mandible contour. Lessons: Virtual surgical planning combined with 3D printing technology enables the surgeon to visualize the reduction process preoperatively and guide intraoperative reduction, making the reduction less time-consuming and more precise. A 3D-printed titanium mesh tray can provide more satisfactory esthetic outcomes in treating complex comminuted mandibular fractures. PMID:28682875
ERIC Educational Resources Information Center
Neubauer, Aljoscha C.; Bergner, Sabine; Schatz, Martina
2010-01-01
The well-documented sex difference in mental rotation favoring males has been shown to emerge only for 2-dimensional presentations of 3-dimensional objects, but not with actual 3-dimensional objects or with virtual reality presentations of 3-dimensional objects. Training studies using computer games with mental rotation-related content have…
Augmented reality on poster presentations, in the field and in the classroom
NASA Astrophysics Data System (ADS)
Hawemann, Friedrich; Kolawole, Folarin
2017-04-01
Augmented reality (AR) is the direct addition of virtual information, through an interface, to a real-world environment. In practice, through a mobile device such as a tablet or smartphone, information can be projected onto a target, for example, an image on a poster. Mobile devices are so widely distributed today that augmented reality is easily accessible to almost everyone. Numerous studies have shown that multi-dimensional visualization is essential for efficient perception of the spatial, temporal and geometrical configuration of geological structures and processes. Print media, such as posters and handouts, lack the ability to display content in the third and fourth dimensions, which might be in the space domain, as seen in three-dimensional (3-D) objects, or the time domain (four-dimensional, 4-D), expressible in the form of videos. Here, we show that augmented reality content can be complementary to geoscience poster presentations, hands-on material and field work. In the latter case, location-based data are loaded so that, for example, a virtual geological profile can be draped over a real-world landscape. In object-based AR, the application is trained to recognize an image or object through the camera of the user's mobile device, such that specific content is automatically downloaded, displayed on the screen of the device, and positioned relative to the trained image or object. We used ZapWorks, a commercially available software application, to create and present examples of poster-based content in which important supplementary information is presented as interactive virtual images, videos and 3-D models. We suggest that the flexibility and real-time interactivity offered by AR make it an invaluable tool for effective geoscience poster presentation, classroom and field geoscience learning.
Demonstration of three gorges archaeological relics based on 3D-visualization technology
NASA Astrophysics Data System (ADS)
Xu, Wenli
2015-12-01
This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D visualization technology, which includes large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling and integration. Firstly, abundant archaeological information is classified according to its historical and geographical context. Secondly, a 3D model library is built up with digital image processing and 3D modeling technology. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.
Research on the Digital Communication and Development of Yunnan Bai Embroidery
NASA Astrophysics Data System (ADS)
Xu, Wu; Jin, Chunjie; Su, Ying; Wu, Lei; He, Jin
2017-12-01
China attaches great importance to the protection and development of intangible culture these days, but the shortcomings of discoloration, breakage and excessive storage space still exist in the traditional way of museum protection. This paper starts from an analysis of the above problems, considers why and how virtual reality (VR) technology can better solve them, and examines the specific case of Yunnan Bai embroidery in order to realize its full human and economic value. Firstly, 3D MAX is used to design and produce three-dimensional models of the embroideries of the Bai nationality. Secondly, the large volume of collected embroidery model data is used to construct a Yunnan Bai embroidery model database. Next, a digital display system of virtual embroidery is created and deployed to PC websites and mobile phone applications to achieve information sharing. Finally, through the use of virtual display technology for the three-dimensional design of embroidery, works with a modern style such as embroidered clothing and bedding can be designed, so as to continuously pursue and give full play to the charm and economic value of embroidery.
Immersive Visualization of the Solid Earth
NASA Astrophysics Data System (ADS)
Kreylos, O.; Kellogg, L. H.
2017-12-01
Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain, and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis.
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.
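The touch-to-seed isosurface interaction described above can be sketched in a few lines. This is only an illustration of the idea, not 3D Visualizer's actual API; the grid, picked point, and tolerance are all assumptions.

```python
import numpy as np

def isosurface_mask(volume, point, tol=0.5):
    """Voxels whose value matches the value at a touched point.

    Mimics the 3D Visualizer interaction: touching a point inside the data
    domain seeds an isosurface at that point's data value. Here we return
    the voxel mask a contouring step (e.g. marching cubes) would then
    surface; `tol` stands in for sub-voxel interpolation.
    """
    iso = volume[point]                      # data value at the touched voxel
    return np.abs(volume - iso) <= tol, iso

# Toy 3D scalar field: distance from the grid centre
z, y, x = np.mgrid[0:9, 0:9, 0:9]
r = np.sqrt((x - 4.0)**2 + (y - 4.0)**2 + (z - 4.0)**2)
mask, iso = isosurface_mask(r, (4, 4, 8))    # touch a point 4 units out
print(iso, mask.sum() > 0)                   # 4.0 True
```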
Optimizing Coverage of Three-Dimensional Wireless Sensor Networks by Means of Photon Mapping
2013-12-01
…information about the monitored space is sensed?" Solving this formulation of the AGP relies upon the creation of a model describing how a set of simulated photons will propagate in a 3D virtual environment. Furthermore, the photon model requires an efficient data structure with small memory…
Chen, Hsin-Yu; Ng, Li-Shia; Chang, Chun-Shin; Lu, Ting-Chen; Chen, Ning-Hung; Chen, Zung-Chung
2017-06-01
Advances in three-dimensional imaging and three-dimensional printing technology have expanded the frontier of presurgical design for microtia reconstruction from two-dimensional curved lines to three-dimensional perspectives. This study presents an algorithm for combining three-dimensional surface imaging, computer-assisted design, and three-dimensional printing to create patient-specific auricular frameworks in unilateral microtia reconstruction. Between January of 2015 and January of 2016, six patients with unilateral microtia were enrolled. The average age of the patients was 7.6 years. A three-dimensional image of the patient's head was captured by 3dMDcranial, and virtual sculpture carried out using Geomagic Freeform software and a Touch X Haptic device for fabrication of the auricular template. Each template was tailored according to the patient's unique auricular morphology. The final construct was mirrored onto the defective side and printed out with biocompatible acrylic material. During the surgery, the prefabricated customized template served as a three-dimensional guide for surgical simulation and sculpture of the MEDPOR framework. Average follow-up was 10.3 months. Symmetric and good aesthetic results with regard to auricular shape, projection, and orientation were obtained. One case with severe implant exposure was salvaged with free temporoparietal fascia transfer and skin grafting. The combination of three-dimensional imaging and manufacturing technology with the malleability of MEDPOR has surpassed existing limitations resulting from the use of autologous materials and the ambiguity of two-dimensional planning. This approach allows surgeons to customize the auricular framework in a highly precise and sophisticated manner, taking a big step closer to the goal of mirror-image reconstruction for unilateral microtia patients. Therapeutic, IV.
Research on three-dimensional visualization based on virtual reality and Internet
NASA Astrophysics Data System (ADS)
Wang, Zongmin; Yang, Haibo; Zhao, Hongling; Li, Jiren; Zhu, Qiang; Zhang, Xiaohong; Sun, Kai
2007-06-01
To disclose and display water information, a three-dimensional visualization system based on Virtual Reality (VR) and the Internet was developed, both to demonstrate a "digital water conservancy" application and to support routine reservoir management. To explore and mine in-depth information, after building a high-resolution DEM of reliable quality, topographical analysis, visibility analysis and reservoir volume computation were studied. In addition, parameters including slope, water level and NDVI were selected to classify landslide-prone zones in the water-level-fluctuating zone of the reservoir area. To establish the virtual reservoir scene, two kinds of methods were used to deliver immersion, interaction and imagination (3I). The first virtual scene contains more detailed textures to increase realism and runs on a graphical workstation with the virtual reality engine OpenSceneGraph (OSG). The second virtual scene, with fewer details, targets Internet users to ensure fluent rendering speed.
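The reservoir volume computation mentioned above can be approximated from a gridded DEM by summing water depth over submerged cells. This is an illustrative sketch, not the authors' implementation; the DEM values, water level, and cell size are toy numbers.

```python
import numpy as np

def reservoir_volume(dem, water_level, cell_area):
    """Approximate reservoir storage volume from a gridded DEM.

    dem         : 2D array of ground elevations (m)
    water_level : water surface elevation (m)
    cell_area   : area of one DEM cell (m^2)
    """
    depth = water_level - dem          # water depth at each cell
    depth[depth < 0] = 0.0             # cells above the water line hold no water
    return depth.sum() * cell_area

# Toy 2x2 DEM: elevations 10, 12, 14, 16 m; water level 15 m; 100 m^2 cells
dem = np.array([[10.0, 12.0], [14.0, 16.0]])
print(reservoir_volume(dem, 15.0, 100.0))  # (5+3+1+0)*100 = 900.0
```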
Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.
Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn
2016-10-01
The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. The tool is developed with Java/Java3D/JOGL and provides a standalone application compatible with all relevant operating systems. However, it requires Java and a local installation of the software. Here we present the prototype of an alternative web-based visualization approach using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios, including networks mapped to 3D cell components, by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies - Three.js, D3.js and PHP - as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.
Ghapanchi, Amir Hossein; Blumenstein, Michael; Talaei-Khoei, Amir
2014-01-01
Background A three-dimensional virtual world (3DVW) is a computer-simulated electronic 3D virtual environment that users can explore and inhabit, communicating and interacting via avatars, which are graphical representations of the users. Since the early 2000s, 3DVWs have emerged as a technology that has much to offer the health care sector. Objective The purpose of this study was to characterize the different application areas of 3DVWs in health and medical contexts and categorize them into meaningful categories. Methods This study employs a systematic literature review on the application areas of 3DVWs in health care. Our search resulted in 62 papers from five top-ranking scientific databases, published from 1990 to 2013, that describe the use of 3DVWs for health care specific purposes. We noted a growth in the number of academic studies on the topic since 2006. Results We found a wide range of application areas for 3DVWs in health care and classified them into the following six categories: academic education, professional education, treatment, evaluation, lifestyle, and modeling. The education category, including professional and academic education, contains the largest number of papers (n=34), of which 23 are related to academic education and 11 to professional education. Nine papers are allocated to the treatment category, and 8 papers have content related to evaluation. In 4 of the papers, the authors used 3DVWs for modeling, and 3 papers targeted lifestyle purposes. The results indicate that most of the research to date has focused on education in health care. We also found that most studies were undertaken in just two countries, the United States and the United Kingdom. Conclusions 3D virtual worlds present several innovative ways to carry out a wide variety of health-related activities.
The big picture of application areas of 3DVWs presented in this review could be of value and offer insights to both the health care community and researchers. PMID:24550130
NASA Astrophysics Data System (ADS)
Candeo, Alessia; Sana, Ilenia; Ferrari, Eleonora; Maiuri, Luigi; D'Andrea, Cosimo; Valentini, Gianluca; Bassi, Andrea
2016-05-01
Light sheet fluorescence microscopy has proven to be a powerful tool to image fixed and chemically cleared samples, providing in-depth and high-resolution reconstructions of intact mouse organs. We applied light sheet microscopy to image the mouse intestine. We found that large portions of the sample can be readily visualized, assessing the organ status and highlighting the presence of regions with impaired morphology. Yet, three-dimensional (3-D) sectioning of the intestine leads to a large dataset that produces unnecessary storage and processing overload. We developed a routine that extracts the relevant information from a large image stack and provides quantitative analysis of the intestine morphology. This result was achieved by a three-step procedure consisting of: (1) virtually unfolding the 3-D reconstruction of the intestine; (2) observing it layer-by-layer; and (3) identifying distinct villi and statistically analyzing multiple samples belonging to different intestinal regions. Even if the procedure has been developed for the murine intestine, most of the underlying concepts have general applicability.
Lledó, Luis D.; Díez, Jorge A.; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J.; Sabater-Navarro, José M.; García-Aracil, Nicolás
2016-01-01
Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments produce a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. By contrast, 2D virtual environments represent the tasks with a low degree of realism using bidimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in kinematic movement patterns when post-stroke patients performed a reaching task viewing a virtual therapeutic game with two different types of visualization of the virtual environment: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Parameters such as maximum speed, reaction time, path length, and initial movement were analyzed from the data acquired objectively by the robotic device to evaluate the influence of the task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy, suggesting that the patients' motor recovery increased. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were higher using the 3D task, while the success rates were very similar. In conclusion, the use of 2D environments in virtual therapy may be a more appropriate and comfortable way to perform upper limb rehabilitation tasks for post-stroke patients, in terms of the accuracy with which optimal kinematic trajectories are executed. PMID:27616992
Virtual reality and the unfolding of higher dimensions
NASA Astrophysics Data System (ADS)
Aguilera, Julieta C.
2006-02-01
As virtual/augmented reality evolves, the need for spaces that are responsive to structures independent of three-dimensional spatial constraints becomes apparent. The visual medium of computer graphics may also challenge these self-imposed constraints. If one can get used to how projections affect 3D objects in two dimensions, it may also be possible to compose a situation in which to get used to the variations that occur while moving through higher dimensions. The presented application is an enveloping landscape of concave and convex forms, which are determined by the orientation and displacement of the user in relation to a grid made of tesseracts (cubes in four dimensions). The interface accepts input from three-dimensional and four-dimensional transformations, and smoothly displays such interactions in real time. The motion of the user becomes the graphic element, whereas the higher-dimensional grid references his/her position relative to it. The user learns how motion inputs affect the grid, recognizing a correlation between the input and the transformations. Mapping information to complex grids in virtual reality is valuable for engineers, artists and users in general because navigation can be internalized like a dance pattern, further engaging us to maneuver through space in order to know and experience it.
ERIC Educational Resources Information Center
Pellas, Nikolaos; Kazanidis, Ioannis
2015-01-01
Nowadays three-dimensional (3D) multi-user virtual worlds (VWs) are the best-known candidate platforms in Higher Education. Despite the growing number of notable studies that have presented VWs as valuable platforms for e-Education, there is still a paucity of comparative studies to determine the degree of the students'…
Benazzi, Stefano; Panetta, Daniele; Fornai, Cinzia; Toussaint, Michel; Gruppioni, Giorgio; Hublin, Jean-Jacques
2014-02-01
The study of enamel thickness has received considerable attention in regard to the taxonomic, phylogenetic and dietary assessment of human and non-human primates. Recent developments based on two-dimensional (2D) and three-dimensional (3D) digital techniques have facilitated accurate analyses, preserving the original object from invasive procedures. Various digital protocols have been proposed. These, however, include several procedures based on manual handling of the virtual models, as well as technical shortcomings, which prevent other scholars from confidently reproducing the entire digital protocol. There is a compelling need for standard, reproducible, and well-tailored protocols for the digital analysis of 2D and 3D dental enamel thickness. In this contribution we provide essential guidelines for the digital computation of 2D and 3D enamel thickness in hominoid molars, premolars, canines and incisors. We modify techniques previously suggested for 2D analysis and develop a new approach for 3D analysis that can also be applied to premolars and anterior teeth. For each tooth class, the cervical line should be considered the fundamental morphological feature, both to isolate the crown from the root (for 3D analysis) and to define the direction of the cross-sections (for 2D analysis). Copyright © 2013 Wiley Periodicals, Inc.
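As an illustration of the kind of quantity such protocols compute, a commonly used 3D index in this literature is average enamel thickness: enamel volume divided by enamel-dentine junction (EDJ) surface area. Below is a minimal voxel-based sketch under that assumption; the mask, EDJ area, and voxel size are purely illustrative, not from the paper's protocol.

```python
import numpy as np

def average_enamel_thickness(enamel_mask, edj_area, voxel_volume):
    """3D average enamel thickness = enamel volume / EDJ surface area.

    enamel_mask  : boolean voxel array of segmented enamel (e.g. from micro-CT)
    edj_area     : enamel-dentine junction surface area (consistent length units)
    voxel_volume : volume of one voxel
    """
    enamel_volume = enamel_mask.sum() * voxel_volume
    return enamel_volume / edj_area

# Toy example: 1000 enamel voxels of 0.001 mm^3 over a 5 mm^2 EDJ surface
mask = np.ones((10, 10, 10), dtype=bool)
print(average_enamel_thickness(mask, 5.0, 0.001))  # 0.2 (mm)
```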
An Evaluative Review of Simulated Dynamic Smart 3d Objects
NASA Astrophysics Data System (ADS)
Romeijn, H.; Sheth, F.; Pettit, C. J.
2012-07-01
Three-dimensional (3D) modelling of plants can be an asset for creating agriculture-based visualisation products. The continuum of 3D plant models ranges from static to dynamic objects, the latter also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influences of external environmental factors. This continuum of 3D plant visualisation is evident from plants rendered as photographed billboarded images through to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model the physical reactions of plants to external factors, and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of currently available plant-based object simulation programs, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs, and the possible opportunities in deploying them for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.
Real-time 3D visualization of volumetric video motion sensor data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, J.; Stansfield, S.; Shawver, D.
1996-11-01
This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.
Kato, A; Ohno, N
2009-03-01
The study of dental morphology is essential in terms of phylogeny. Advances in three-dimensional (3D) measurement devices have enabled us to make 3D images of teeth without destruction of samples. However, acquiring raw fundamental data on tooth shape requires complex equipment and techniques. An online database of 3D teeth models is therefore indispensable. We aimed to explore a basic methodology for constructing 3D teeth models, with application to data sharing. Geometric information on a human permanent upper left incisor was obtained using micro-computed tomography (micro-CT). Enamel, dentine, and pulp were segmented by thresholding of different gray-scale intensities. Segmented data were separately exported in STereo-Lithography (STL) format. The STL data were converted to Wavefront OBJ (OBJect), as many 3D computer graphics programs support the Wavefront OBJ format. Data were also converted to QuickTime Virtual Reality (QTVR) format, which allows the image to be viewed from any direction. In addition to the Wavefront OBJ and QTVR data, the original CT series were provided as 16-bit Tag Image File Format (TIFF) images on the website. In conclusion, 3D teeth models were constructed in general-purpose data formats using micro-CT and commercially available programs. Teeth models that can be used widely would benefit all those who study dental morphology.
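The STL-to-Wavefront-OBJ conversion step can be illustrated with a minimal converter for ASCII STL. Real micro-CT exports are typically binary STL handled by a mesh library, so this is only a sketch of the format mapping (deduplicated `v` vertex lines plus 1-based triangular `f` faces).

```python
def stl_to_obj(stl_text):
    """Convert ASCII STL facet data to Wavefront OBJ text.

    Collects unique vertices, emits 'v' lines, and groups every three
    vertex references into a triangular 'f' face (OBJ indices are 1-based).
    """
    vertices, index, faces, tri = [], {}, [], []
    for line in stl_text.splitlines():
        parts = line.split()
        if parts[:1] == ["vertex"]:
            v = tuple(float(x) for x in parts[1:4])
            if v not in index:                  # deduplicate shared vertices
                index[v] = len(vertices) + 1
                vertices.append(v)
            tri.append(index[v])
            if len(tri) == 3:                   # one STL facet = one OBJ face
                faces.append(tuple(tri))
                tri = []
    out = [f"v {x} {y} {z}" for x, y, z in vertices]
    out += [f"f {a} {b} {c}" for a, b, c in faces]
    return "\n".join(out)

stl = """solid tooth
facet normal 0 0 1
 outer loop
  vertex 0 0 0
  vertex 1 0 0
  vertex 0 1 0
 endloop
endfacet
endsolid tooth"""
print(stl_to_obj(stl))
```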
Fast parallel 3D profilometer with DMD technology
NASA Astrophysics Data System (ADS)
Hou, Wenmei; Zhang, Yunbo
2011-12-01
The confocal microscope has been a powerful tool for three-dimensional profile analysis, but a single-mode confocal microscope is limited by its scanning speed. This paper presents a 3D profilometer prototype of a parallel confocal microscope based on a DMD (Digital Micromirror Device). In this system the DMD takes the place of the Nipkow disk, the classical parallel scanning scheme, to realize parallel lateral scanning. Operated with a suitable pattern, the DMD generates a virtual pinhole array which separates the light into multiple beams. The key parameters that affect the measurement (pinhole size and lateral scanning distance) can be configured conveniently by the different patterns sent to the DMD chip. To avoid disturbance between two virtual pinholes working at the same time, a scanning strategy is adopted. Depth response curves, both axial and abaxial, were extracted. Measurement experiments were carried out on a structured silicon sample, and an axial resolution of 55 nm was achieved.
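The virtual pinhole array and the scanning strategy to avoid disturbance between simultaneously open pinholes can be sketched as a sequence of sparse DMD patterns. The mirror-array size and pinhole pitch below are illustrative assumptions, not the prototype's actual parameters.

```python
import numpy as np

def pinhole_patterns(rows, cols, pitch):
    """Generate DMD mirror patterns forming a sparse virtual-pinhole array.

    Pinholes are spaced `pitch` mirrors apart so that neighbouring open
    pinholes do not disturb each other; shifting the grid through all
    pitch*pitch offsets covers every mirror position exactly once
    (the parallel lateral scanning strategy).
    """
    for dy in range(pitch):
        for dx in range(pitch):
            pattern = np.zeros((rows, cols), dtype=bool)
            pattern[dy::pitch, dx::pitch] = True   # 'on' mirrors = open pinholes
            yield pattern

# Example: a 6x6 mirror region with pinholes every 3 mirrors -> 9 patterns
patterns = list(pinhole_patterns(6, 6, 3))
coverage = np.sum(patterns, axis=0)                # times each mirror was open
print(len(patterns), coverage.min(), coverage.max())  # 9 1 1
```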
Licari, Daniele; Baiardi, Alberto; Biczysko, Malgorzata; Egidi, Franco; Latouche, Camille; Barone, Vincenzo
2015-02-15
This article presents the setup and implementation of a graphical user interface (VMS-Draw) for a virtual multifrequency spectrometer. Special attention is paid to ease of use, generality and robustness for a panel of spectroscopic techniques and quantum mechanical approaches. Depending on the kind of data to be analyzed, VMS-Draw produces different types of graphical representations, including two-dimensional or three-dimensional (3D) plots, bar charts, or heat maps. Among other integrated features, one may quote the convolution of stick spectra to obtain realistic line-shapes. It is also possible to analyze and visualize, together with the structure, the molecular orbitals and/or the vibrational motions of molecular systems thanks to 3D interactive tools. On these grounds, VMS-Draw could represent a useful additional tool for spectroscopic studies integrating measurements and computer simulations. Copyright © 2014 Wiley Periodicals, Inc.
Bubble behavior characteristics based on virtual binocular stereo vision
NASA Astrophysics Data System (ADS)
Xue, Ting; Xu, Ling-shuang; Zhang, Shang-zhen
2018-01-01
The three-dimensional (3D) behavior characteristics of bubbles rising in gas-liquid two-phase flow are of great importance for studying the bubbly flow mechanism and guiding engineering practice. Based on dual-perspective imaging with virtual binocular stereo vision, the 3D behavior characteristics of bubbles in gas-liquid two-phase flow are studied in detail; this effectively increases the projection information available per bubble and yields more accurate behavior features. In this paper, the variations of bubble equivalent diameter, volume, velocity, and trajectory during the rising process are estimated, and the factors affecting bubble behavior characteristics are analyzed. It is shown that the method is real-time and valid, that the equivalent diameter of a bubble rising in stagnant water changes periodically, and that the crests and troughs in the equivalent-diameter curve appear alternately. The bubble behavior characteristics, as well as the spiral amplitude, are affected by the orifice diameter and the gas volume flow.
Papafaklis, Michail I; Muramatsu, Takashi; Ishibashi, Yuki; Lakkas, Lampros S; Nakatani, Shimpei; Bourantas, Christos V; Ligthart, Jurgen; Onuma, Yoshinobu; Echavarria-Pinto, Mauro; Tsirka, Georgia; Kotsia, Anna; Nikas, Dimitrios N; Mogabgab, Owen; van Geuns, Robert-Jan; Naka, Katerina K; Fotiadis, Dimitrios I; Brilakis, Emmanouil S; Garcia-Garcia, Héctor M; Escaned, Javier; Zijlstra, Felix; Michalis, Lampros K; Serruys, Patrick W
2014-09-01
To develop a simplified approach of virtual functional assessment of coronary stenosis from routine angiographic data and test it against fractional flow reserve using a pressure wire (wire-FFR). Three-dimensional quantitative coronary angiography (3D-QCA) was performed in 139 vessels (120 patients) with intermediate lesions assessed by wire-FFR (reference standard: ≤0.80). The 3D-QCA models were processed with computational fluid dynamics (CFD) to calculate the lesion-specific pressure gradient (ΔP) and construct the ΔP-flow curve, from which the virtual functional assessment index (vFAI) was derived. The discriminatory power of vFAI for ischaemia-producing lesions was high (area under the receiver operator characteristic curve [AUC]: 92% [95% CI: 86-96%]). Diagnostic accuracy, sensitivity and specificity for the optimal vFAI cut-point (≤0.82) were 88%, 90% and 86%, respectively. Virtual-FAI demonstrated superior discrimination against 3D-QCA-derived % area stenosis (AUC: 78% [95% CI: 70-84%]; p<0.0001 compared to vFAI). There was a close correlation (r=0.78, p<0.0001) and agreement of vFAI compared to wire-FFR (mean difference: -0.0039±0.085, p=0.59). We developed a fast and simple CFD-powered virtual haemodynamic assessment model using only routine angiography and without requiring any invasive physiology measurements/hyperaemia induction. Virtual-FAI showed a high diagnostic performance and incremental value to QCA for predicting wire-FFR; this "less invasive" approach could have important implications for patient management and cost.
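As a toy illustration of the ΔP-flow/vFAI construction (the exact formulation is in the paper; the quadratic pressure-drop law and all coefficients below are hypothetical), one can normalise the area under the distal-to-proximal pressure-ratio curve over a fixed flow range:

```python
import numpy as np

# Hypothetical CFD-fitted pressure-drop law for one lesion:
# dP(Q) = fv*Q + fs*Q**2  (mmHg; Q in mL/s)
fv, fs = 3.0, 1.2
Pa = 100.0                        # assumed aortic pressure (mmHg)
Q = np.linspace(0.0, 4.0, 401)    # flow range for the dP-flow curve

dP = fv * Q + fs * Q**2
pd_pa = (Pa - dP) / Pa            # distal-to-proximal pressure ratio

# Normalised area under the Pd/Pa-flow curve (trapezoidal rule);
# an ideal lesion-free vessel (Pd/Pa = 1 everywhere) would score 1.0,
# and tighter lesions score lower.
dQ = Q[1] - Q[0]
area = np.sum((pd_pa[:-1] + pd_pa[1:]) * 0.5) * dQ
vfai = area / (Q[-1] - Q[0])
```

A lesion-specific index of this flow-averaged form does not require measuring the patient's actual hyperaemic flow, which is what makes the wire-free approach possible.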
Turchini, John; Buckland, Michael E; Gill, Anthony J; Battye, Shane
2018-05-30
Three-dimensional (3D) photogrammetry is a method of image-based modeling in which data points in digital images, taken from offset viewpoints, are analyzed to generate a 3D model. This modeling technique has been widely used in geomorphology and artificial imagery, but has yet to be used within the realm of anatomic pathology. We describe the application of a 3D photogrammetry system capable of producing high-quality 3D digital models and its uses in routine surgical pathology practice as well as medical education. We modeled specimens received in the 2 participating laboratories. The capture and photogrammetry process was automated using control software, a digital single-lens reflex camera, and a digital turntable to generate a 3D model with the output in a PDF file. The entity demonstrated in each specimen was well demarcated and easily identified, and adjacent normal tissue could also be easily distinguished. Colors were preserved, the concave shapes of cystic structures and normal convex rounded structures were discernible, and surgically important regions were identifiable. Macroscopic 3D modeling of specimens can thus be achieved through structure-from-motion photogrammetry and can be applied quickly and easily in routine laboratory practice. There are numerous advantages to the use of 3D photogrammetry in pathology, including improved clinicopathologic correlation for the surgeon and enhanced medical education, revolutionizing the digital pathology museum with virtual reality environments and 3D-printed specimen models.
[Preliminary use of HoloLens glasses in surgery of liver cancer].
Shi, Lei; Luo, Tao; Zhang, Li; Kang, Zhongcheng; Chen, Jie; Wu, Feiyue; Luo, Jia
2018-05-28
To establish a preoperative three-dimensional (3D) model of liver cancer, and to precisely match the preoperative planning with the target organs during the operation. Methods: The 3D model reconstruction based on magnetic resonance data, combined with virtual reality technology via HoloLens glasses, was applied in liver cancer surgery to achieve preoperative 3D modeling and surgical planning, and to match it directly with the target organs during the operation. Results: The 3D model reconstruction of liver cancer based on magnetic resonance data was completed. Guided by the 3D model, an exact match with the target organ was achieved during the operation via the HoloLens glasses. Conclusion: Magnetic resonance data can be used for 3D model reconstruction to improve preoperative assessment and accurate matching during the operation.
Aubry, S; Pousse, A; Sarliève, P; Laborie, L; Delabrousse, E; Kastler, B
2006-11-01
To model vertebrae in 3D to improve radioanatomic knowledge of the spine, with its vascular and nerve environment, and to simulate CT-guided interventions. Vertebra acquisitions were made with multidetector CT. We developed segmentation software and a specific viewer using the Delphi programming environment. The segmentation software makes it possible to model high-resolution 3D segments of vertebrae and their environment from multidetector CT acquisitions. The viewer software then provides multiplanar reconstructions of the CT volume and the possibility to select different 3D objects of interest. This software package improves radiologists' radioanatomic knowledge through a new presentation of 3D anatomy. Furthermore, the possibility of inserting virtual 3D objects into the volume can simulate CT-guided intervention. To our knowledge, this is the first volumetric radioanatomic software package; because it also simulates CT-guided intervention, it has the potential to facilitate the learning of CT-guided procedures.
Software for Building Models of 3D Objects via the Internet
NASA Technical Reports Server (NTRS)
Schramer, Tim; Jensen, Jeff
2003-01-01
The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.
Beaulieu, C F; Jeffrey, R B; Karadi, C; Paik, D S; Napel, S
1999-07-01
To determine the sensitivity of radiologist observers for detecting colonic polyps by using three different data review (display) modes for computed tomographic (CT) colonography, or "virtual colonoscopy." CT colonographic data in a patient with a normal colon were used as base data for insertion of digitally synthesized polyps. Forty such polyps (3.5, 5, 7, and 10 mm in diameter) were randomly inserted in four copies of the base data. Axial CT studies, volume-rendered virtual endoscopic movies, and studies from a three-dimensional mode termed "panoramic endoscopy" were reviewed blindly and independently by two radiologists. Detection improved with increasing polyp size. Trends in sensitivity were dependent on whether all inserted lesions or only visible lesions were considered, because modes differed in how completely the colonic surface was depicted. For both reviewers and all polyps 7 mm or larger, panoramic endoscopy resulted in significantly greater sensitivity (90%) than did virtual endoscopy (68%, P = .014). For visible lesions only, the sensitivities were 85%, 81%, and 60% for one reader and 65%, 62%, and 28% for the other for virtual endoscopy, panoramic endoscopy, and axial CT, respectively. Three-dimensional displays were more sensitive than two-dimensional displays (P < .05). The sensitivity of panoramic endoscopy is higher than that of virtual endoscopy, because the former displays more of the colonic surface. Higher sensitivities for three-dimensional displays may justify the additional computation and review time.
Virtual Jupiter - Real Learning
NASA Astrophysics Data System (ADS)
Ruzhitskaya, Lanika; Speck, A.; Laffey, J.
2010-01-01
How many earthlings have visited Jupiter? None. How many students have visited a virtual Jupiter to fulfill their introductory astronomy course requirements? Within the next six months, over 100 students from the University of Missouri will get the chance to explore the planet and its Galilean moons using a 3D virtual environment created especially for them to learn Kepler's and Newton's laws, eclipses, parallax, and other concepts in astronomy. The virtual world of the Jupiter system is a unique 3D environment that allows students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system and encouraging their imagination, curiosity, and motivation. The virtual learning environment lets students work individually or collaborate with their teammates. The 3D world is also a great opportunity for research in astronomy education: to investigate the impact of social interaction, gaming features, and the manipulatives offered by a learning tool on students' motivation and learning outcomes. It is likewise a valuable setting for exploring how learners' spatial awareness can be enhanced by working in a three-dimensional environment.
VRLane: a desktop virtual safety management program for underground coal mine
NASA Astrophysics Data System (ADS)
Li, Mei; Chen, Jingzhu; Xiong, Wei; Zhang, Pengpeng; Wu, Daozheng
2008-10-01
VR technologies, which generate immersive, interactive, three-dimensional (3D) environments, are seldom applied to coal mine safety management. In this paper, a new method combining VR technologies with an underground mine safety management system was explored, and a desktop virtual safety management program for underground coal mines, called VRLane, was developed. The paper mainly concerns the current state of VR research, the system design, key techniques, and system application. Two important techniques are introduced. First, an algorithm was designed and implemented with which 3D laneway models and equipment models can be built automatically from the latest 2D mine drawings, whereas common VR programs establish the 3D environment with 3DS Max or other 3D modeling packages, in which laneway models are built manually and laboriously. Second, VRLane realizes system integration with underground industrial automation. VRLane not only depicts a realistic 3D laneway environment, but also describes the status of coal mining, with functions for displaying the running states and related parameters of equipment, raising alarms for abnormal mining events, and animating mine cars, mine workers, and long-wall shearers. The system, which is inexpensive, dynamic, and easy to maintain, provides a useful tool for safe production management in coal mines.
Kehl, Sven; Eckert, Sven; Sütterlin, Marc; Neff, K Wolfgang; Siemer, Jörn
2011-06-01
Three-dimensional (3D) sonographic volumetry is established in gynecology and obstetrics. Assessment of the fetal lung volume by magnetic resonance imaging (MRI) in congenital diaphragmatic hernias has become a routine examination. In vitro studies have shown a good correlation between 3D sonographic measurements and MRI. The aim of this study was to compare the lung volumes of healthy fetuses assessed by 3D sonography to MRI measurements and to investigate the impact of different rotation angles. A total of 126 fetuses between 20 and 40 weeks' gestation were measured by 3D sonography, and 27 of them were also assessed by MRI. The sonographic volumes were calculated by the rotational technique (virtual organ computer-aided analysis) with rotation angles of 6° and 30°. To evaluate the accuracy of 3D sonographic volumetry, percentage error and absolute percentage error values were calculated using MRI volumes as reference points. Formulas to calculate total, right, and left fetal lung volumes according to gestational age and biometric parameters were derived by stepwise regression analysis. Three-dimensional sonographic volumetry showed a high correlation compared to MRI (6° angle, R(2) = 0.971; 30° angle, R(2) = 0.917) with no systematic error for the 6° angle. Moreover, using the 6° rotation angle, the median absolute percentage error was significantly lower compared to the 30° angle (P < .001). The new formulas to calculate total lung volume in healthy fetuses only included gestational age and no biometric parameters (R(2) = 0.853). Three-dimensional sonographic volumetry of lung volumes in healthy fetuses showed a good correlation with MRI. We recommend using an angle of 6° because it assessed the lung volume more accurately. The specifically designed equations help estimate lung volumes in healthy fetuses.
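The error metrics used in the study above are simple to state; a minimal sketch (the paired volumes below are hypothetical):

```python
import numpy as np

def percentage_errors(us_vol, mri_vol):
    """Signed percentage error (reveals systematic bias) and absolute
    percentage error (overall accuracy) of 3D-sonographic lung volumes
    against the MRI reference."""
    us = np.asarray(us_vol, dtype=float)
    mri = np.asarray(mri_vol, dtype=float)
    pe = (us - mri) / mri * 100.0
    return pe, np.abs(pe)

# Hypothetical paired lung volumes in mL (3D ultrasound vs. MRI):
pe, ape = percentage_errors([38.0, 52.5, 49.0], [40.0, 50.0, 49.0])
median_ape = float(np.median(ape))
```

Keeping the signed and absolute forms separate is what lets a study report both "no systematic error" (mean PE near zero) and overall accuracy (median APE) independently.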
Three-Dimensional Reconstruction of Thoracic Structures: Based on Chinese Visible Human
Luo, Na; Tan, Liwen; Fang, Binji; Li, Ying; Xie, Bing; Liu, Kaijun; Chu, Chun; Li, Min
2013-01-01
We established a three-dimensional digitized visible model of human thoracic structures to provide morphological data for imaging diagnosis and for thoracic and cardiovascular surgery. With Photoshop software, the contour lines of the lungs and mediastinal structures, including the heart, aorta and its branches, azygos vein, superior vena cava, inferior vena cava, thymus, esophagus, diaphragm, phrenic nerve, vagus nerve, sympathetic trunk, thoracic vertebrae, sternum, and thoracic duct, were segmented from the Chinese Visible Human (CVH)-1 data set. The contour data set of the segmented thoracic structures was imported into Amira software, and 3D thorax models were reconstructed via surface rendering and volume rendering. The surface-rendered model of the thoracic organs and the volume-rendered model can be displayed together clearly and accurately. The result provides a learning tool for interpreting human thoracic anatomy and for virtual thoracic and cardiovascular surgery for medical students and junior surgeons. PMID:24369489
Thomas, Thaddeus P.; Anderson, Donald D.; Willis, Andrew R.; Liu, Pengcheng; Frank, Matthew C.; Marsh, J. Lawrence; Brown, Thomas D.
2011-01-01
Reconstructing highly comminuted articular fractures poses a difficult surgical challenge, akin to solving a complicated three-dimensional (3D) puzzle. Pre-operative planning using CT is critically important, given the desirability of less invasive surgical approaches. The goal of this work is to advance 3D puzzle-solving methods toward use as a pre-operative tool for reconstructing these complex fractures. Methodology for generating typical fragmentation/dispersal patterns was developed. Five identical replicas of human distal tibia anatomy were machined from blocks of high-density polyetherurethane foam (a bone fragmentation surrogate) and were fractured using an instrumented drop tower. Pre- and post-fracture geometries were obtained using laser scans and CT. A semi-automatic virtual reconstruction computer program aligned fragment native (non-fracture) surfaces to a pre-fracture template. The tibias were precisely reconstructed, with alignment accuracies ranging from 0.03 to 0.4 mm. This novel technology has the potential to significantly enhance surgical techniques for reconstructing comminuted intra-articular fractures, as illustrated for a representative clinical case. PMID:20924863
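The reconstruction program itself is not public, but the core operation it describes - rigidly aligning a fragment's intact (native) surface to the pre-fracture template - is typically built on a least-squares fit of paired points such as the Kabsch algorithm; a minimal sketch on synthetic data:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimising ||R @ p + t - q|| over paired
    Nx3 point sets: the standard building block for snapping a fragment
    surface onto a template."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
P = rng.random((20, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(P, Q)
residual = np.abs(P @ R.T + t - Q).max()
```

In a full pipeline this fit would run inside an iterative closest-point loop, since real fragment surfaces have no known correspondences.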
Bipolar stimulation of a three-dimensional bidomain incorporating rotational anisotropy.
Muzikant, A L; Henriquez, C S
1998-04-01
A bidomain model of cardiac tissue was used to examine the effect of transmural fiber rotation during bipolar stimulation in three-dimensional (3-D) myocardium. A 3-D tissue block with unequal anisotropy and two types of fiber rotation (none and moderate) was stimulated along and across fibers via bipolar electrodes on the epicardial surface, and the resulting steady-state interstitial (phi e) and transmembrane (Vm) potentials were computed. Results demonstrate that the presence of rotated fibers does not change the amount of tissue polarized by the point surface stimuli, but does cause changes in the orientation of phi e and Vm in the depth of the tissue, away from the epicardium. Further analysis revealed a relationship between the Laplacian of phi e, regions of virtual electrodes, and fiber orientation that was dependent upon adequacy of spatial sampling and the interstitial anisotropy. These findings help to understand the role of fiber architecture during extracellular stimulation of cardiac muscle.
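The Laplacian of φe referred to above is, on a regular grid, just a 7-point finite-difference stencil; a minimal sketch (the periodic boundary handling via np.roll is a simplification, so only interior voxels are meaningful):

```python
import numpy as np

def laplacian3d(phi, h=1.0):
    """Second-order 7-point Laplacian of a 3-D potential field; in
    bidomain analyses, strongly positive/negative values flag candidate
    regions of virtual cathodes and anodes."""
    lap = -6.0 * phi
    for axis in range(3):
        lap += np.roll(phi, 1, axis) + np.roll(phi, -1, axis)
    return lap / h**2

# Toy field phi = x**2, constant in y and z: the Laplacian is exactly 2
# at interior voxels.
n = 8
x = np.arange(n, dtype=float)
phi = np.broadcast_to((x**2)[:, None, None], (n, n, n)).copy()
lap = laplacian3d(phi)
interior = lap[1:-1, 1:-1, 1:-1]
```

Applying such a stencil to a computed φe field, and thresholding it, is one way to delineate the virtual-electrode regions the study relates to fiber orientation.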
Evaluation of three-dimensional virtual perception of garments
NASA Astrophysics Data System (ADS)
Aydoğdu, G.; Yeşilpinar, S.; Erdem, D.
2017-10-01
In recent years, three-dimensional design, dressing, and simulation programs have come into prominence in the textile industry. These programs eliminate the need to produce a physical clothing sample for every design during the design process: clothing fit, design, pattern, fabric and accessory details, and fabric drape can be evaluated easily. The body size of the virtual mannequin can also be adjusted, so more realistic simulations can be created. Moreover, the three-dimensional virtual garment images created by these programs can be used to present the product to the end user instead of two-dimensional photographs. In this study, a survey was carried out to investigate the visual perception of consumers. The survey was conducted separately for three different garment types. Participants answered questions about gender, profession, etc., and were asked to compare real samples with artworks or three-dimensional virtual images of the garments. Statistical analysis of the survey results shows that the participants' demographics do not affect visual perception and that, for each garment type, three-dimensional virtual garment images reflect the characteristics of the real sample better than artworks. No difference in perception was found between the garment types (t-shirt, sweatshirt, and tracksuit bottom).
Accuracy of contacts calculated from 3D images of occlusal surfaces.
DeLong, R; Knorr, S; Anderson, G C; Hodges, J; Pintado, M R
2007-06-01
Compare occlusal contacts calculated from 3D virtual models created from clinical records to contacts identified clinically using shimstock and transillumination. Upper and lower full-arch alginate impressions and vinyl polysiloxane centric interocclusal records were made for 12 subjects. Stone casts made from the alginate impressions and the interocclusal records were optically scanned. Three-dimensional virtual models of the dental arches and interocclusal records were constructed using the Virtual Dental Patient software. Contacts calculated from the virtual interocclusal records and from the aligned upper and lower virtual arch models were compared to those identified clinically using 0.01 mm shimstock and transillumination of the interocclusal record. Virtual contacts and transillumination contacts were compared by anatomical region and by contacting tooth pairs to shimstock contacts. Because there is no accepted standard for identifying occlusal contacts, methods were compared in pairs, with one labeled the "standard" and the second the "test". Accuracy was defined as the number of contacts and non-contacts of the "test" in agreement with the "standard" divided by the total number of contacts and non-contacts of the "standard". The accuracy of occlusal contacts calculated from virtual interocclusal records and from aligned virtual casts, compared to transillumination, was 0.87+/-0.05 and 0.84+/-0.06 by region and 0.95+/-0.07 and 0.95+/-0.05 by tooth, respectively. Comparisons with shimstock were 0.85+/-0.15 (record), 0.84+/-0.14 (casts), and 0.81+/-0.17 (transillumination). The virtual record, aligned virtual arches, and transillumination methods of identifying contacts are equivalent, and show better agreement with each other than with the shimstock method.
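The accuracy definition above can be made concrete in a few lines (the contact lists below are hypothetical):

```python
def contact_accuracy(test, standard):
    """Accuracy as defined in the study: the number of contacts and
    non-contacts of the "test" agreeing with the "standard", divided by
    the total number of contacts and non-contacts of the "standard".
    Inputs are parallel boolean sequences, one entry per site (region
    or tooth pair); True = contact, False = non-contact."""
    agree = sum(t == s for t, s in zip(test, standard))
    return agree / len(standard)

# Hypothetical sites scored by two methods:
standard = [True, True, False, True, False]
test     = [True, False, False, True, True]
acc = contact_accuracy(test, standard)   # 3 of 5 sites agree
```

Because non-contacts count toward agreement, this metric rewards a method for correctly ruling out contacts as well as for finding them.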
Mesh three-dimensional arm orthosis with built-in ultrasound physiotherapy system
NASA Astrophysics Data System (ADS)
Kashapova, R. M.; Kashapov, R. N.; Kashapova, R. S.
2017-09-01
This work explores the use of an ultrasound physiotherapy system built into a hand orthosis. An individualized mesh orthosis was manufactured from nylon 12 by 3D prototyping on an SLS SPro 60HD selective laser sintering system. Three-dimensional scanning made it possible to obtain a model of the patient's hand and, on its basis, to build a virtual model of the mesh frame. In the course of the research, the developed ultrasound exposure system was installed on the orthosis and tested. As a result, an acceleration of the healing process and a reduction in orthosis wearing time were found.
Three-dimensional imaging from a unidirectional hologram: wide-viewing-zone projection type.
Okoshi, T; Oshima, K
1976-04-01
In ordinary holography reconstructing a virtual image, the hologram must be wider than either the visual field or the viewing zone. In this paper, an economical method of recording a wide-viewing-zone wide-visual-field 3-D holographic image is proposed. In this method, many mirrors are used to collect object waves onto a small hologram. In the reconstruction, a real image from the hologram is projected onto a horizontally direction-selective stereoscreen through the same mirrors. In the experiment, satisfactory 3-D images have been observed from a wide viewing zone. The optimum design and information reduction techniques are also discussed.
Thali, M J; Dirnhofer, R; Becker, R; Oliver, W; Potter, K
2004-10-01
The study aimed to validate magnetic resonance microscopy (MRM) studies of forensic tissue specimens (skin samples with electric injury patterns) against the results from routine histology. Computed tomography and magnetic resonance imaging are fast becoming important tools in clinical and forensic pathology. This study is the first forensic application of MRM to the analysis of electric injury patterns in human skin. Three-dimensional high-resolution MRM images of fixed skin specimens provided a complete 3D view of the damaged tissues at the site of an electric injury as well as in neighboring tissues, consistent with histologic findings. The image intensity of the dermal layer in T2-weighted MRM images was reduced in the central zone due to carbonization or coagulation necrosis and increased in the intermediate zone because of dermal edema. A subjacent blood vessel with an intravascular occlusion supports the hypothesis that current traveled through the vascular system before arcing to ground. High-resolution imaging offers a noninvasive alternative to conventional histology in forensic wound analysis and can be used to perform 3D virtual histology.
Computer-Based Technologies in Dentistry: Types and Applications
Albuha Al-Mussawi, Raja’a M.; Farid, Farzaneh
2016-01-01
During dental education, dental students learn how to examine patients, make diagnoses, plan treatment, and perform dental procedures perfectly and efficiently. However, progress in computer-based technologies, including virtual reality (VR) simulators, augmented reality (AR), and computer-aided design/computer-aided manufacturing (CAD/CAM) systems, has resulted in new modalities for the instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective, and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to simultaneously visualize the surgical site and superimpose informative 3D images of invisible regions on it to serve as a guide. The use of CAD/CAM systems for the design and manufacture of dental appliances and prostheses is well established. This article reviews computer-based technologies, their applications in dentistry, and their potentials and limitations in promoting dental education, training, and practice. Practitioners will be able to choose from a broader spectrum of options in their field of practice by becoming familiar with new modalities of training and practice. PMID:28392819
3D Medical Collaboration Technology to Enhance Emergency Healthcare
Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.
2009-01-01
Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951
Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven R; Conrad, Jens; Nimer Amr, Amr; Gawehn, Joachim; Giese, Alf
2017-08-01
A feasibility study. To develop a method based on the DICOM standard which transfers complex 3-dimensional (3D) trajectories and objects from external planning software to any navigation system for planning and intraoperative guidance of complex spinal procedures. There have been many reports about navigation systems with embedded planning solutions but only few on how to transfer planning data generated in external software. Patients computerized tomography and/or magnetic resonance volume data sets of the affected spinal segments were imported to Amira software, reconstructed to 3D images and fused with magnetic resonance data for soft-tissue visualization, resulting in a virtual patient model. Objects needed for surgical plans or surgical procedures such as trajectories, implants or surgical instruments were either digitally constructed or computerized tomography scanned and virtually positioned within the 3D model as required. As crucial step of this method these objects were fused with the patient's original diagnostic image data, resulting in a single DICOM sequence, containing all preplanned information necessary for the operation. By this step it was possible to import complex surgical plans into any navigation system. We applied this method not only to intraoperatively adjustable implants and objects under experimental settings, but also planned and successfully performed surgical procedures, such as the percutaneous lateral approach to the lumbar spine following preplanned trajectories and a thoracic tumor resection including intervertebral body replacement using an optical navigation system. To demonstrate the versatility and compatibility of the method with an entirely different navigation system, virtually preplanned lumbar transpedicular screw placement was performed with a robotic guidance system. 
The presented method not only allows virtual planning of complex surgical procedures but also the export of objects and surgical plans to any navigation or guidance system able to read DICOM data sets, expanding the possibilities of embedded planning software.
On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial.
Andress, Sebastian; Johnson, Alex; Unberath, Mathias; Winkler, Alexander Felix; Yu, Kevin; Fotouhi, Javad; Weidert, Simon; Osgood, Greg; Navab, Nassir
2018-04-01
Fluoroscopic x-ray guidance is a cornerstone for percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many x-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multimodality marker and a simultaneous localization and mapping technique to co-calibrate an optical see-through head-mounted display to a C-arm fluoroscopy system. Then, annotations on the 2-D x-ray images can be rendered as virtual objects in 3-D, providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, design a feasibility study on a semi-anthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired x-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects that we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.
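The geometric step behind rendering a 2-D x-ray annotation as a virtual 3-D object is back-projecting the annotated pixel into a ray in the device frame; rays from two or more co-calibrated C-arm orientations can then be intersected to localize the annotation in 3-D. A minimal sketch under a pinhole camera model, with hypothetical intrinsics (the abstract does not give the actual calibration):

```python
import numpy as np

def pixel_to_ray(K, u, v):
    """Back-project pixel (u, v) through a pinhole model with intrinsics K.
    Returns a unit direction vector in the camera/C-arm frame."""
    d = np.linalg.solve(K, np.array([u, v, 1.0]))
    return d / np.linalg.norm(d)

# Hypothetical intrinsics: 1000 px focal length, principal point (500, 500)
K = np.array([[1000.0, 0.0, 500.0],
              [0.0, 1000.0, 500.0],
              [0.0, 0.0, 1.0]])
ray = pixel_to_ray(K, 500, 500)  # the principal point maps to the optical axis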
Marescaux, J; Clément, J M; Nord, M; Russier, Y; Tassetti, V; Mutter, D; Cotin, S; Ayache, N
1997-11-01
Surgical simulation increasingly appears to be an essential aspect of tomorrow's surgery. The development of a hepatic surgery simulator is an advanced concept calling for a new writing system which will transform the medical world: virtual reality. Virtual reality extends the perception of our five senses by representing more than the real state of things by the means of computer sciences and robotics. It consists of three concepts: immersion, navigation and interaction. Three reasons have led us to develop this simulator: the first is to provide the surgeon with a comprehensive visualisation of the organ. The second reason is to allow for planning and surgical simulation that could be compared with the detailed flight-plan for a commercial jet pilot. The third lies in the fact that virtual reality is an integrated part of the concept of computer assisted surgical procedure. The project consists of a sophisticated simulator which has to include five requirements: visual fidelity, interactivity, physical properties, physiological properties, sensory input and output. In this report we will describe how to obtain a realistic 3D model of the liver from two-dimensional (2D) medical images for anatomical and surgical training. The introduction of a tumor and the consequent planning and virtual resection is also described, as are force feedback and real-time interaction.
Liu, Xiujuan; Tao, Haiquan; Xiao, Xigang; Guo, Binbin; Xu, Shangcai; Sun, Na; Li, Maotong; Xie, Li; Wu, Changjun
2018-07-01
This study aimed to compare the diagnostic performance of the stereoscopic virtual reality display system with the conventional computed tomography (CT) workstation and three-dimensional rotational angiography (3DRA) for intracranial aneurysm detection and characterization, with a focus on small aneurysms and those near the bone. First, 42 patients with suspected intracranial aneurysms underwent both 256-row CT angiography (CTA) and 3DRA. Volume rendering (VR) images were captured using the conventional CT workstation. Next, VR images were transferred to the stereoscopic virtual reality display system. Two radiologists independently assessed the results that were obtained using the conventional CT workstation and stereoscopic virtual reality display system. The 3DRA results were considered as the ultimate reference standard. Based on 3DRA images, 38 aneurysms were confirmed in 42 patients. Two cases were misdiagnosed and one was missed when the traditional CT workstation was used. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the conventional CT workstation were 94.7%, 85.7%, 97.3%, 75%, and 99.3%, respectively, on a per-aneurysm basis. The stereoscopic virtual reality display system missed a case. The sensitivity, specificity, PPV, NPV, and accuracy of the stereoscopic virtual reality display system were 100%, 85.7%, 97.4%, 100%, and 97.8%, respectively. No difference was observed in the accuracy of the traditional CT workstation, stereoscopic virtual reality display system, and 3DRA in detecting aneurysms. The stereoscopic virtual reality display system has some advantages in detecting small aneurysms and those near the bone. The virtual reality stereoscopic vision obtained through the system was found to be a useful tool in intracranial aneurysm diagnosis and pre-operative 3D imaging. Copyright © 2018 Elsevier B.V. All rights reserved.
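All five per-aneurysm measures reported above derive from the same confusion-matrix counts. A sketch with hypothetical counts (the abstract reports percentages, not the raw true/false positive and negative tallies):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of real aneurysms detected
        "specificity": tn / (tn + fp),   # fraction of negatives correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts: 36 of 38 aneurysms found, 1 false positive, 6 true negatives
m = diagnostic_metrics(tp=36, fp=1, tn=6, fn=2)
```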
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The debonding of a skin/stringer specimen subjected to tension was studied using three-dimensional volume element modeling and computational fracture mechanics. Mixed mode strain energy release rates were calculated from finite element results using the virtual crack closure technique. The simulations revealed an increase in total energy release rate in the immediate vicinity of the free edges of the specimen. Correlation of the computed mixed-mode strain energy release rates along the delamination front contour with a two-dimensional mixed-mode interlaminar fracture criterion suggested that in spite of peak total energy release rates at the free edge, the delamination would not advance at the edges first. The qualitative prediction of the shape of the delamination front was confirmed by X-ray photographs of a specimen taken during testing. The good correlation between prediction based on analysis and experiment demonstrated the efficiency of a mixed-mode failure analysis for the investigation of skin/stiffener separation due to delamination in the adherends. The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is also demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front, was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherends.
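The virtual crack closure technique (VCCT) used in this analysis estimates, for each mode, the energy released as the crack advances by one element length: the nodal force at the crack tip times the relative displacement of the node pair just behind it, divided by twice the released crack area. A minimal one-node sketch (the variable names and numbers are illustrative, not taken from the report):

```python
def vcct_g(force, rel_disp, da, width):
    """Strain energy release rate for one mode via VCCT: work required
    to close the crack over one element, divided by the released area."""
    return force * rel_disp / (2.0 * da * width)

# Mode I (opening) and mode II (sliding) from the tip node's force pairs
g_i = vcct_g(force=100.0, rel_disp=0.002, da=0.5, width=1.0)
g_ii = vcct_g(force=40.0, rel_disp=0.001, da=0.5, width=1.0)
mode_mix = g_ii / (g_i + g_ii)  # mixed-mode ratio G_II / G_T
```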
Introducing a Virtual Reality Experience in Anatomic Pathology Education.
Madrigal, Emilio; Prajapati, Shyam; Hernandez-Prera, Juan C
2016-10-01
A proper examination of surgical specimens is fundamental in anatomic pathology (AP) education. However, the resources available to residents may not always be suitable for efficient skill acquisition. We propose a method to enhance AP education by introducing high-definition videos featuring methods for appropriate specimen handling, viewable on two-dimensional (2D) and stereoscopic three-dimensional (3D) platforms. A stereo camera system recorded the gross processing of commonly encountered specimens. Three edited videos, with instructional audio voiceovers, were experienced by nine junior residents in a crossover study to assess the effects of the exposure (2D vs 3D movie views) on self-reported physiologic symptoms. A questionnaire was used to analyze viewer acceptance. All surveyed residents found the videos beneficial in preparation to examine a new specimen type. Viewer data suggest an improvement in specimen-handling confidence and knowledge, as well as enthusiasm toward 3D technology. None of the participants encountered significant motion sickness. Our novel method provides the foundation to create a robust teaching library. AP is inherently a visual discipline, and by building on the strengths of traditional teaching methods, our dynamic approach allows viewers to appreciate the procedural actions involved in specimen processing. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Venkatesh, S K; Wang, G; Seet, J E; Teo, L L S; Chong, V F H
2013-03-01
To evaluate the feasibility of magnetic resonance imaging (MRI) for the transformation of preserved organs and their disease entities into digital formats for medical education and creation of a virtual museum. MRI of 114 selected pathology specimen jars representing different organs and their diseases was performed using a 3 T MRI machine with two or more MRI sequences including three-dimensional (3D) T1-weighted (T1W), 3D-T2W, 3D-FLAIR (fluid attenuated inversion recovery), fat-water separation (DIXON), and gradient-recalled echo (GRE) sequences. Qualitative assessment of MRI for depiction of disease and internal anatomy was performed. Volume rendering was performed on commercially available workstations. The digital images, 3D models, and photographs of specimens were archived into a workstation serving as a virtual pathology museum. MRI was successfully performed on all specimens. The 3D-T1W and 3D-T2W sequences demonstrated the best contrast between normal and pathological tissues. The digital material is a useful aid for understanding disease by giving insights into internal structural changes not apparent on visual inspection alone. Volume rendering produced vivid 3D models with better contrast between normal tissue and diseased tissue compared to real specimens or their photographs in some cases. The digital library provides good illustration material for radiological-pathological correlation by enhancing pathological anatomy and information on nature and signal characteristics of tissues. In some specimens, the MRI appearance may differ from that of the corresponding organ and disease in vivo due to dead tissue and changes induced by prolonged contact with preservative fluid. MRI of pathology specimens is feasible and provides excellent images for education and creating a virtual pathology museum that can serve as a permanent record of digital material for self-directed learning, improving teaching aids, and radiological-pathological correlation.
Copyright © 2012 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Jacob, Laura Beth
2012-01-01
Virtual world environments have evolved from object-oriented, text-based online games to complex three-dimensional immersive social spaces where the lines between reality and computer-generated begin to blur. Educators use virtual worlds to create engaging three-dimensional learning spaces for students, but the impact of virtual worlds in…
Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier
2016-08-01
The method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing which should be used to create optimal virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65). 
Three models were successfully printed on desktop 3D printers with good quality and accurate representation of the virtual 3D models. We recommend using BP segmentation with either MRA or bSSFP source datasets to create virtual 3D models for 3D printing. Desktop 3D printers can offer good quality printed models with accurate representation of anatomic detail.
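The inter- and intra-rater agreement figures quoted above are intraclass correlation coefficients. One common estimator for two raters scoring the same set of models is the two-way random-effects, absolute-agreement, single-rater form, ICC(2,1); the sketch below implements that variant as an illustration (the abstract does not state which ICC estimator the study used):

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` has one row per subject and one column per rater."""
    n = len(scores)      # subjects (here, printed/virtual models)
    k = len(scores[0])   # raters
    grand = sum(sum(row) for row in scores) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    rater_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ms_r = ss_subj / (n - 1)                               # between subjects
    ms_c = ss_rater / (k - 1)                              # between raters
    ms_e = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```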
Effect of Shear Deformation and Continuity on Delamination Modelling with Plate Elements
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Riddell, W. T.; Raju, I. S.
1998-01-01
The effects of several critical assumptions and parameters on the computation of strain energy release rates for delamination and debond configurations modeled with plate elements have been quantified. The method of calculation is based on the virtual crack closure technique (VCCT), applied to models that represent the upper and lower surfaces of the delamination or debond with two-dimensional (2D) plate elements rather than three-dimensional (3D) solid elements. The major advantages of the plate element modeling technique are a smaller model size and simpler geometric modeling. Specific issues that are discussed include: constraint of translational degrees of freedom, rotational degrees of freedom or both in the neighborhood of the crack tip; element order and assumed shear deformation; and continuity of material properties and section stiffness in the vicinity of the debond front. Where appropriate, the plate element analyses are compared with corresponding two-dimensional plane strain analyses.
The study of early human embryos using interactive 3-dimensional computer reconstructions.
Scarborough, J; Aiton, J F; McLachlan, J C; Smart, S D; Whiten, S C
1997-07-01
Tracings of serial histological sections from 4 human embryos at different Carnegie stages were used to create 3-dimensional (3D) computer models of the developing heart. The models were constructed using commercially available software developed for graphic design and the production of computer generated virtual reality environments. They are available as interactive objects which can be downloaded via the World Wide Web. This simple method of 3D reconstruction offers significant advantages for understanding important events in morphological sciences.
Three-dimensional rendering of segmented object using Matlab - Biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux, which limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface (GUI). This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
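The reassembly step described above — stacking binary slice masks into a 3-D volume and covering the object with a surface — can be sketched as follows. The original work is a Matlab GUI, so this NumPy version is only an illustration of the idea, with a crude 6-neighbour boundary test standing in for a true surface-meshing (isosurface) step:

```python
import numpy as np

def stack_slices(slices):
    """Reassemble a sequence of 2-D binary masks into a (z, y, x) volume."""
    return np.stack([np.asarray(s, dtype=bool) for s in slices], axis=0)

def surface_voxels(volume):
    """Object voxels with at least one background 6-neighbour: a crude
    stand-in for extracting the surface that would later be rendered."""
    v = np.pad(volume, 1, constant_values=False)
    interior = (v[1:-1, 1:-1, 1:-1]
                & v[:-2, 1:-1, 1:-1] & v[2:, 1:-1, 1:-1]
                & v[1:-1, :-2, 1:-1] & v[1:-1, 2:, 1:-1]
                & v[1:-1, 1:-1, :-2] & v[1:-1, 1:-1, 2:])
    return volume & ~interior

# A 3x3x3 solid cube: every voxel except the centre lies on the surface
cube = stack_slices([np.ones((3, 3))] * 3)
```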
3D printing applications for transdermal drug delivery.
Economidou, Sophia N; Lamprou, Dimitrios A; Douroumis, Dennis
2018-06-15
The role of two- and three-dimensional printing as a fabrication technology for sophisticated transdermal drug delivery systems is explored in the literature. 3D printing encompasses a family of distinct technologies that employ a virtual model to produce a physical object through numerically controlled apparatuses. The applicability of several printing technologies has been researched for the direct or indirect printing of microneedle arrays or for the modification of their surface through drug-containing coatings. The findings of the respective studies are presented. The range of printable materials that are currently used or potentially can be employed for 3D printing of transdermal drug delivery (TDD) systems is also reviewed. Moreover, the expected impact and challenges of the adoption of 3D printing as a manufacturing technique for transdermal drug delivery systems are assessed. Finally, this paper outlines the current regulatory framework associated with 3D printed transdermal drug delivery systems. Copyright © 2018 Elsevier B.V. All rights reserved.
The use of 3D planning in facial surgery: preliminary observations.
Hoarau, R; Zweifel, D; Simon, C; Broome, M
2014-12-01
Three-dimensional (3D) planning is becoming a more commonly used tool in maxillofacial surgery. At first used only virtually, 3D planning now also enables the creation of useful intraoperative aids such as cutting guides, which decrease the operative difficulty. In our center, we have used 3D planning in various domains of facial surgery and have investigated the advantages of this technique. We have also addressed the difficulties associated with its use. 3D planning increases the accuracy of reconstructive surgery, decreases operating time, whilst maintaining excellent esthetic results. However, its use is restricted to osseous reconstruction at this stage and once planning has been undertaken, it cannot be reversed or altered intraoperatively. Despite the attractive nature of this new tool, its uses and practicalities must be further evaluated. In particular, cost-effectiveness, hospital stay, and patient perceived benefits must be assessed. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Honigmann, Philipp; Sharma, Neha; Okolo, Brando; Popp, Uwe; Msallem, Bilal; Thieringer, Florian M
2018-01-01
Additive manufacturing (AM) is rapidly gaining acceptance in the healthcare sector. Three-dimensional (3D) virtual surgical planning, fabrication of anatomical models, and patient-specific implants (PSI) are well-established processes in the surgical fields. Polyetheretherketone (PEEK) has been used, mainly in the reconstructive surgeries as a reliable alternative to other alloplastic materials for the fabrication of PSI. Recently, it has become possible to fabricate PEEK PSI with Fused Filament Fabrication (FFF) technology. 3D printing of PEEK using FFF allows construction of almost any complex design geometry, which cannot be manufactured using other technologies. In this study, we fabricated various PEEK PSI by FFF 3D printer in an effort to check the feasibility of manufacturing PEEK with 3D printing. Based on these preliminary results, PEEK can be successfully used as an appropriate biomaterial to reconstruct the surgical defects in a "biomimetic" design.
Visualizing the process of interaction in a 3D environment
NASA Astrophysics Data System (ADS)
Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh
2007-03-01
As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze these data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we attempt to show some methods by which user interaction in a virtual reality environment can be visualized and how this can allow us to gain greater insight into the process of interaction/learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.
Camera pose estimation for augmented reality in a small indoor dynamic scene
NASA Astrophysics Data System (ADS)
Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad
2017-09-01
Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six degrees of freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects since they do not have any information about scene structure and may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand, and improving the precision of the camera pose and the quality of 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process on the other hand. We propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
NASA Astrophysics Data System (ADS)
Kay, Paul A.; Robb, Richard A.; King, Bernard F.; Myers, R. P.; Camp, Jon J.
1995-04-01
Thousands of radical prostatectomies for prostate cancer are performed each year. Radical prostatectomy is a challenging procedure due to anatomical variability and the adjacency of critical structures, including the external urinary sphincter and neurovascular bundles that subserve erectile function. Because of this, there are significant risks of urinary incontinence and impotence following this procedure. Preoperative interaction with three-dimensional visualization of the important anatomical structures might allow the surgeon to understand important individual anatomical relationships of patients. Such understanding might decrease the rate of morbidities, especially for surgeons in training. Patient specific anatomic data can be obtained from preoperative 3D MRI diagnostic imaging examinations of the prostate gland utilizing endorectal coils and phased array multicoils. The volumes of the important structures can then be segmented using interactive image editing tools and then displayed using 3-D surface rendering algorithms on standard workstations. Anatomic relationships can be visualized using surface displays and 3-D colorwash and transparency to allow internal visualization of hidden structures. Preoperatively a surgeon and radiologist can interactively manipulate the 3-D visualizations. Important anatomical relationships can better be visualized and used to plan the surgery. Postoperatively the 3-D displays can be compared to actual surgical experience and pathologic data. Patients can then be followed to assess the incidence of morbidities. More advanced approaches to visualize these anatomical structures in support of surgical planning will be implemented on virtual reality (VR) display systems. Such realistic displays are 'immersive,' and allow surgeons to simultaneously see and manipulate the anatomy, to plan the procedure and to rehearse it in a realistic way.
Ultimately the VR systems will be implemented in the operating room (OR) to assist the surgeon in conducting the surgery. Such an implementation will bring to the OR all of the pre-surgical planning data and rehearsal experience in synchrony with the actual patient and operation to optimize the effectiveness and outcome of the procedure.
Fast 3D NIR systems for facial measurement and lip-reading
NASA Astrophysics Data System (ADS)
Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther
2017-05-01
Structured-light projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, or entertainment. New developments in facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interaction. In such cases, 3D information may offer more detailed information than 2D images, which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different methods of near-infrared projection technologies in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector and a modified multi-aperture projection method and compare their performance parameters to each other. Further, we show some experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.
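In the two-camera stereo setup these NIR projectors illuminate, depth ultimately comes from triangulating correspondences between the rectified views: for a rectified pair, depth equals focal length times baseline over disparity. A sketch with hypothetical numbers (the paper's actual sensor parameters are not given in this abstract):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified stereo triangulation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

# 1200 px focal length, 10 cm baseline, 60 px disparity -> 2 m working distance
z = depth_from_disparity(f_px=1200.0, baseline_m=0.10, disparity_px=60.0)
```

The structured NIR patterns exist to make the correspondence (disparity) search dense and reliable even on textureless skin; the triangulation itself is this one-line relation.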
Research on Visualization of Ground Laser Radar Data Based on OSG
NASA Astrophysics Data System (ADS)
Huang, H.; Hu, C.; Zhang, F.; Xue, H.
2018-04-01
Three-dimensional (3D) laser scanning is a new advanced technology integrating optics, mechanics, electronics, and computer technologies. It can scan the whole shape and form of spatial objects with high precision. With this technology, the point cloud data of a ground object can be collected directly and used to create a structure for rendering. An excellent 3D rendering engine is needed to optimize and display the 3D model in order to meet the higher requirements of real-time realistic rendering and scene complexity. OpenSceneGraph (OSG) is an open-source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend; it is therefore widely used in the fields of virtual simulation, virtual reality, and scientific and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is constructed based on OSG and the cross-platform C++ application development framework Qt. For point cloud data in .txt format and triangulation network data files in .obj format, the platform implements display of 3D laser point clouds and triangulation network data. Experiments show that the platform is of strong practical value, as it is easy to operate and provides good interaction.
ERIC Educational Resources Information Center
Hartwick, Peggy
2018-01-01
This article investigates research approaches used in traditional classroom-based interaction studies for identifying a suitable research method for studies in three-dimensional virtual learning environments (3DVLEs). As opportunities for language learning and teaching in virtual worlds emerge, so too do new research questions. An understanding of…
NASA Astrophysics Data System (ADS)
Chen, Shuzhe; Huang, Liwen
The Yangtze River in the Chongqing area is continuously winding; its hydrology and channel conditions are complex, and shipping traffic is heavy. With shipments of hazardous chemicals increasing year by year, the risk of oil spill accidents is rising, so establishing a three-dimensional virtual simulation of oil spills and applying it in decision-making has become an urgent task. This paper details the process of three-dimensional virtual simulation of oil spills and establishes a three-dimensional virtual oil spill simulation system for the Yangtze River in the Chongqing area by building an oil spill model of the area based on an oil-particle model. The system has been used in emergency decision-making to provide assistance for oil spill response.
NASA Astrophysics Data System (ADS)
Moreno-Casas, P. A.; Bombardelli, F. A.
2015-12-01
A 3D Lagrangian particle tracking model is coupled to a 3D channel velocity field to simulate the saltation motion of a single sediment particle moving in saltation mode. The turbulent field is a high-resolution three-dimensional velocity field that reproduces a bypass transition to turbulence on a flat plate due to free-stream turbulence passing above the plate. In order to reduce computational costs, a decoupled approach is used, i.e., the turbulent flow is simulated independently from the tracking model, and then used to feed the 3D Lagrangian particle model. The simulations are carried out using the point-particle approach. The particle tracking model contains three sub-models, namely, particle free-flight, post-collision velocity, and bed representation sub-models. The free-flight sub-model considers the action of the following forces: submerged weight, nonlinear drag, lift, virtual mass, Magnus and Basset forces. The model also includes the effect of particle angular velocity. The post-collision velocities are obtained by applying conservation of angular and linear momentum. The complete model was validated with experimental results from the literature within the sand range. Results for particle velocity time series and distribution of particle turbulent intensities are presented.
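To illustrate the free-flight sub-model's force balance in its simplest form, the sketch below integrates a single grain settling in still water, keeping only two of the listed forces: submerged weight and nonlinear drag (lift, virtual mass, Magnus and Basset forces are omitted, and all parameter values are ours, not the study's):

```python
import math

def settle(d=0.001, rho_p=2650.0, rho_f=1000.0, cd=0.44, g=9.81,
           dt=1e-4, steps=20000):
    """Explicit-Euler free flight of one sand grain in still water:
    submerged weight vs. nonlinear (quadratic) drag, 1-D vertical."""
    v = 0.0
    for _ in range(steps):
        drag = 3.0 * rho_f * cd * abs(v) * v / (4.0 * rho_p * d)
        v += (g * (1.0 - rho_f / rho_p) - drag) * dt
    return v

# Analytic terminal velocity for the same drag law, for comparison
v_term = math.sqrt(4 * 9.81 * 0.001 * (2650.0 - 1000.0) / (3 * 0.44 * 1000.0))
v_num = settle()
```

After two simulated seconds the integrated velocity should have converged to the analytic terminal value, which is a quick sanity check on any implementation of this sub-model.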
A review on noise suppression and aberration compensation in holographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Tamrin, K. F.; Rahmatullah, B.
2016-12-01
Understanding three-dimensional (3D) fluid flow behaviour is undeniably crucial to improving performance and efficiency in a wide range of applications in engineering and medical fields. Holographic particle image velocimetry (HPIV) is a potential tool for probing and characterizing complex flow dynamics, since it is a truly three-dimensional, three-component measurement technique. The technique relies on the coherent light scattered by small seeding particles, which are assumed to faithfully follow the flow, for subsequent reconstruction of the event. However, extraction of useful 3D displacement data from these particle images is usually aggravated by noise and aberration inherent in the optical system. Noise and aberration have been considered major hurdles in HPIV to obtaining accurate particle image identification and the corresponding 3D positions. Major contributors to noise include zero-order diffraction, out-of-focus particles, the virtual image and emulsion grain scattering. Noise suppression is crucial to ensure that particle images can be distinctly differentiated from background noise, while aberration compensation forms particle images with high integrity. This paper reviews a number of HPIV configurations that have been proposed to address these issues, summarizes the key findings and outlines a basis for follow-on research.
Evaluating statistical cloud schemes: What can we gain from ground-based remote sensing?
NASA Astrophysics Data System (ADS)
Grützun, V.; Quaas, J.; Morcrette, C. J.; Ament, F.
2013-09-01
Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based remote sensing such as lidar, microwave, and radar to evaluate prognostic distribution moments using the "perfect model approach": we employ a high-resolution weather model as virtual reality and retrieve both the full three-dimensional atmospheric quantities and virtual ground-based observations. We then use statistics from the virtual observations to validate the modeled 3-D statistics. Since the data are entirely consistent, any discrepancy that occurs is due to the method. Focusing on total water mixing ratio, we find that the mean can be evaluated decently, but whether the variance and skewness are reliable depends strongly on the meteorological conditions. Using a simple schematic description of different synoptic conditions, we show how statistics obtained from point or line measurements can be poor at representing the full three-dimensional distribution of water in the atmosphere. We argue that a careful analysis of measurement data and detailed knowledge of the meteorological situation are necessary to judge whether the data can be used to evaluate the higher moments of the humidity distribution used by a statistical cloud scheme.
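The "perfect model" comparison (full 3-D statistics versus statistics from a virtual point or line observation) can be sketched as follows. The synthetic lognormal field, the grid size, and the use of a single vertical profile as the virtual observation are illustrative assumptions, not the authors' setup.

```python
import random
import statistics as st

def moments(samples):
    """Mean, variance, and (population) skewness of sampled mixing ratios."""
    m = st.fmean(samples)
    var = st.pvariance(samples, mu=m)
    sd = var ** 0.5
    skew = (sum((x - m) ** 3 for x in samples) / (len(samples) * sd ** 3)
            if sd > 0 else 0.0)
    return m, var, skew

# "Virtual reality": a synthetic 3-D total-water field on a small grid
random.seed(42)
nx = ny = nz = 12
field = [[[random.lognormvariate(0.0, 0.5) for _ in range(nz)]
          for _ in range(ny)] for _ in range(nx)]

# Full 3-D statistics (what the cloud scheme should reproduce) ...
all_points = [field[i][j][k]
              for i in range(nx) for j in range(ny) for k in range(nz)]
full = moments(all_points)

# ... versus a single vertical "profile", i.e. a ground-based point measurement
profile = [field[0][0][k] for k in range(nz)]
virtual_obs = moments(profile)
```

Comparing `full` with `virtual_obs` shows directly how a point measurement can misrepresent the higher moments of the full 3-D distribution.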
Toward a virtual reconstruction of an antique three-dimensional marble puzzle
NASA Astrophysics Data System (ADS)
Benamar, Fatima Zahra; Fauvet, Eric; Hostein, Antony; Laligant, Olivier; Truchetet, Frederic
2017-01-01
The reconstruction of broken objects is an important field of research for many applications, such as art restoration, surgery, forensics, and solving puzzles. In archaeology, the reconstruction of broken artifacts is a very time-consuming task due to the handling of fractured objects, which are generally fragile. However, it can now be supported by three-dimensional (3-D) data acquisition devices and computer processing. These techniques are very useful in this domain because they allow the remote handling of very accurate models of fragile parts, they permit extensive testing of reconstruction solutions, and they give the entire research community access to the parts. An interesting problem has recently been posed by archaeologists in the form of a huge puzzle composed of a thousand fragments of Pentelic marble of different sizes found in Autun (France); all attempts to reconstruct the puzzle during the last two centuries have failed. Archaeologists are certain that some fragments are missing and that some of the recovered fragments come from different slabs. We propose an inexpensive, transportable 3-D acquisition setup and a 3-D reconstruction method that is applied to this Roman inscription but is also relevant to other applications.
ERIC Educational Resources Information Center
Nussli, Natalie; Oh, Kevin
2016-01-01
This case study describes how a systematic 7-Step Virtual Worlds Teacher Training Workshop guided the enculturation of 18 special education teachers into three-dimensional virtual worlds. The main purpose was to enable these teachers to make informed decisions about the usability of virtual worlds for students with social skills challenges, such…
Yee, Sophia Hui Xin; Esguerra, Roxanna Jean; Chew, Amelia Anya Qin'An; Wong, Keng Mun; Tan, Keson Beng Choon
2018-02-01
Accurate maxillomandibular relationship transfer is important for CAD/CAM prostheses. This study compared the 3D accuracy of virtual model static articulation in three laboratory scanner-CAD systems (Ceramill Map400 [AG], inEos X5 [SIR], Scanner S600 Arti [ZKN]) using two virtual articulation methods: mounted models (MO) and interocclusal record (IR). The master model simulated a single crown opposing a 3-unit fixed partial denture. Reference values were obtained by measuring interarch and interocclusal reference features with a coordinate measuring machine (CMM). MO group stone casts were articulator-mounted with acrylic resin bite registrations, while IR group casts were hand-articulated with poly(vinyl siloxane) bite registrations. Five test model sets were scanned and articulated virtually with each system (6 test groups, 15 data sets). STL files of the virtual models were measured with CMM software. dR_R, dR_C, and dR_L represented interarch global distortions at the right, central, and left sides, respectively, while dR_M, dX_M, dY_M, and dZ_M represented interocclusal global and linear distortions between preparations. Mean interarch 3D distortion ranged from -348.7 to 192.2 μm for dR_R, -86.3 to 44.1 μm for dR_C, and -168.1 to 4.4 μm for dR_L. Mean interocclusal distortion ranged from -257.2 to -85.2 μm for dR_M, -285.7 to 183.9 μm for dX_M, -100.5 to 114.8 μm for dY_M, and -269.1 to -50.6 μm for dZ_M. ANOVA showed that articulation method had a significant effect on dR_R and dX_M, while system had a significant effect on dR_R, dR_C, dR_L, dR_M, and dZ_M. There were significant differences between the 6 test groups for dR_R, dR_L, dX_M, and dZ_M. dR_R and dX_M were significantly greater in AG-IR, which differed significantly from SIR-IR, ZKN-IR, and all MO groups. Interarch and interocclusal distances increased in MO groups, while they decreased in IR groups.
AG-IR had the greatest interarch distortion as well as interocclusal superior-inferior distortion. The other groups performed similarly to each other, and the overall interarch distortion did not exceed 0.7%. In these systems and articulation methods, interocclusal distortions may result in hyper- or infra-occluded prostheses. © 2017 by the American College of Prosthodontists.
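As a rough illustration of the distortion metrics reported above, a signed global distortion can be computed as the change in distance between two reference features on the test articulation relative to the master model, with per-axis linear components. The helper names, and the interpretation of dR as a signed distance change (negative when features move closer, as in the IR groups), are assumptions for this sketch, not the study's exact CMM protocol.

```python
import math

def global_distortion(ref_a, ref_b, test_a, test_b):
    """Signed 3-D global distortion of the distance between two reference
    features: (test-model distance) - (master-model distance), same units.
    A negative value means the features moved closer together."""
    return math.dist(test_a, test_b) - math.dist(ref_a, ref_b)

def linear_distortions(ref_a, ref_b, test_a, test_b):
    """Per-axis (dX, dY, dZ) components of the same feature-pair comparison."""
    ref = [b - a for a, b in zip(ref_a, ref_b)]
    test = [b - a for a, b in zip(test_a, test_b)]
    return tuple(t - r for r, t in zip(ref, test))
```

For example, if two interocclusal features measured 10.0 mm apart on the master model measure 9.8 mm apart on a virtually articulated test model, the global distortion is -0.2 mm (a hyper-occluded direction in this sign convention).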
Feng, Zhi-hong; Dong, Yan; Bai, Shi-zhu; Wu, Guo-feng; Bi, Yun-peng; Wang, Bo; Zhao, Yi-min
2010-01-01
The aim of this article was to demonstrate a novel approach to designing facial prostheses using the transplantation concept and computer-assisted technology for extensive, large maxillofacial defects that cross the facial midline. The three-dimensional (3D) facial surface images of a patient and his relative were reconstructed using data obtained through optical scanning. Based on these images, the corresponding portion of the relative's face was transplanted onto the defect site on the patient's face, which could not be rehabilitated using mirror projection, to design the virtual facial prosthesis without the eye. A 3D model of an artificial eye that mimicked the patient's remaining one was developed, transplanted, and fit onto the virtual prosthesis. A personalized retention structure for the artificial eye was designed on the virtual facial prosthesis. The wax prosthesis was manufactured through rapid prototyping, and the definitive silicone prosthesis was completed. The size, shape, and cosmetic appearance of the prosthesis were satisfactory and matched the defect area well. The patient's facial appearance was recovered perfectly with the prosthesis, as determined through clinical evaluation. The optical 3D imaging and computer-aided design/computer-assisted manufacturing system used in this study can design and fabricate facial prostheses more precisely than conventional manual sculpting techniques, and the discomfort generally associated with such conventional methods was greatly decreased. The virtual transplantation used to design the facial prosthesis for a maxillofacial defect crossing the facial midline, and the development of the retention structure for the eye, were both feasible.
Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
NASA Astrophysics Data System (ADS)
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
2010-01-01
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and to interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interaction and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a video see-through head-mounted display (HMD) for visualization, whereas the user's movement in the real world is tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There remain, however, some complex software issues, which are discussed in the paper.
Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.
Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong
2006-04-01
This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions is restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration, and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis is used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring involving both real and rendered bronchoscope images is conducted.
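The shading-prediction step can be sketched under a strong simplification: a Lambertian surface lit by a point source co-located with the camera, which approximates the bronchoscope's restricted lighting geometry. The paper recovers full BRDF parameters; the Lambertian model, the function names, and the simple divide-out texture extraction below are illustrative assumptions.

```python
import math

def predicted_shading(point, normal, camera, albedo=1.0):
    """Expected shading intensity for a Lambertian surface lit by a point
    source co-located with the camera: albedo * cos(theta) / r^2."""
    to_cam = [c - p for c, p in zip(camera, point)]
    r2 = sum(c * c for c in to_cam)
    r = math.sqrt(r2)
    n_norm = math.sqrt(sum(n * n for n in normal))
    # Clamp to zero for surfaces facing away from the light/camera
    cos_t = max(0.0, sum(n * t for n, t in zip(normal, to_cam)) / (n_norm * r))
    return albedo * cos_t / r2

def lighting_free_texture(observed_intensity, point, normal, camera):
    """Divide out the predicted shading so the remaining texture value is
    (approximately) independent of the viewing/illumination geometry."""
    shade = predicted_shading(point, normal, camera)
    return observed_intensity / shade if shade > 0 else 0.0
```

Re-rendering a new view then amounts to multiplying the extracted texture by the shading predicted for the new camera pose.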
Simulating Geriatric Home Safety Assessments in a Three-Dimensional Virtual World
ERIC Educational Resources Information Center
Andrade, Allen D.; Cifuentes, Pedro; Mintzer, Michael J.; Roos, Bernard A.; Anam, Ramanakumar; Ruiz, Jorge G.
2012-01-01
Virtual worlds could offer inexpensive and safe three-dimensional environments in which medical trainees can learn to identify home safety hazards. Our aim was to evaluate the feasibility, usability, and acceptability of virtual worlds for geriatric home safety assessments and to correlate performance efficiency in hazard identification with…
Hayashi, Kazuo; Chung, Onejune; Park, Seojung; Lee, Seung-Pyo; Sachdeva, Rohit C L; Mizoguchi, Itaru
2015-03-01
Virtual 3-dimensional (3D) models obtained by scanning of physical casts have become an alternative to conventional dental cast analysis in orthodontic treatment. If the precision (reproducibility) of virtual 3D model analysis can be further improved, digital orthodontics could be even more widely accepted. The purpose of this study was to clarify the influence of "standardization" of the target points for dental cast analysis using virtual 3D models. Physical plaster models were also measured to obtain additional information. Five sets of dental casts were used. The dental casts were scanned with R700 (3Shape, Copenhagen, Denmark) and REXCAN DS2 3D (Solutionix, Seoul, Korea) scanners. In this study, 3 system and software packages were used: SureSmile (OraMetrix, Richardson, Tex), Rapidform (Inus, Seoul, Korea), and I-DEAS (SDRC, Milford, Conn). Without standardization, the maximum differences were observed between the SureSmile software and the Rapidform software (0.39 mm ± 0.07). With standardization, the maximum differences were observed between the SureSmile software and measurements with a digital caliper (0.099 mm ± 0.01), and this difference was significantly greater (P <0.05) than the 2 other mean difference values. Furthermore, the results of this study showed that the mean differences "WITH" standardization were significantly lower than those "WITHOUT" standardization for all systems, software packages, or methods. The results showed that elimination of the influence of usability or habituation is important for improving the reproducibility of dental cast analysis. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Al-Ardah, Aladdin; Alqahtani, Nasser; AlHelal, Abdulaziz; Goodacre, Brian; Swamidass, Rajesh; Garbacea, Antoanela; Lozada, Jaime
2018-05-02
This technique describes a novel approach for planning and augmenting a large bony defect using a titanium mesh (TiMe). A 3-dimensional (3D) surgical model was virtually created from a cone beam computed tomography (CBCT) and wax-pattern of the final prosthetic outcome. The required bone volume (horizontally and vertically) was digitally augmented and then 3D printed to create a bone model. The 3D model was then used to contour the TiMe in accordance with the digital augmentation. With the contoured / preformed TiMe on the 3D printed model a positioning jig was made to aid the placement of the TiMe as planned during surgery. Although this technique does not impact the final outcome of the augmentation procedure, it allows the clinician to virtually design the augmentation, preform and contour the TiMe, and create a positioning jig reducing surgical time and error.
[Application and prospect of digital technology in the field of orthodontics].
Zhou, Y H
2016-06-01
Three-dimensional (3D) digital technology has brought a revolutionary change to diagnostic planning and treatment strategy in orthodontics. Acquisition of 3D image data of the patient's hard and soft tissues, diagnostic analysis and treatment prediction, and ultimately an individualized orthodontic appliance will become the development trend and workflow of 3D orthodontics. With the development of 3D digital technology, the traditional plaster model has gradually been replaced by 3D digital models. Meanwhile, 3D facial soft tissue scans and cone-beam CT scans have gradually been applied in clinical orthodontics, making it possible to obtain a 3D virtual anatomical structure for each patient. With the help of digital technology, the diagnostic process is much easier for the orthodontist. However, there is still a long way to go before the whole digital workflow can be mastered and put into daily practice. The purpose of this article is to inform orthodontists interested in digital technology and to discuss the future of digital orthodontics in China.
Shen, Xin; Javidi, Bahram
2018-03-01
We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with enhanced depth range of the 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images at various positions to the micro lens array. Based on resolution-priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and mitigates the vergence-accommodation conflict of a traditional augmented reality display.
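The role of the focus-tunable lens can be illustrated with the thin-lens equation: changing the focal length shifts the relayed image plane, and focal lengths on either side of the object distance place that plane in the real or the virtual imaging field. The numbers below are illustrative, not the system's actual optical parameters.

```python
def image_distance(f, do):
    """Thin-lens image distance d_i for focal length f and object distance
    d_o, from 1/f = 1/d_o + 1/d_i. A negative result indicates a virtual
    image (on the same side as the object)."""
    if abs(do - f) < 1e-12:
        raise ValueError("object at focal plane: image at infinity")
    return f * do / (do - f)

# Sweeping the tunable lens focal length relays the elemental images to
# different planes, spanning both the real and the virtual imaging fields.
relayed = [image_distance(f, do=0.10) for f in (0.05, 0.08, 0.12)]
```

With the object fixed at 0.10 m, f = 0.05 m and f = 0.08 m produce real image planes at increasing distances, while f = 0.12 m produces a virtual image plane, which is how the tunable lens extends the usable depth range.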
ERIC Educational Resources Information Center
Zacharis, Georgios S.; Mikropoulos, Tassos A.; Priovolou, Chryssi
2013-01-01
Previous studies report the involvement of specific brain activation in stereoscopic vision and the perception of depth information. This work presents the first comparative results of adult women on the effects of stereoscopic perception in three different static environments; a real, a two dimensional (2D) and a stereoscopic three dimensional…
NASA Astrophysics Data System (ADS)
Hatfield, Fraser N.; Dehmeshki, Jamshid
1998-09-01
Neurosurgery is an extremely specialized area of medical practice, requiring many years of training. It has been suggested that virtual reality models of the complex structures within the brain may aid in the training of neurosurgeons as well as play an important role in the preparation for surgery. This paper focuses on the application of a probabilistic neural network to the automatic segmentation of the ventricles from magnetic resonance images of the brain, and on their three-dimensional visualization.
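A probabilistic neural network is essentially a Parzen-window classifier: each class score is an averaged Gaussian kernel response over that class's training patterns, and the highest-scoring class wins. The toy sketch below, with made-up 1-D intensity features for "ventricle" versus "background" voxels and an arbitrary kernel width, only illustrates the principle, not the paper's feature set.

```python
import math

def pnn_classify(x, training, sigma=1.0):
    """Minimal probabilistic neural network (Parzen-window) classifier.

    `training` maps class label -> list of feature vectors. Each class score
    is the mean Gaussian kernel response of its training patterns at x.
    """
    def kernel(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2.0 * sigma ** 2))

    scores = {label: sum(kernel(x, p) for p in patterns) / len(patterns)
              for label, patterns in training.items()}
    return max(scores, key=scores.get)

# Toy 1-D intensity features: dark CSF-filled ventricles vs brighter tissue
training = {"ventricle": [[0.1], [0.15], [0.2]],
            "background": [[0.7], [0.8], [0.9]]}
label = pnn_classify([0.18], training, sigma=0.1)
```

In a segmentation setting, this per-voxel decision would be applied across the MR volume and the resulting ventricle mask passed to the 3D visualization stage.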
Zeng, Canjun; Xing, Weirong; Wu, Zhanglin; Huang, Huajun; Huang, Wenhua
2016-10-01
Treatment of acetabular fractures remains one of the most challenging tasks that orthopaedic surgeons face. An accurate assessment of the injuries and preoperative planning are essential for an excellent reduction. The purpose of this study was to evaluate the feasibility, accuracy and effectiveness of 3D printing technology and computer-assisted virtual surgical procedures for preoperative planning in acetabular fractures. We hypothesised that more accurate preoperative planning using 3D printing models would reduce the operation time and significantly improve the outcome of acetabular fracture repair. Ten patients with acetabular fractures were recruited prospectively and examined by CT scanning. A 3-D model of each acetabular fracture was reconstructed with MIMICS 14.0 software from the DICOM file of the CT data. Bone fragments were moved and rotated to simulate fracture reduction and restore pelvic integrity with virtual fixation. The computer-assisted 3D image of the reduced acetabulum was printed for surgery simulation and plate pre-bending. A postoperative CT scan was performed to compare the consistency of the preoperative planning with the surgical implants by 3D superimposition in MIMICS 14.0, evaluated by Matta's method. Computer-based pre-operations were precisely mimicked and consistent with the actual operations in all cases. The pre-bent fixation plates had an anatomical shape specifically fit to the individual pelvis without further bending or adjustment at the time of surgery, and fracture reductions were significantly improved. Seven out of 10 patients had a displacement of fracture reduction of less than 1 mm; 3 cases had a displacement of fracture reduction between 1 and 2 mm. The 3D printing technology combined with virtual surgery for acetabular fractures is feasible, accurate, and effective, leading to improved patient-specific preoperative planning and outcome of the real surgery.
The results provide useful technical tips in planning pelvic surgeries. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fischer, Gerrit; Stadie, Axel; Schwandt, Eike; Gawehn, Joachim; Boor, Stephan; Marx, Juergen; Oertel, Joachim
2009-05-01
The aim of the authors in this study was to introduce a minimally invasive superficial temporal artery to middle cerebral artery (STA-MCA) bypass surgery by the preselection of appropriate donor and recipient branches in a 3D virtual reality setting based on 3-T MR angiography data. An STA-MCA anastomosis was performed in each of 5 patients. Before surgery, 3-T MR imaging was performed with 3D magnetization-prepared rapid acquisition gradient echo sequences, and a high-resolution CT 3D dataset was obtained. Image fusion and the construction of a 3D virtual reality model of each patient were completed. In the 3D virtual reality setting, the skin surface, skull surface, and extra- and intracranial arteries as well as the cortical brain surface could be displayed in detail. The surgical approach was successfully visualized in virtual reality. The anatomical relationship of structures of interest could be evaluated based on different values of translucency in all cases. The closest point of the appropriate donor branch of the STA and the most suitable recipient M3 or M4 segment could be calculated with high accuracy preoperatively and determined as the center point of the following minicraniotomy. Localization of the craniotomy and the skin incision on top of the STA branch was calculated with the system, and these data were transferred onto the patient's skin before surgery. In all cases the preselected arteries could be found intraoperatively in exact agreement with the preoperative planning data. Successful extracranial-intracranial bypass surgery was achieved without stereotactic neuronavigation via a preselected minimally invasive approach in all cases. Subsequent enlargement of the craniotomy was not necessary. Perioperative complications were not observed. All bypasses remained patent on follow-up. With the application of a 3D virtual reality planning system, the extent of skin incision and tissue trauma as well as the size of the bone flap was minimal. 
The closest point of the appropriate donor branch of the STA and the most suitable recipient M3 or M4 segment could be preoperatively determined with high accuracy so that the STA-MCA bypass could be safely and effectively performed through an optimally located minicraniotomy with a mean diameter of 22 mm without the need for stereotactic guidance.
Virtual Reality Website of Indonesia National Monument and Its Environment
NASA Astrophysics Data System (ADS)
Wardijono, B. A.; Hendajani, F.; Sudiro, S. A.
2017-02-01
The National Monument (Monumen Nasional) is Indonesia's national monument, located in Jakarta. The monument is a symbol of Jakarta and a source of pride for the people of Jakarta and of Indonesia as a whole. It also houses a museum on the history of the Indonesian nation. To provide information to the general public, in this research we created and developed 3D graphics models of the National Monument and its surrounding environment. Virtual reality technology was used to display the visualization of the National Monument and its surroundings in 3D graphics form. The latest programming technology makes it possible to display 3D objects via an internet browser; this research used Unity3D and WebGL to build virtual reality models that can be implemented and shown on a website. The result of this research is a 3-dimensional website of the National Monument and the objects in its surrounding environment that can be displayed through a web browser. The virtual reality content was divided into a number of scenes so that it can be displayed with real-time visualization.
[Quality assurance of a virtual simulation software: application to IMAgo and SIMAgo (ISOgray)].
Isambert, A; Beaudré, A; Ferreira, I; Lefkopoulos, D
2007-06-01
The virtual simulation process is often used to prepare three-dimensional conformal radiation therapy treatments. As the quality of the treatment depends widely on this step, it is mandatory to perform extensive controls on this software before clinical use. The tests presented in this work were carried out on the treatment planning system ISOgray (DOSIsoft), including the delineation module IMAgo and the virtual simulation module SIMAgo. Based on our experience, the most relevant controls from international protocols were selected. These tests mainly focused on measuring and delineation tools and on virtual simulation functionalities, and were performed with three phantoms: the Quasar Multi-Purpose Body Phantom, the Quasar MLC Beam Geometry Phantom (Modus Medical Devices Inc.) and a phantom developed at Hospital Tenon. No major issues were identified while performing the tests. These controls emphasized the necessity for the user to consider the results displayed by virtual simulation software with a critical eye. The visualization contrast, the slice thickness, and the calculation and display modes of the 3D structures used by the software are all sources of uncertainty. A quality assurance procedure for virtual simulation software was written and applied to a set of CT images. Similar tests have to be performed periodically and, at minimum, at each change of major version.
Visualizing Mars Using Virtual Reality: A State of the Art Mapping Technique Used on Mars Pathfinder
NASA Technical Reports Server (NTRS)
Stoker, C.; Zbinden, E.; Blackmon, T.; Nguyen, L.
1999-01-01
We describe an interactive terrain visualization system which rapidly generates and interactively displays photorealistic three-dimensional (3-D) models produced from stereo images. This product, first demonstrated on Mars Pathfinder, is interactive and 3-D, and can be viewed in an immersive display, which qualifies it for the name Virtual Reality (VR). The use of this technology on Mars Pathfinder was the first use of VR for geologic analysis. A primary benefit of using VR to display geologic information is that it provides an improved perception of the depth and spatial layout of the remote site. The VR aspect of the display allows an operator to move freely in the environment, unconstrained by the physical limitations of the perspective from which the data were acquired. Virtual Reality offers a way to archive and retrieve information in a way that is intuitively obvious. Combining VR models with stereo display systems can give the user a sense of presence at the remote location. The capability to interactively perform measurements from within the VR model offers unprecedented ease in performing operations that are normally time consuming and difficult with other techniques. Thus, Virtual Reality can be a powerful cartographic tool. Additional information is contained in the original extended abstract.
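The 3-D model generation and in-model measurement described above rest on standard stereo geometry: depth is recovered from disparity (Z = fB/d for a rectified pinhole pair), and a measurement is the distance between reconstructed points. The sketch below uses this textbook model with illustrative parameters; it is not the Pathfinder pipeline.

```python
import math

def stereo_point(f, baseline, xl, xr, y):
    """Recover a 3-D point from a rectified stereo pair (pinhole model):
    depth Z = f * B / disparity, then back-project the left-image pixel.
    f and the pixel coordinates are in the same (e.g. pixel) units;
    baseline and the returned point are in world units."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity")
    z = f * baseline / disparity
    return (xl * z / f, y * z / f, z)

def measure(p, q):
    """Interactive in-model measurement: distance between two picked points."""
    return math.dist(p, q)
```

With f = 700 px and a 0.1 m baseline, a 5 px disparity corresponds to a point 14 m away; distances between any two such reconstructed points can then be measured directly in the virtual model.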
NASA Astrophysics Data System (ADS)
Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel
2017-03-01
Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. 
In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic visualization platform for exploring and understanding human anatomy. This system can present medical imaging data in three dimensions and allows for direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional, 2D display and interaction modalities, and in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.
Holographic and light-field imaging for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Hong, Jong-Young; Jang, Changwon; Jeong, Jinsoo; Lee, Chang-Kun
2017-02-01
We discuss the recent state of augmented reality (AR) display technology. In order to realize AR, various see-through three-dimensional (3D) display techniques have been reported. We describe AR displays with 3D functionality, such as light-field displays and holography. See-through light-field displays can be categorized by the optical elements used to achieve the see-through property: optical elements controlling the path of the light fields and those generating a see-through light field. Holographic displays can also be good candidates for AR displays because they can reconstruct wavefront information and provide realistic virtual information. We introduce see-through holographic displays using various optical techniques.
Biwasaka, Hitoshi; Saigusa, Kiyoshi; Aoki, Yasuhiro
2005-03-01
In this study, the applicability of holography to the 3-dimensional recording of forensic objects such as skulls and mandibulae, and the accuracy of the reconstructed 3-D images, were examined. The virtual holographic image, which records the 3-dimensional data of the original object, is visually observed on the other side of the holographic plate and reproduces the 3-dimensional shape of the object well. Another type of holographic image, the real image, is focused on a frosted glass screen, and cross-sectional images of the object can be observed. When measuring the distances between anatomical reference points using image-processing software, the average deviations in the holographic images compared to the actual objects were less than 0.1 mm. Therefore, holography could be useful as a 3-dimensional recording method for forensic objects. Two superimposition systems using holographic images were examined. In the 2D-3D system, the transparent virtual holographic image of an object is directly superimposed onto the digitized photograph of the same object on an LCD monitor. In the video system, by contrast, the holographic image captured by a CCD camera is superimposed onto the digitized photographic image using a personal computer. We found that the discrepancy between the outlines of the superimposed holographic and photographic dental images was smaller with the video system than with the 2D-3D system. Holography performed comparably to the computer graphic system; however, fusion with digital techniques would expand the utility of holography in superimposition.
3D indoor modeling using a hand-held embedded system with multiple laser range scanners
NASA Astrophysics Data System (ADS)
Hu, Shaoxing; Wang, Duhu; Xu, Shike
2016-10-01
Accurate three-dimensional perception is a key technology for many engineering applications, including mobile mapping, obstacle detection, and virtual reality. In this article, we present a hand-held embedded system designed for constructing 3D representations of structured indoor environments. Unlike traditional vehicle-borne mobile mapping methods, the system presented here can efficiently acquire 3D data while an operator carrying the device traverses the site. It consists of a simultaneous localization and mapping (SLAM) module, a 3D attitude estimation module, and a point cloud processing module. The SLAM is based on a scan matching approach using a modern LIDAR system, and the 3D attitude estimate is generated by a navigation filter using inertial sensors. The hardware comprises three 2D time-of-flight laser range finders and an inertial measurement unit (IMU). All the sensors are rigidly mounted on a body frame. The algorithms are developed within the Robot Operating System (ROS) framework, and the 3D model is constructed using the Point Cloud Library (PCL). Multiple datasets have shown robust performance of the presented system in indoor scenarios.
EEG Control of a Virtual Helicopter in 3-Dimensional Space Using Intelligent Control Strategies
Royer, Audrey S.; Doud, Alexander J.; Rose, Minn L.
2011-01-01
Films like Firefox, Surrogates, and Avatar have explored the possibilities of using brain-computer interfaces (BCIs) to control machines and replacement bodies with only thought. Real world BCIs have made great progress toward that end. Invasive BCIs have enabled monkeys to fully explore 3-dimensional (3D) space using neuroprosthetics. However, non-invasive BCIs have not been able to demonstrate such mastery of 3D space. Here, we report our work, which demonstrates that human subjects can use a non-invasive BCI to fly a virtual helicopter to any point in a 3D world. Through use of intelligent control strategies, we have facilitated the realization of controlled flight in 3D space. We accomplished this through a reductionist approach that assigns subject-specific control signals to the crucial components of 3D flight. Subject control of the helicopter was comparable when using either the BCI or a keyboard. By using intelligent control strategies, the strengths of both the user and the BCI system were leveraged and accentuated. Intelligent control strategies in BCI systems such as those presented here may prove to be the foundation for complex BCIs capable of doing more than we ever imagined. PMID:20876032
Fónyad, László; Shinoda, Kazunobu; Farkash, Evan A; Groher, Martin; Sebastian, Divya P; Szász, A Marcell; Colvin, Robert B; Yagi, Yukako
2015-03-28
Chronic allograft vasculopathy (CAV) is a major mechanism of graft failure of transplanted organs in humans. Morphometric analysis of coronary arteries enables the quantitation of CAV in mouse models of heart transplantation. However, conventional histological procedures using single 2-dimensional sections limit the accuracy of CAV quantification. The aim of this study is to improve the accuracy of CAV quantification by reconstructing the murine coronary system in 3 dimensions (3D) and using virtual reconstruction and volumetric analysis to precisely assess neointimal thickness. Mouse tissue samples, native hearts and transplanted hearts with chronic allograft vasculopathy, were collected and analyzed. Paraffin-embedded samples were serially sectioned, stained, and digitized using whole slide digital imaging techniques under normal and ultraviolet lighting. Sophisticated software tools were used to generate and manipulate 3D reconstructions of the major coronary arteries and branches. The 3D reconstruction provides not only accurate measurements but also exact volumetric data of vascular lesions. This virtual coronary arteriography demonstrates that the vasculopathy lesions in this model are localized to the proximal coronary segments. In addition, virtual rotation and volumetric analysis enabled more precise measurements of CAV than single, randomly oriented histologic sections, and offer an improved readout for this important experimental model. We believe 3D reconstruction of 2D histological slides will provide new insights into pathological mechanisms in which structural abnormalities play a role in the development of a disease. The techniques we describe are applicable to the analysis of arteries, veins, bronchioles, and similar-sized structures in a variety of tissue types and disease model systems. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/3772457541477230.
Handzel, Ophir; Wang, Haobing; Fiering, Jason; Borenstein, Jeffrey T.; Mescher, Mark J.; Leary Swan, Erin E.; Murphy, Brian A.; Chen, Zhiqiang; Peppi, Marcello; Sewell, William F.; Kujawa, Sharon G.; McKenna, Michael J.
2009-01-01
Temporal bone implants can be used to electrically stimulate the auditory nerve, to amplify sound, to deliver drugs to the inner ear and potentially for other future applications. The implants require storage space and access to the middle or inner ears. The most acceptable space is the cavity created by a canal wall up mastoidectomy. Detailed knowledge of the available space for implantation and pathways to access the middle and inner ears is necessary for the design of implants and successful implantation. Based on temporal bone CT scans a method for three-dimensional reconstruction of a virtual canal wall up mastoidectomy space is described. Using Amira® software the area to be removed during such surgery is marked on axial CT slices, and a three-dimensional model of that space is created. The average volume of 31 reconstructed models is 12.6 cm3 with standard deviation of 3.69 cm3, ranging from 7.97 to 23.25 cm3. Critical distances were measured directly from the model and their averages were calculated: height 3.69 cm, depth 2.43 cm, length above the external auditory canal (EAC) 4.45 cm and length posterior to EAC 3.16 cm. These linear measurements did not correlate well with volume measurements. The shape of the models was variable to a significant extent making the prediction of successful implantation for a given design based on linear and volumetric measurement unreliable. Hence, to assure successful implantation, preoperative assessment should include a virtual fitting of an implant into the intended storage space. The above-mentioned three-dimensional models were exported from Amira to a Solidworks application where virtual fitting was performed. Our results are compared to other temporal bone implant virtual fitting studies. Virtual fitting has been suggested for other human applications. PMID:19372649
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin
2013-01-01
One of the key challenges in three-dimensional (3D) medical imaging is enabling fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
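The block-volume decomposition described above can be illustrated with a minimal sketch (the function names and the toy thresholding kernel are hypothetical, not the platform's actual API): a volume is tiled into sub-blocks, each block is processed independently by a worker pool, and the partial results are merged.

```python
from concurrent.futures import ThreadPoolExecutor

def split_blocks(shape, block):
    """Yield (start, stop) extents of sub-blocks tiling a 3D volume;
    edge blocks shrink to fit (a simple form of size adaptivity)."""
    (nx, ny, nz), (bx, by, bz) = shape, block
    for x in range(0, nx, bx):
        for y in range(0, ny, by):
            for z in range(0, nz, bz):
                yield ((x, min(x + bx, nx)),
                       (y, min(y + by, ny)),
                       (z, min(z + bz, nz)))

def process_block(extent, volume):
    """Toy per-block kernel: count voxels above a threshold
    (a stand-in for a real step such as electronic cleansing)."""
    (x0, x1), (y0, y1), (z0, z1) = extent
    return sum(1 for x in range(x0, x1) for y in range(y0, y1)
               for z in range(z0, z1) if volume[x][y][z] > 0)

# toy 8x8x8 volume with a single bright voxel
vol = [[[0] * 8 for _ in range(8)] for _ in range(8)]
vol[3][4][5] = 1

blocks = list(split_blocks((8, 8, 8), (4, 4, 4)))
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(lambda b: process_block(b, vol), blocks))
print(len(blocks), sum(counts))  # 8 blocks, 1 voxel above threshold
```

Because each block touches a disjoint region of the volume, the per-block results can be reduced in any order, which is what makes the same decomposition usable on multi-core, cluster, and cloud back-ends.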
A virtual simulator designed for collision prevention in proton therapy.
Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho
2015-10-01
In proton therapy, collisions between the patient and the nozzle can occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using virtual treatment simulation. Three-dimensional (3D) modeling of the gantry inner floor, nozzle, and robotic couch was performed in SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D scanner was used immediately before CT scanning. Using the acquired images, a 3D image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined in the treatment-room coordinate system, resulting in a virtual simulator. The simulator reproduced the motion of its components, such as rotation and translation of the gantry, nozzle, and couch, at real scale. Collisions, if any, were examined in both static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D shape of the phantom agreed with CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and the clinical efficiency of proton therapy.
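The voxel-overlap test at the heart of such a simulator can be sketched as follows (a simplified illustration with made-up box geometry, not the actual simulator code): each component is voxelized, a motion step translates one voxel set, and a collision is flagged whenever the two sets intersect.

```python
def occupied_voxels(extent):
    """Voxelize an axis-aligned box given as ((x0,x1),(y0,y1),(z0,z1))."""
    (x0, x1), (y0, y1), (z0, z1) = extent
    return {(x, y, z) for x in range(x0, x1)
                      for y in range(y0, y1)
                      for z in range(z0, z1)}

def collides(part_a, part_b, offset=(0, 0, 0)):
    """Report a collision if any voxel of part_b, translated by `offset`
    (e.g. one step of a couch translation), overlaps part_a."""
    dx, dy, dz = offset
    moved = {(x + dx, y + dy, z + dz) for (x, y, z) in part_b}
    return bool(part_a & moved)

# toy geometry: a "nozzle" box and a "patient" box on a shared grid
nozzle = occupied_voxels(((0, 4), (0, 4), (0, 4)))
patient = occupied_voxels(((6, 9), (0, 4), (0, 4)))
print(collides(nozzle, patient))              # clear at the start position
print(collides(nozzle, patient, (-3, 0, 0)))  # collision as the couch moves in
```

In dynamic mode this check would simply be re-evaluated at every motion increment of the gantry, nozzle, or couch, with the first intersecting voxel reported as the collision point.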
Surgical results of cranioplasty using three-dimensional printing technology.
Cheng, Cheng-Hsin; Chuang, Hao-Yu; Lin, Hung-Lin; Liu, Chun-Lin; Yao, Chun-Hsu
2018-05-01
The aim of this research was to evaluate the surgical outcome of a new three-dimensional printing (3DP) technique using prefabricated molds and polymethyl methacrylate (PMMA). The study included 10 patients with large skull defects (>100 cm2) who underwent cranioplasty. The causes of the skull defects were trauma (6), bone resorption (2), tumor (1), and infection (1). Before the operation, computed tomography (CT) scans were used to create a virtual plan, which was then converted into three-dimensional (3-D) images. The field of the skull defect was blueprinted by the technicians and operators, and a prefabricated 3-D model was generated. During the operation, a PMMA implant was created using a prefabricated silicone rubber mold and fitted into the cranial defect. All patients were followed up for at least 2 years, and any complications after the cranioplasty were recorded. Only 1 patient suffered a complication, a subdural effusion 2 months after cranioplasty, which was successfully treated with a subdural peritoneal shunt. All patients satisfied the criteria for operative outcome and cosmetic effect. There were no episodes of infection or material rejection. The 3DP technology allowed precise, fast, and inexpensive craniofacial reconstruction. This technique may be beneficial for shortening the operation time (and thus reducing exposure time to general anesthesia, wound exposure time, and blood loss), enhancing preoperative evaluation, and simplifying the surgical procedure. Copyright © 2018 Elsevier B.V. All rights reserved.
Lim, Won Hee; Park, Eun Woo; Chae, Hwa Sung; Kwon, Soon Man; Jung, Hoi-In; Baek, Seung-Hak
2017-06-01
The purpose of this study was to compare the results of two-dimensional (2D) and three-dimensional (3D) measurements of the alveolar molding effect in patients with unilateral cleft lip and palate. The sample consisted of 23 unilateral cleft lip and palate infants treated with a nasoalveolar molding (NAM) appliance. Dental models were fabricated at the initial visit (T0; mean age, 23.5 days after birth) and after alveolar molding therapy (T1; mean duration, 83 days). For 3D measurement, virtual models were constructed using a laser scanner and 3D software. For 2D measurement, 1:1-ratio photographic images of the dental models were scanned. After setting common reference points and lines for the 2D and 3D measurements, 7 linear and 5 angular variables were measured at the T0 and T1 stages, respectively. The Wilcoxon signed rank test and Bland-Altman analysis were performed for statistical analysis. The alveolar molding effect of the maxilla following NAM treatment was inward bending of the anterior part of the greater segment, forward growth of the lesser segment, and a decrease in the cleft gap in the greater segment and lesser segment. Two angular variables (ΔACG-BG-PG and ΔACL-BL-PL) showed a difference in the statistical interpretation of the change by NAM treatment between the 2D and 3D measurements. However, Bland-Altman analysis did not exhibit a significant difference in the amounts of change in these variables between the 2 measurements. These results suggest that data from 2D measurement can be reliably used in conjunction with data from 3D measurement.
Splitting a colon geometry with multiplanar clipping
NASA Astrophysics Data System (ADS)
Ahn, David K.; Vining, David J.; Ge, Yaorong; Stelts, David R.
1998-06-01
Virtual colonoscopy, a recent three-dimensional (3D) visualization technique, has provided radiologists with a unique diagnostic tool. Using this technique, a radiologist can examine the internal morphology of a patient's colon by navigating through a surface-rendered model constructed from helical computed tomography image data. Virtual colonoscopy can be used to detect early forms of colon cancer in a way that is less invasive and less expensive than conventional endoscopy. However, the common approach of 'flying' through the colon lumen to visually search for polyps is tedious and time-consuming, especially when a radiologist loses his or her orientation within the colon. Furthermore, a radiologist's field of view is often limited by the 3D camera position inside the colon lumen. We have developed a new technique, called multi-planar geometry clipping, that addresses these problems. Our algorithm divides a complex colon anatomy into several smaller segments and then splits each of these segments in half for display on a static medium. Multi-planar geometry clipping eliminates virtual colonoscopy's dependence upon expensive, real-time graphics workstations by enabling radiologists to globally inspect the entire internal surface of the colon from a single viewpoint.
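The core geometric step, partitioning colon surface geometry with a clipping plane, could be sketched roughly as below (vertex classification only, with toy coordinates; the published algorithm would also need to split triangles that straddle the plane):

```python
def signed_distance(point, plane):
    """Signed distance of a point to a plane given as (normal, d): n·p - d."""
    (nx, ny, nz), d = plane
    x, y, z = point
    return nx * x + ny * y + nz * z - d

def split_mesh(vertices, plane):
    """Partition vertices into the two half-spaces of a clipping plane,
    so each half of a colon segment can be rendered and inspected separately."""
    near = [p for p in vertices if signed_distance(p, plane) >= 0]
    far = [p for p in vertices if signed_distance(p, plane) < 0]
    return near, far

# a toy "colon segment": points along a curve, split by the plane x = 1.5
pts = [(0, 0, 0), (1, 1, 0), (2, 1, 1), (3, 0, 1)]
near, far = split_mesh(pts, ((1, 0, 0), 1.5))
print(len(near), len(far))  # 2 2
```

Applying one such plane per segment, and a second plane to halve each segment lengthwise, yields the static "opened-up" views of the lumen described in the abstract.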
Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee
2012-05-01
Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to synchronize cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks, and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for real-time 3-D fluorescence visualization of the oral cavity. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.
Rewritable three-dimensional holographic data storage via optical forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yetisen, Ali K., E-mail: ayetisen@mgh.harvard.edu; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Montelongo, Yunuen
2016-08-08
The development of nanostructures that can be reversibly arranged and assembled into 3D patterns may enable optical tunability. However, current dynamic recording materials such as photorefractive polymers cannot be used to store information permanently while also retaining configurability. Here, we describe the synthesis and optimization of a silver nanoparticle doped poly(2-hydroxyethyl methacrylate-co-methacrylic acid) recording medium for reversibly recording 3D holograms. We theoretically and experimentally demonstrate organizing nanoparticles into 3D assemblies in the recording medium using optical forces produced by the gradients of standing waves. The nanoparticles in the recording medium are organized by multiple nanosecond laser pulses to produce reconfigurable slanted multilayer structures. We demonstrate the capability of producing rewritable optical elements such as multilayer Bragg diffraction gratings, 1D photonic crystals, and 3D multiplexed optical gratings. We also show that 3D virtual holograms can be reversibly recorded. This recording strategy may have applications in reconfigurable optical elements, data storage devices, and dynamic holographic displays.
High-Resolution Large-Field-of-View Three-Dimensional Hologram Display System and Method Thereof
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Mintz, Frederick W. (Inventor); Tsou, Peter (Inventor); Bryant, Nevin A. (Inventor)
2001-01-01
A real-time, dynamic, free-space virtual reality, 3-D image display system is enabled by using a unique form of Aerogel as the primary display medium. A preferred embodiment of this system comprises a 3-D mosaic topographic map displayed by fusing four projected hologram images. In this embodiment, four holographic images are projected from four separate holograms. Each holographic image subtends a quadrant of the 4π solid angle. By fusing these four holographic images, a static 3-D image such as a featured terrain map would be visible over 360 deg in the horizontal plane and 180 deg in the vertical plane. An input, either acquired by a 3-D image sensor or generated by computer animation, is first converted into a 2-D computer-generated hologram (CGH). This CGH is then downloaded into a large liquid crystal (LC) panel. A laser projector illuminates the CGH-filled LC panel and generates and displays a real 3-D image in the Aerogel matrix.
Rousian, M; Groenenberg, I A L; Hop, W C; Koning, A H J; van der Spek, P J; Exalto, N; Steegers, E A P
2013-08-01
The aim of our study was to evaluate first trimester cerebellar growth and development using 2 different measuring techniques: 3-dimensional (3D) and virtual reality (VR) ultrasound visualization. The cerebellar measurements were related to gestational age (GA) and crown-rump length (CRL). Finally, the reproducibility of both methods was tested. In a prospective cohort study, we collected 630 first trimester, serially obtained, 3D ultrasound scans of 112 uncomplicated pregnancies between 7 + 0 and 12 + 6 weeks of GA. Only scans with high-quality images of the posterior fossa were selected for analysis. Measurements were performed offline in the coronal plane using 3D (4D View) and VR (V-Scope) software. VR enables the observer to use all available dimensions in a data set by visualizing the volume as a "hologram." Total cerebellar diameter, left and right hemispheric diameter, and thickness were measured using both techniques. All measurements were performed 3 times, and means were used in repeated-measurements analysis. After exclusion criteria were applied, 177 (28%) 3D data sets were available for further analysis. The median GA was 10 + 0 weeks and the median CRL was 31.4 mm (range: 5.2-79.0 mm). The cerebellar parameters could be measured from 7 gestational weeks onward. The total cerebellar diameter increased from 2.2 mm at 7 weeks of GA to 13.9 mm at 12 weeks of GA using VR, and from 2.2 to 13.8 mm using 3D ultrasound. The reproducibility, established in a subset of 35 data sets, resulted in intraclass correlation coefficient values ≥0.98. It can be concluded that cerebellar measurements performed by the 2 methods are reproducible and comparable with each other. However, VR, which uses all three dimensions, provides a superior method for visualization of the cerebellum. The constructed reference values can be used to study normal and abnormal cerebellar growth and development.
de Kleijn, Bertram J; Kraeima, Joep; Wachters, Jasper E; van der Laan, Bernard F A M; Wedman, Jan; Witjes, M J H; Halmos, Gyorgy B
2018-02-01
We aimed to investigate the potential of 3D virtual planning of tracheostomy tube placement and 3D cannula design to prevent tracheostomy complications due to inadequate cannula position. 3D models of commercially available cannulae were positioned in 3D models of the airway. In study (1), a cohort that underwent tracheostomy between 2013 and 2015 was selected (n = 26). The cannula was virtually placed in the airway on the pre-operative CT scan, and its position was compared to the cannula position on post-operative CT scans. In study (2), a cohort with neuromuscular disease (n = 14) was analyzed. Virtual cannula placement was performed in CT scans, and we tested whether problems could be anticipated. Finally (3), for a patient with Duchenne muscular dystrophy and complications from a conventional tracheostomy cannula, a patient-specific cannula was 3D designed, fabricated, and placed. (1) The 3D planned and post-operative tracheostomy positions differed significantly. (2) Three groups of patients were identified: (A) normal anatomy; (B) abnormal anatomy, commercially available cannula fits; and (C) abnormal anatomy, custom-made cannula may be necessary. (3) The position of the custom-designed cannula was optimal, and the trachea healed. Virtual planning of the tracheostomy did not correlate with the actual cannula position. Identifying patients with abnormal airway anatomy, in whom commercially available cannulae cannot be optimally positioned, is advantageous. Patient-specific cannula design based on 3D virtualization of the airway was beneficial in a patient with abnormal airway anatomy.
Yu, Zhengyang; Zheng, Shusen; Chen, Huaiqing; Wang, Jianjun; Xiong, Qingwen; Jing, Wanjun; Zeng, Yu
2006-10-01
This research studies the process of dynamic concision and 3D reconstruction from medical body data using VRML and JavaScript, focusing on how to realize the dynamic concision of a 3D medical model built with VRML. The 2D medical digital images are first modified and manipulated with 2D image software. Then, based on these images, a 3D model is built with VRML and JavaScript. After programming in JavaScript to control the 3D model, the function of dynamic concision is realized through the Script and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be produced at a quality close to that obtained with traditional methods. In this way, with the function of dynamic concision, a VRML browser can offer better windows for real-time human-computer interaction than before. 3D reconstruction and dynamic concision with VRML can meet the requirements of medical observation of 3D reconstructions and has a promising prospect in the field of medical imaging.
Capacity of Heterogeneous Mobile Wireless Networks with D-Delay Transmission Strategy.
Wu, Feng; Zhu, Jiang; Xi, Zhipeng; Gao, Kai
2016-03-25
This paper investigates the capacity problem of heterogeneous wireless networks in mobility scenarios. A heterogeneous network model consisting of n normal nodes and m helping nodes is proposed. Moreover, we propose a D-delay transmission strategy to ensure that every packet can be delivered to its destination nodes within a limited delay. Different from most existing network schemes, our network model has a novel two-tier architecture. The existence of helping nodes greatly improves the network capacity. Four types of mobile networks are studied in this paper: the i.i.d. fast mobility model and slow mobility model in two-dimensional space, and the i.i.d. fast mobility model and slow mobility model in three-dimensional space. Using the virtual channel model, we present an intuitive analysis of the capacity of two-dimensional and three-dimensional mobile networks, respectively. Given a delay constraint D, we derive asymptotic expressions for the capacity of the four types of mobile networks. Furthermore, the impact of D and m on the capacity of the whole network is analyzed. Our findings provide guidance for the design of the next generation of networks.
NASA Astrophysics Data System (ADS)
Moussaoui, H.; Debayle, J.; Gavet, Y.; Delette, G.; Hubert, M.; Cloetens, P.; Laurencin, J.
2017-03-01
A strong correlation exists between the performance of Solid Oxide Cells (SOCs), working either in fuel cell or electrolysis mode, and their electrode microstructure. However, the basic relationships between the three-dimensional characteristics of the microstructure and the electrode properties are still not precisely understood. Thus, several studies have recently been proposed in an attempt to improve knowledge of such relations, which is essential before optimizing the microstructure and, hence, designing more efficient SOC electrodes. In that frame, an original model has been adapted to generate virtual 3D microstructures of typical SOC electrodes. Both the oxygen electrode, made of porous LSCF, and the hydrogen electrode, made of porous Ni-YSZ, have been studied. In this work, the synthetic microstructures are generated by a 3D Gaussian random field model. The morphological representativeness of the virtual porous media has been validated on real 3D electrode microstructures of a commercial cell, obtained by X-ray nano-tomography at the European Synchrotron Radiation Facility (ESRF). This validation step includes the comparison of morphological parameters, such as the phase covariance function and granulometry, as well as physical parameters such as the 'apparent tortuosity'. Finally, this validated tool will be used in forthcoming studies to identify the optimal microstructure of SOCs.
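A truncated Gaussian random field of the kind referred to above can be sketched in a few lines (a toy spectral construction with hypothetical parameters, not the authors' calibrated model): a stationary Gaussian field is synthesized as a sum of random cosine waves and then thresholded so that a target solid volume fraction is obtained.

```python
import math
import random

def gaussian_field(shape, n_modes=64, seed=0):
    """Approximate a stationary Gaussian random field on a 3D grid by
    superposing randomly oriented cosine waves (spectral method)."""
    rng = random.Random(seed)
    waves = []
    for _ in range(n_modes):
        # random wave vector and phase; the amplitude below normalizes
        # the sum to roughly unit variance
        k = [rng.gauss(0, 0.5) for _ in range(3)]
        phi = rng.uniform(0, 2 * math.pi)
        waves.append((k, phi))
    amp = math.sqrt(2.0 / n_modes)
    nx, ny, nz = shape
    return [[[amp * sum(math.cos(kx * x + ky * y + kz * z + phi)
                        for (kx, ky, kz), phi in waves)
              for z in range(nz)] for y in range(ny)] for x in range(nx)]

def threshold_phase(field, target_fraction):
    """Binarize the field so that roughly `target_fraction` of voxels
    are solid, by cutting at the matching empirical quantile."""
    flat = sorted(v for plane in field for row in plane for v in row)
    cut = flat[int(len(flat) * (1 - target_fraction))]
    return [[[1 if v >= cut else 0 for v in row] for row in plane]
            for plane in field]

phase = threshold_phase(gaussian_field((16, 16, 16)), 0.35)
solid = sum(v for plane in phase for row in plane for v in row)
print(round(solid / 16**3, 2))  # solid volume fraction, close to 0.35
```

The spatial correlation of the binary phase is inherited from the underlying field, which is what allows such models to be fitted to covariance functions measured on real tomography data.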
Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology
NASA Astrophysics Data System (ADS)
Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim
2016-09-01
Due to their large penetration depth and small wavelength, hard x-rays offer a unique potential for 3D biomedical and biological imaging, combining high resolution with large sample volumes. However, in classical absorption-based computed tomography, soft tissue shows only weak contrast, limiting the achievable resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free-space propagation behind the sample is particularly well suited to make the phase shift visible. Contrast formation is based on self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it had long been perceived as a synchrotron-based imaging technique. In this contribution we show that, by combining high-brightness liquid-metal-jet microfocus sources with suitable sample preparation techniques, as well as optimized geometry, detection, and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality is finally augmented to a level accessible to automatic 3D segmentation.
Collaborative Aerial-Drawing System for Supporting Co-Creative Communication
NASA Astrophysics Data System (ADS)
Osaki, Akihiro; Taniguchi, Hiroyuki; Miwa, Yoshiyuki
This paper describes a collaborative augmented reality (AR) system with which multiple users can simultaneously handwrite 3D lines in the air and manipulate the lines directly in the real world. In addition, we propose a new technique for co-creative communication utilizing this 3D drawing activity. To date, various 3D user interfaces have been proposed. Although most of them aim to solve specific problems in virtual environments, the possibilities of 3D drawing expression have not yet been fully explored. Accordingly, we paid special attention to interaction with real objects in daily life, and designed the system so that real objects and 3D lines can be manipulated without distinction, using the same actions. The developed AR system consists of a stereoscopic head-mounted display, a drawing tool, 6DOF sensors measuring three-dimensional position and Euler angles, and a 3D user interface that enables users to push, grasp, and pitch 3D lines directly with the drawing tool. Additionally, users can pick up a desired color from either the landscape or a virtual line through direct interaction with this tool. For sharing 3D lines among multiple users in the same place, a distributed AR system has been developed that mutually sends and receives drawing data between systems. With the developed system, users can design jointly in real space by arranging each 3D drawing through direct manipulation. Moreover, new entertainment applications become possible, such as playing catch, fencing, and the like.
Sato, Mitsuru; Tateishi, Kensuke; Murata, Hidetoshi; Kin, Taichi; Suenaga, Jun; Takase, Hajime; Yoneyama, Tomohiro; Nishii, Toshiaki; Tateishi, Ukihide; Yamamoto, Tetsuya; Saito, Nobuhito; Inoue, Tomio; Kawahara, Nobutaka
2018-06-26
The utility of surgical simulation with three-dimensional multimodality fusion imaging (3D-MFI) has been demonstrated, but its potential in deep-seated brain lesions remains unknown. The aim of this study was to investigate the impact of 3D-MFI in deep-seated meningioma operations. Fourteen patients with deeply located meningiomas were included. We constructed 3D-MFIs by fusing high-resolution magnetic resonance (MR) and computed tomography (CT) images with a rotational digital subtraction angiogram (DSA) in all patients, and the surgical procedure was simulated with 3D-MFI prior to the operation. To assess the impact on neurosurgical education, objective ratings of surgical simulation by 3D-MFI/virtual reality (VR) video were evaluated. To validate the quality of 3D-MFIs, intraoperative findings were compared, and the identification rate (IR) and positive predictive value (PPV) for tumor-feeding arteries and involved perforating arteries and veins were assessed. After surgical simulation with 3D-MFIs, near-total resection was achieved in 13 of 14 (92.9%) patients without neurological complications. 3D-MFIs significantly contributed to neurosurgical residents'/fellows' understanding of the surgical anatomy and optimal surgical view (p < .0001) and to learning how to preserve critical vessels (p < .0001) and resect tumors safely and extensively (p < .0001). The IR of 3D-MFI for tumor-feeding arteries and for perforating arteries and veins was 100% and 92.9%, respectively; the corresponding PPVs were 98.8% and 76.5%. 3D-MFI contributed to learning skull base meningioma surgery and provided high-quality identification of critical anatomical structures within or adjacent to deep-seated meningiomas. Thus, 3D-MFI is a promising educational and surgical planning tool for meningiomas in deep-seated regions.
Papafaklis, Michail I; Muramatsu, Takashi; Ishibashi, Yuki; Bourantas, Christos V; Fotiadis, Dimitrios I; Brilakis, Emmanouil S; Garcia-Garcia, Héctor M; Escaned, Javier; Serruys, Patrick W; Michalis, Lampros K
2018-03-01
Fractional flow reserve (FFR) has been established as a useful diagnostic tool. The distal coronary pressure to aortic pressure (Pd/Pa) ratio at rest is a simpler physiologic index but also requires the use of the pressure wire, whereas recently proposed virtual functional indices derived from coronary imaging require complex blood flow modelling and/or are time-consuming. Our aim was to test the diagnostic performance of virtual resting Pd/Pa using routine angiographic images and a simple flow model. Three-dimensional quantitative coronary angiography (3D-QCA) was performed in 139 vessels (120 patients) with intermediate lesions assessed by FFR. The resting Pd/Pa for each lesion was assessed by computational fluid dynamics. The discriminatory power of virtual resting Pd/Pa against FFR (reference: ≤0.80) was high (area under the receiver operator characteristic curve [AUC]: 90.5% [95% CI: 85.4-95.6%]). Diagnostic accuracy, sensitivity and specificity for the optimal virtual resting Pd/Pa cut-off (≤0.94) were 84.9%, 90.4% and 81.6%, respectively. Virtual resting Pd/Pa demonstrated superior performance (p<0.001) versus 3D-QCA %area stenosis (AUC: 77.5% [95% CI: 69.8-85.3%]). There was a good correlation between virtual resting Pd/Pa and FFR (r=0.69, p<0.001). Virtual resting Pd/Pa using routine angiographic data and a simple flow model provides fast functional assessment of coronary lesions without requiring the pressure-wire and hyperaemia induction. The high diagnostic performance of virtual resting Pd/Pa for predicting FFR shows promise for using this simple/fast virtual index in clinical practice. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
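The area under the ROC curve reported above can be computed directly from paired scores via the Mann-Whitney statistic. A minimal sketch in Python with entirely hypothetical values (the study's per-lesion data are not given here); since a lower resting Pd/Pa indicates disease, scores are negated so that higher means more diseased:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores above a randomly chosen
    negative one, with ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical lesions: positives are FFR <= 0.80; score = -Pd/Pa so
# that a lower resting Pd/Pa (more diseased) yields a higher score.
pos = [-0.88, -0.91, -0.93]   # diseased vessels
neg = [-0.96, -0.97, -0.92]   # non-diseased vessels
print(auc(pos, neg))
```

This rank-based formulation is equivalent to integrating the ROC curve, which is why it is the standard way to report discriminatory power for a continuous index such as virtual resting Pd/Pa.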
Wong, Kuan Yee; Esguerra, Roxanna Jean; Chia, Vanessa Ai Ping; Tan, Ying Han; Tan, Keson Beng Choon
2018-02-01
Prior studies have defined the accuracy of intraoral scanner (IOS) systems, but the accuracy of the digital static interocclusal registration function of these systems has not been reported. This study compared the three-dimensional (3D) accuracy of the digital static interocclusal registration of 3 IOS systems using the buccal bite scan function. The three IOS systems compared were the 3M True Definition Scanner (TDS), TRIOS Color (TRC), and CEREC AC with CEREC Omnicam (CER). Using each scanner, 7 scans (n = 7) of the mounted and articulated SLA master models were obtained. The measurement targets (SiN reference spheres and implant abutment analogs) were in the opposing models at the right (R), central (C), and left (L) regions and at abutments #26 and #36, respectively. A coordinate measuring machine with metrology software compared the physical and virtual targets to derive the global 3D linear distortion between the centroids of the respective target reference spheres and abutment analogs (dR_R, dR_C, dR_L, and dR_M) and the 2D distances between the pierce points of the abutment analogs (dX_M, dY_M, dZ_M), with 3 measurement repetitions for each scan. Mean 3D distortion ranged from -471.9 to 31.7 μm for dR_R, -579.0 to -87.0 μm for dR_C, -381.5 to 69.4 μm for dR_L, and -184.9 to -23.1 μm for dR_M. Mean 2D distortion ranged from -225.9 to 0.8 μm for dX_M, -130.6 to -126.1 μm for dY_M, and -34.3 to 26.3 μm for dZ_M. Significant differences were found for interarch distortions across the three systems. For dR_R and dR_L, all three test groups were significantly different, whereas for dR_C, TDS was significantly different from TRC and CER. For 2D distortion, significant differences were found for dX_M only. Interarch and global interocclusal distortions for the three IOS systems were significantly different; overall, TRC performed best and TDS worst.
The interarch (dR_R, dR_C, dR_L) and interocclusal (dX_M) distortions observed will affect the magnitude of occlusal contacts of restorations clinically. The final restoration may be either hyperoccluded or infraoccluded, requiring compensation during the CAD design stage or clinical adjustment at issue. © 2017 by the American College of Prosthodontists.
Gee, Carole T
2013-11-01
As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction.
Visualizing 3D data obtained from microscopy on the Internet.
Pittet, J J; Henn, C; Engel, A; Heymann, J B
1999-01-01
The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. The traditional presentation of static two-dimensional images of real-world objects on the limited medium of paper can now be replaced by interactive presentation in three dimensions. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching cube approach, allowing interactive isosurfacing. A second node performs three-dimensional (3D) texture-based volume rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library), a file format abstraction layer defining communication between a program and a data file. These technologies were developed in support of the BioImage project, which aims to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.
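At the core of the marching-cube approach mentioned above is a per-edge step: finding where the scalar field crosses the isovalue by linear interpolation between adjacent samples. A minimal 1D sketch of that step (illustrative only; the full algorithm also needs the cube-case lookup table to connect crossings into triangles):

```python
def edge_crossings(values, level):
    """Interpolated positions where a row of scalar samples crosses
    `level` -- the per-edge interpolation at the heart of marching
    cubes, shown in 1D for brevity."""
    pts = []
    for i in range(len(values) - 1):
        a, b = values[i], values[i + 1]
        if (a < level) != (b < level):   # sign change => isosurface here
            t = (level - a) / (b - a)    # linear interpolation along edge
            pts.append(i + t)
    return pts

# Three crossings of the 0.5 isovalue, at fractional grid positions.
print(edge_crossings([0.0, 1.0, 0.2, 0.8], 0.5))  # ~[0.5, 1.625, 2.5]
```

In 3D the same interpolation runs over the 12 edges of each voxel cube, which is what makes the technique fast enough for the real-time, interactive isosurfacing the abstract describes.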
Matta, Ragai-Edward; Bergauer, Bastian; Adler, Werner; Wichmann, Manfred; Nickenig, Hans-Joachim
2017-06-01
The use of a surgical template is a well-established method in advanced implantology. In addition to conventional fabrication, a computer-aided design and computer-aided manufacturing (CAD/CAM) workflow provides an opportunity to produce implant drilling templates with a three-dimensional printer. In order to transfer the virtual planning to the oral situation, a highly accurate surgical guide is needed. The aim of this study was to evaluate the impact of the fabrication method on three-dimensional accuracy. The same virtual planning, based on a scanned plaster model, was used to fabricate a conventional thermoformed and a three-dimensionally printed surgical guide for each of 13 patients (single-tooth implants). Both templates were acquired individually on the respective plaster model using an optical industrial white-light scanner (ATOS II, GOM mbH, Braunschweig, Germany), and the virtual datasets were superimposed. Using the three-dimensional geometry of the implant sleeve, the deviation between the two surgical guides was evaluated. The mean angular discrepancy was 3.479° (standard deviation, 1.904°) across the 13 patients. Concerning the three-dimensional position of the implant sleeve, the highest deviation was in the Z-axis, at 0.594 mm; the mean deviation of the Euclidean distance, dxyz, was 0.864 mm. Although the two fabrication methods delivered statistically significantly different templates, the deviations remained within a sub-millimeter range, and both methods are appropriate for clinical use. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system
NASA Astrophysics Data System (ADS)
Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo
2010-02-01
A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis method using a rectangular multiview camera system that is well suited to realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically, but can also employ three reference views, such as left, right, and bottom, for intermediate view synthesis. The proposed method first represents each reference view as a mesh and then finds the best disparity for each mesh element using stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. Each mesh is classified into foreground and background groups by disparity value and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.
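The "best disparity per mesh element via stereo matching" step can be illustrated with winner-take-all block matching on a single scanline. This is a hedged 1D sketch of the general technique, not the authors' implementation, using sum of absolute differences (SAD) as the matching cost:

```python
def best_disparity(left, right, x, window, max_d):
    """Winner-take-all disparity at column `x` of two scanlines via
    sum-of-absolute-differences (SAD) block matching, shown in 1D."""
    def sad(d):
        return sum(abs(left[x + k] - right[x + k - d])
                   for k in range(-window, window + 1))
    return min(range(max_d + 1), key=sad)

# A bright feature centered at column 4 of `left` appears at column 2
# of `right`, i.e. a true disparity of 2.
left  = [0, 0, 0, 5, 9, 5, 0, 0]
right = [0, 5, 9, 5, 0, 0, 0, 0]
print(best_disparity(left, right, 4, 1, 3))  # -> 2
```

Per-mesh-element matching in the paper plays the same role as the per-pixel search here, but amortizes one disparity over a whole mesh patch, which is what enables the later foreground/background classification and affine warping.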
NASA Astrophysics Data System (ADS)
Makhijani, Vinod B.; Przekwas, Andrzej J.
2002-10-01
This report presents the results of a DARPA/MTO Composite CAD Project aimed at developing a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio- and microfluidic devices and complete microsystems. The project began in July 1998 as a three-year team effort between CFD Research Corporation, the California Institute of Technology (CalTech), the University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant. The overall objective was to develop, validate, and demonstrate several applications of a user-configurable, VLSI-type, mixed-dimensionality software tool for the design of biomicrofluidic devices and integrated systems. The tool would provide high-fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system-level simulations, and mixed-dimensionality design. It would combine tools for layout and process fabrication, geometric modeling, and automated grid generation, with interfaces to EDA tools (e.g., Cadence) and MCAD tools (e.g., ProE).
Modeling liver physiology: combining fractals, imaging and animation.
Lin, Debbie W; Johnson, Scott; Hunt, C Anthony
2004-01-01
Physiological modeling of vascular and microvascular networks in several key human organ systems is critical for a deeper understanding of pharmacology and the effect of pharmacotherapies on disease. In the liver, as in the lung and the kidney, the morphology of the vascular and microvascular system plays a major role in functional capability. To understand liver function in the absorption and metabolism of food and drugs, one must examine morphology and physiology at both higher and lower levels of liver organization. We have developed validated, virtual, dynamic three-dimensional (3D) models of liver secondary units and primary units by combining several methods: three-dimensional rendering, fractals, and animation. We have also simulated particle dynamics in the liver secondary unit. The resulting models help researchers visualize and gain intuition about the results of in silico liver experiments.
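The abstract does not reproduce its algorithms, but the fractal component of such vascular models is typically a recursive branching rule: each vessel spawns children whose radius shrinks by a fixed ratio, terminating at a capillary scale. A minimal, hypothetical sketch of that idiom (the radii and ratio below are illustrative, not parameters from the study):

```python
def fractal_tree(radius, ratio, min_radius):
    """Count terminal vessels of a symmetric binary branching tree in
    which each vessel splits into two children whose radius shrinks
    by `ratio`, stopping at a capillary-scale `min_radius`."""
    if radius * ratio < min_radius:
        return 1  # terminal, capillary-level vessel
    child = fractal_tree(radius * ratio, ratio, min_radius)
    return 2 * child  # symmetric tree: both children are identical

# Halving the radius each generation from 1.0 down to a 0.1 cutoff
# gives three generations of splits: 2**3 terminal vessels.
print(fractal_tree(1.0, 0.5, 0.1))  # -> 8
```

A real model would attach geometry (lengths, angles, positions) to each branch for rendering and particle-transport simulation; the recursion above captures only the self-similar topology that makes fractals a compact description of vascular networks.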
ERIC Educational Resources Information Center
D' Alba, Adriana; Jones, Greg; Wright, Robert
2015-01-01
This paper discusses a study conducted in the fall of 2011 and the spring of 2012 which explored the use of existing 3D virtual environment technologies by bringing a selected permanent museum exhibit displayed at a museum located in central Mexico into an online 3Dimensional experience. Using mixed methods, the research study analyzed knowledge…
Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco
2009-01-01
3D imaging sensors for the acquisition of three-dimensional (3D) shapes have generated, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness, and flexibility. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their processing, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to applications in industry, heritage, medicine, and criminal investigation. PMID:22389618
Heritage House Maintenance Using 3D City Model Application Domain Extension Approach
NASA Astrophysics Data System (ADS)
Mohd, Z. H.; Ujang, U.; Liat Choon, T.
2017-11-01
Heritage houses are part of the architectural heritage of Malaysia and are highly valued. The Department of Heritage has made many efforts to preserve them, such as monitoring damage problems, which may be caused by wood decay, roof leakage, and exfoliation of walls. One initiative for maintaining and documenting these heritage houses is three-dimensional (3D) technology. 3D city models are now widely used by researchers for management and analysis. CityGML is the standard commonly used to exchange, store, and manage virtual 3D city models, covering both geometric and semantic information. It can also represent a 3D model at multiple scales in five levels of detail (LoDs), each serving a distinct function. An Application Domain Extension of CityGML was recently introduced and can be used for monitoring damage problems and recording the number of inhabitants of a house.
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is enabling the fast turn-around time often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that must be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supports task scheduling for efficient load distribution and balancing, and consists of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
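The scalability analysis above rests on Amdahl's law: with serial fraction s, the ideal speedup on n cores is 1 / (s + (1 - s)/n). A small sketch; the 0.99 parallel fraction below is illustrative (chosen because a 12-fold gain on 12 cores implies a very small serial fraction), not a figure from the study:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Ideal speedup when `parallel_fraction` of the work scales
    perfectly across `n_cores` and the remainder stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A 99%-parallel workload on 12 cores: close to, but below, 12x.
print(round(amdahl_speedup(0.99, 12), 2))  # -> 10.81
```

Pushing the same workload to 48 cores yields only roughly 33x, which is why the serial fraction dominates scalability projections as core counts grow.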
Application of reverse engineering in the medical industry.
NASA Astrophysics Data System (ADS)
Kaleev, A. A.; Kashapov, L. N.; Kashapov, N. F.; Kashapov, R. N.
2017-09-01
The purpose of this research is to develop, on the basis of existing analogs, a new design of ophthalmologic microsurgical tweezers using reverse engineering techniques. A virtual model was obtained with a Solutionix Rexcan 450 MP three-dimensional scanning system. The Geomagic Studio program was used to remove defects and inaccuracies from the resulting parametric model. A prototype of the finished model was produced on a Projet 6000 laser stereolithography system. The total time from the reverse engineering procedure to 3D printing of the prototype was 16 hours.
ERIC Educational Resources Information Center
Lin, Ming-Chao; Tutwiler, M. Shane; Chang, Chun-Yen
2011-01-01
This study investigated the relationship between the use of a three-dimensional Virtual Reality Learning Environment for Field Trip (3DVLE[subscript (ft)]) system and the achievement levels of senior high school earth science students. The 3DVLE[subscript (ft)] system was presented in two separate formats: Teacher Demonstrated Based and Student…
Micro-tomography based Geometry Modeling of Three-Dimensional Braided Composites
NASA Astrophysics Data System (ADS)
Fang, Guodong; Chen, Chenghua; Yuan, Shenggang; Meng, Songhe; Liang, Jun
2018-06-01
A tracking and recognition algorithm is proposed to automatically generate the irregular cross-sections and central paths of braid yarns within 3D braided composites from sets of high-resolution tomography images. Only the initial cross-sections of the braid yarns in a processed tomography image need to be calibrated manually as search templates. The virtual geometry of the 3D braided composite, including detailed geometric information such as braid yarn squeezing deformation, yarn distortion, and yarn path deviation, can then be reconstructed. The reconstructed geometry model reflects the changes in braid configuration during the solidification process. The geometric configuration and mechanical properties of the braided composites are analyzed using the reconstructed geometry model.
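Calibrating an initial cross-section as a "search template" suggests template matching between successive slices; a common choice for such matching is normalized cross-correlation (NCC). The following 1D sketch is an assumption about the general technique, not the authors' exact algorithm, and it assumes no candidate window is constant (which would make NCC undefined):

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def track(template, image):
    """Offset in `image` whose window best matches `template` -- a
    1D stand-in for following a yarn cross-section slice to slice."""
    w = len(template)
    return max(range(len(image) - w + 1),
               key=lambda i: ncc(template, image[i:i + w]))

print(track([1, 3, 1], [0, 0, 1, 3, 1, 0]))  # -> 2
```

Because NCC is invariant to brightness and contrast shifts, it tolerates the intensity variations typical of tomography slices better than raw difference measures would.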
Interactive 3D Models and Simulations for Nuclear Security Education, Training, and Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warner, David K.; Dickens, Brian Scott; Heimer, Donovan J.
By providing examples of products that have been produced in the past, the authors hope the audience will gain a more thorough understanding of 3D modeling tools, their potential applications, and the capabilities they can provide. The applications and capabilities of these tools are limited only by one's imagination. The future of three-dimensional models lies in the expansion into virtual reality, where one will experience a fully immersive first-person environment. The use of headsets and hand tools will allow students and instructors to have a more thorough spatial understanding of facilities and scenarios that they will encounter in the real world.
Kuric, Katelyn M; Harris, Bryan T; Morton, Dean; Azevedo, Bruno; Lin, Wei-Shao
2017-09-29
This clinical report describes a digital workflow using extraoral digital photographs and volumetric datasets from cone beam computed tomography (CBCT) imaging to create a 3-dimensional (3D), virtual patient with photorealistic appearance. In a patient with microstomia, hinge axis approximation, diagnostic casts simulating postextraction alveolar ridge profile, and facial simulation of prosthetic treatment outcome were completed in a 3D, virtual environment. The approach facilitated the diagnosis, communication, and patient acceptance of the treatment of maxillary and mandibular computer-aided design and computer-aided manufacturing (CAD-CAM) of immediate dentures at increased occlusal vertical dimension. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Creating Physical 3D Stereolithograph Models of Brain and Skull
Kelley, Daniel J.; Farhoud, Mohammed; Meyerand, M. Elizabeth; Nelson, David L.; Ramirez, Lincoln F.; Dempsey, Robert J.; Wolf, Alan J.; Alexander, Andrew L.; Davidson, Richard J.
2007-01-01
The human brain and skull are three-dimensional (3D) anatomical structures with complex surfaces. However, medical images are often two-dimensional (2D) and provide incomplete visualization of structural morphology. To overcome this loss in dimension, we developed and validated a freely available, semi-automated pathway to build 3D virtual reality (VR) and hand-held stereolithograph models. To evaluate whether surface visualization in 3D was more informative than in 2D, undergraduate students (n = 50) used the Gillespie scale to rate 3D VR and physical models of both a living patient-volunteer's brain and the skull of Phineas Gage, the historically famous railroad worker whose misfortune with a projectile tamping iron provided the first evidence of a structure-function relationship in the brain. Using our processing pathway, we successfully fabricated human brain and skull replicas and validated that the stereolithograph model preserved the scale of the VR model. Based on the Gillespie ratings, students indicated that the biological utility and quality of visual information at the surface of VR and stereolithograph models were greater than those of the 2D images from which they were derived. The method we developed can be used to create VR and stereolithograph 3D models from medical images of hard or soft tissue in living or preserved specimens. Compared to 2D images, VR and stereolithograph models provide an extra dimension that enhances both the quality of visual information and the utility of surface visualization in neuroscience and medicine. PMID:17971879
Office-Based Three-Dimensional Printing Workflow for Craniomaxillofacial Fracture Repair.
Elegbede, Adekunle; Diaconu, Silviu C; McNichols, Colton H L; Seu, Michelle; Rasko, Yvonne M; Grant, Michael P; Nam, Arthur J
2018-03-08
Three-dimensional printing of patient-specific models is being used in various aspects of craniomaxillofacial reconstruction. Printing is typically outsourced to off-site vendors, with the main disadvantages being increased cost and production time. Office-based 3-dimensional printing has been proposed as a means to reduce costs and delays but remains largely underused because of the perception among surgeons that it is futuristic, highly technical, and prohibitively expensive. The goal of this report is to demonstrate the feasibility and ease of incorporating in-office 3-dimensional printing into the standard workflow for facial fracture repair. Patients with complex mandible fractures requiring open repair were identified. Open-source software was used to create virtual 3-dimensional skeletal models of the initial injury pattern and then of the ideally reduced fractures, based on preoperative computed tomography (CT) scans. The virtual 3-dimensional skeletal models were then printed in our office using a commercially available 3-dimensional printer and bioplastic filament, and were used as templates to bend and shape the titanium plates subsequently used for intraoperative fixation. Average print time was 6 hours. Excluding the 1-time cost of the 3-dimensional printer ($2500, roughly the cost of a single commercially produced model), the average material cost to print 1 model mandible was $4.30. Postoperative CT imaging demonstrated precise, predicted reduction in all patients. Office-based 3-dimensional printing of skeletal models can be routinely used in the repair of facial fractures in an efficient and cost-effective manner.
Kim, Hong-Kyun; Moon, Sung-Chul; Lee, Shin-Jae; Park, Young-Seok
2012-05-01
The palatine rugae have been suggested as stable reference points for superimposing 3-dimensional virtual models before and after orthodontic treatment. We investigated 3-dimensional changes in the palatine rugae of children over 9 years. Complete dental stone casts were prepared biennially for 56 subjects (42 girls, 14 boys) aged 6 to 14 years. Using 3-dimensional laser scanning and reconstruction software, virtual casts were constructed. The medial and lateral points of the first 3 anterior rugae were defined as the 3-dimensional landmarks. The length of each ruga and the distances between the end points of the rugae were measured in virtual 3-dimensional space, and the changes over time were analyzed using the mixed-effect method for longitudinal data. There were slight increases in the linear measurements in the rugae areas (the lengths of the rugae and the distances between them) during the observation period. However, the increments were relatively small compared with the initial values and individual random variability. Although age affected the linear dimensions significantly, the effect was not clinically significant; the rugae were relatively stable. The use of the palatine rugae as reference points for superimposing and evaluating changes during orthodontic treatment is therefore thought to be possible, with special caution. Copyright © 2012 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hu, Chengliang; Amati, Giancarlo; Gullick, Nicola; Oakley, Stephen; Hurmusiadis, Vassilios; Schaeffter, Tobias; Penney, Graeme; Rhode, Kawal
2009-02-01
Knee arthroscopy is a minimally invasive procedure that is routinely carried out for the diagnosis and treatment of pathologies of the knee joint. A high level of expertise is required to carry out this procedure and therefore the clinical training is extensive. There are several reasons for this that include the small field of view seen by the arthroscope and the high degree of distortion in the video images. Several virtual arthroscopy simulators have been proposed to augment the learning process. One of the limitations of these simulators is the generic models that are used. We propose to develop a new virtual arthroscopy simulator that will allow the use of pathology-specific models with an increased level of photo-realism. In order to generate these models we propose to use registered magnetic resonance images (MRI) and arthroscopic video images collected from patients with a variety of knee pathologies. We present a method to perform this registration based on the use of a combined X-ray and MR imaging system (XMR). In order to validate our technique we carried out MR imaging and arthroscopy of a custom-made acrylic phantom in the XMR environment. The registration between the two modalities was computed using a combination of XMR and camera calibration, and optical tracking. Both two-dimensional (2D) and three-dimensional (3D) registration errors were computed and shown to be approximately 0.8 and 3 mm, respectively. Further to this, we qualitatively tested our approach using a more realistic plastic knee model that is used for the arthroscopy training.
NASA Astrophysics Data System (ADS)
Stanco, Filippo; Tanasi, Davide; Allegra, Dario; Milotta, Filippo Luigi Maria; Lamagna, Gioconda; Monterosso, Giuseppina
2017-01-01
This paper deals with a virtual anastylosis of a Greek Archaic statue from ancient Sicily and the development of a public outreach protocol for those with visual impairment or cognitive disabilities through the application of three-dimensional (3-D) printing and haptic technology. The case study consists of the marble head from Leontinoi in southeastern Sicily, acquired in the 18th century and later kept in the collection of the Museum of Castello Ursino in Catania, and a marble torso, retrieved in 1904 and since then displayed in the Archaeological Museum of Siracusa. Due to similar stylistic features, the two pieces can be dated to the end of the sixth century BC. Their association has been an open problem, largely debated by scholars who have based their hypotheses on comparisons between pictures; the reassembly of the two artifacts was never attempted. As a result, the importance of such an artifact, which could be the only intact Archaic statue of a kouros ever found in Greek Sicily, has not been fully grasped by the public, and the curatorial dissemination of the knowledge related to these artifacts has been based purely on photographic material. In response, the two objects have been 3-D scanned and virtually reassembled. The result has been shared digitally with the public via a web platform and, to increase accessibility for the public with physical or cognitive disabilities, copies of the reassembled statue have been 3-D printed and an interactive test with the 3-D model has been carried out with a haptic device.
Computation of Coupled Thermal-Fluid Problems in Distributed Memory Environment
NASA Technical Reports Server (NTRS)
Wei, H.; Shang, H. M.; Chen, Y. S.
2001-01-01
Thermal-fluid coupling problems are very important in aerospace and other engineering applications. Instead of analyzing heat transfer and fluid flow separately, this study merged two well-accepted engineering solution methods, SINDA for thermal analysis and FDNS for fluid flow simulation, into a unified multi-disciplinary thermal-fluid prediction method. A fully conservative patched-grid interface algorithm for arbitrary two-dimensional and three-dimensional geometry has been developed. A state-of-the-art parallel computing approach was used to couple SINDA and FDNS, with boundary conditions communicated through PVM (Parallel Virtual Machine) libraries. The thermal analysis performed by SINDA and the fluid flow calculated by FDNS are thus fully coupled to obtain steady-state or transient solutions. The natural convection between two thick-walled eccentric tubes was calculated, and the predicted results closely match the experimental data. A 3D rocket engine model and a real 3D SSME geometry were used to test the current model, and reasonable temperature fields were obtained.
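The partitioned coupling described above, in which a thermal solver and a fluid solver repeatedly exchange interface boundary conditions, can be illustrated with a toy sketch. The two "solvers" below are deliberately trivial (1D steady conduction and a convective wall balance); all material values, the under-relaxation factor, and the function names are invented for illustration and are not the SINDA/FDNS implementation, which communicates via PVM.

```python
# Toy sketch of partitioned thermal-fluid coupling: two 1D "solvers" exchange
# interface boundary conditions each iteration until the interface temperature
# converges. All numbers are illustrative, not from the study.

def solid_solver(t_interface):
    # Steady 1D conduction through a wall held at 400 K on the far side;
    # returns the heat flux (W/m^2) delivered to the fluid interface.
    k, dx = 50.0, 0.01            # conductivity (W/m-K), wall thickness (m)
    return k * (400.0 - t_interface) / dx

def fluid_solver(q_wall):
    # Convective balance at the wall: T_wall = T_inf + q / h.
    h, t_inf = 250.0, 300.0       # heat-transfer coeff (W/m^2-K), fluid temp (K)
    return t_inf + q_wall / h

def couple(t0=350.0, relax=0.05, tol=1e-6, max_iter=10000):
    t = t0
    for _ in range(max_iter):
        q = solid_solver(t)               # thermal side: flux from conduction
        t_new = fluid_solver(q)           # fluid side: wall temp from convection
        if abs(t_new - t) < tol:
            break
        t += relax * (t_new - t)          # under-relaxed interface update
    return t

print(round(couple(), 2))  # -> 395.24 (the consistent interface temperature)
```

The under-relaxation factor plays the same stabilizing role that interface relaxation plays in real partitioned coupling schemes: without it, a stiff exchange like this one diverges.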
Data representation for joint kinematics simulation of the lower limb within an educational context.
Van Sint Jan, Serge; Hilal, Isam; Salvia, Patrick; Sholukha, Victor; Poulet, Pascal; Kirokoya, Ibrahim; Rooze, Marcel
2003-04-01
Three-dimensional (3D) visualization is becoming increasingly frequent in both qualitative and quantitative biomechanical studies of anatomical structures involving multiple data sources (e.g., morphological data and kinematic data). For many years, this kind of experiment was limited to two-dimensional images due to a lack of accurate 3D data. However, recent progress in medical imaging and computer graphics has opened new perspectives. Indeed, new techniques allow the development of an interactive interface for the simulation of human motion combining data from both medical imaging (i.e., morphology) and biomechanical studies (i.e., kinematics). Fields of application include medical education, biomechanical research, and clinical research. This paper presents an experimental protocol for the development of anatomically realistic joint simulation within a pedagogical context. Results are shown for the lower limb. Extension to other joints is straightforward. This work is part of the Virtual Animation of the Kinematics of the Human project (VAKHUM) (http://www.ulb.ac.be/project/vakhum).
Yao, Shujing; Zhang, Jiashu; Zhao, Yining; Hou, Yuanzheng; Xu, Xinghua; Zhang, Zhizhong; Kikinis, Ron; Chen, Xiaolei
2018-05-01
To address the feasibility and predictive value of multimodal image-based virtual reality in detecting and assessing features of neurovascular confliction (NVC), particularly the detection of offending vessels and the degree of compression exerted on the nerve root, in patients who underwent microvascular decompression for nonlesional trigeminal neuralgia and hemifacial spasm (HFS). This prospective study includes 42 consecutive patients who underwent microvascular decompression for classic primary trigeminal neuralgia or HFS. All patients underwent preoperative 1.5-T magnetic resonance imaging (MRI) with T2-weighted three-dimensional (3D) sampling perfection with application-optimized contrasts by using different flip angle evolutions, 3D time-of-flight magnetic resonance angiography, and 3D T1-weighted gadolinium-enhanced sequences in combination, whereas 2 patients underwent additional experimental preoperative 7.0-T MRI scans with the same imaging protocol. Multimodal MRIs were then coregistered with the open-source software 3D Slicer, followed by 3D image reconstruction to generate virtual reality (VR) images for detection of possible NVC in the cerebellopontine angle. Evaluations were performed by 2 reviewers and compared with the intraoperative findings. For detection of NVC, multimodal image-based VR sensitivity was 97.6% (40/41) and specificity was 100% (1/1). Compared with the intraoperative findings, the κ coefficients for predicting the offending vessel and the degree of compression were >0.75 (P < 0.001). The 7.0-T scans provided a clearer view of vessels in the cerebellopontine angle, which may have a significant impact on the detection of small-caliber offending vessels with relatively slow flow speed in cases of HFS.
Multimodal image-based VR using 3D sampling perfection with application-optimized contrasts by using different flip angle evolutions in combination with 3D time-of-flight magnetic resonance angiography sequences proved to be reliable in detecting NVC and in predicting the degree of root compression. The VR image-based simulation correlated well with the real surgical view. Copyright © 2018 Elsevier Inc. All rights reserved.
Attardi, Stefanie M; Barbeau, Michele L; Rogers, Kem A
2018-03-01
An online section of a face-to-face (F2F) undergraduate (bachelor's level) anatomy course with a prosection laboratory was offered in 2013-2014. Lectures for F2F students (353) were broadcast to online students (138) using Blackboard Collaborate (BBC) virtual classroom. Online laboratories were offered using BBC and three-dimensional (3D) anatomical computer models. This iteration of the course was modified from the previous year to improve online student-teacher and student-student interactions. Students were divided into laboratory groups that rotated through virtual breakout rooms, giving them the opportunity to interact with three instructors. The objectives were to assess student performance outcomes, perceptions of student-teacher and student-student interactions, methods of peer interaction, and helpfulness of the 3D computer models. Final grades were statistically identical between the online and F2F groups. There were strong, positive correlations between incoming grade average and final anatomy grade in both groups, suggesting prior academic performance, and not delivery format, predicts anatomy grades. Quantitative student perception surveys (273 F2F; 101 online) revealed that both groups agreed they were engaged by teachers, could interact socially with teachers and peers, and ask them questions in both the lecture and laboratory sessions, though agreement was significantly greater for the F2F students in most comparisons. The most common methods of peer communication were texting, Facebook, and meeting F2F. The perceived helpfulness of the 3D computer models improved from the previous year. While virtual breakout rooms can be used to adequately replace traditional prosection laboratories and improve interactions, they are not equivalent to F2F laboratories. Anat Sci Educ. © 2018 American Association of Anatomists.
Buck, Ursula; Naether, Silvio; Braun, Marcel; Thali, Michael
2008-09-18
Non-invasive documentation methods such as surface scanning and radiological imaging are gaining in importance in the forensic field. These three-dimensional technologies provide digital 3D data, which are processed and handled in the computer. However, the sense of touch is lost in the virtual approach. A haptic device enables the use of the sense of touch to handle and feel digital 3D data. The multifunctional application of a haptic device for forensic approaches is evaluated and illustrated in three different cases: the non-invasive representation of bone fractures of the lower extremities caused by traffic accidents; the comparison of bone injuries with the presumed injury-inflicting instrument; and, in a gunshot case, the identification of the gun by the muzzle imprint and the reconstruction of the holding position of the gun. The 3D models of the bones are generated from Computed Tomography (CT) images. The 3D models of the exterior injuries, the injury-inflicting tools, and the bone injuries, where a higher resolution is necessary, are created by optical surface scanning. The haptic device is used in combination with the software FreeForm Modelling Plus to touch the surface of the 3D models, to feel minute injuries and the surfaces of tools, to reposition displaced bone parts, and to compare an injury-causing instrument with an injury. Repositioning 3D models in a reconstruction is easier, faster, and more precise using the sense of touch and user-friendly movement in 3D space. For representation purposes, the fracture lines of bones are coloured. This work demonstrates that the haptic device is a suitable and efficient tool in forensic science, offering a new way of handling digital data in virtual 3D space.
Projecting 2D gene expression data into 3D and 4D space.
Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D
2007-04-01
Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture-map images of gene expression data onto b-spline-based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images to the three dimensions (X, Y, and Z) of a b-spline model. B-spline model frameworks were built either from confocal data or extracted de novo from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize OpenGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.
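The core idea of UV mapping, associating a 2D (u, v) coordinate in an image with a 3D (x, y, z) point on a surface, can be sketched minimally. The example below uses bilinear interpolation over a grid of control points rather than the b-spline surfaces and game-authoring tools described in the abstract; the function name and the flat test patch are invented for illustration.

```python
import numpy as np

# Minimal sketch of UV mapping: a (u, v) pair in [0, 1]^2 is mapped to a 3D
# surface point via bilinear interpolation over a control-point grid.
# (Illustrative only; the paper uses b-spline surfaces.)

def uv_to_xyz(u, v, grid):
    """grid: (rows, cols, 3) array of 3D surface control points."""
    rows, cols, _ = grid.shape
    # Map (u, v) into fractional grid indices.
    fu, fv = u * (cols - 1), v * (rows - 1)
    i0, j0 = int(fv), int(fu)
    i1, j1 = min(i0 + 1, rows - 1), min(j0 + 1, cols - 1)
    a, b = fv - i0, fu - j0
    # Bilinear blend of the four surrounding control points.
    return ((1 - a) * (1 - b) * grid[i0, j0] + (1 - a) * b * grid[i0, j1]
            + a * (1 - b) * grid[i1, j0] + a * b * grid[i1, j1])

# A flat 2x2 patch spanning the unit square at z = 0.
patch = np.array([[[0, 0, 0], [1, 0, 0]],
                  [[0, 1, 0], [1, 1, 0]]], dtype=float)
print(uv_to_xyz(0.5, 0.5, patch))  # centre of the patch -> [0.5 0.5 0. ]
```

Because every texel of the expression image receives a surface point this way, two images mapped onto the same model frame are automatically brought into a common coordinate system, which is what enables the cross-image comparisons mentioned above.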
Development of three-dimensional memory (3D-M)
NASA Astrophysics Data System (ADS)
Yu, Hong-Yu; Shen, Chen; Jiang, Lingli; Dong, Bin; Zhang, Guobiao
2016-10-01
Since the invention of 3-D ROM in 1996, three-dimensional memory (3D-M) has been under development for nearly two decades. In this presentation, we'll review the 3D-M history and compare different 3D-Ms (including 3D-OTP from Matrix Semiconductor, 3D-NAND from Samsung and 3D-XPoint from Intel/Micron).
Bourantas, Christos V; Kalatzis, Fanis G; Papafaklis, Michail I; Fotiadis, Dimitrios I; Tweddel, Ann C; Kourtis, Iraklis C; Katsouras, Christos S; Michalis, Lampros K
2008-08-01
The development of an automated, user-friendly system (ANGIOCARE) for rapid three-dimensional (3D) coronary reconstruction, integrating angiographic and intracoronary ultrasound (ICUS) data. Biplane angiographic and ICUS sequence images are imported into the system, where a prevalidated method is used for coronary reconstruction. This incorporates extraction of the catheter path from two end-diastolic X-ray images and detection of regions of interest (lumen, outer vessel wall) in the ICUS sequence by an automated border-detection algorithm. The detected borders are placed perpendicular to the catheter path, and established algorithms are used to estimate their absolute orientation. The resulting 3D object is imported into an advanced visualization module with which the operator can interact, examine plaque distribution (depicted as a color-coded map), and assess plaque burden by virtual endoscopy. Data from 19 patients (27 vessels) undergoing biplane angiography and ICUS were examined. The reconstructed vessels were 21.3-80.2 mm long. The mean difference was 0.9 +/- 2.9% between the plaque volumes measured using linear 3D ICUS analysis and the volumes estimated by taking the curvature of the vessel into account. The time required to reconstruct a luminal narrowing of 25 mm was approximately 10 min. The ANGIOCARE system provides rapid coronary reconstruction, allowing the operator to accurately estimate the length of the lesion and determine plaque distribution and volume. (c) 2008 Wiley-Liss, Inc.
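The plaque-volume comparison above rests on a simple volumetric step: integrating the difference between outer-wall and lumen cross-sectional areas along the catheter path. A disc-summation sketch is shown below; the per-frame areas and spacing are invented, and the real system derives its areas from the automated border-detection algorithm, not from hand-entered values.

```python
# Illustrative sketch of the volumetric step: plaque volume estimated by
# summing (outer-wall area - lumen area) over ICUS frames spaced along the
# catheter path. Values are invented for illustration.

def plaque_volume(outer_areas_mm2, lumen_areas_mm2, spacing_mm):
    """Disc-summation plaque volume (mm^3) from per-frame areas."""
    return sum((outer - lumen) * spacing_mm
               for outer, lumen in zip(outer_areas_mm2, lumen_areas_mm2))

outer = [12.0, 13.5, 14.0, 13.0]   # outer vessel-wall area per frame (mm^2)
lumen = [6.0, 5.5, 5.0, 6.0]       # lumen area per frame (mm^2)
print(plaque_volume(outer, lumen, spacing_mm=0.5))  # -> 15.0
```

The "linear" analysis in the abstract corresponds to using a constant frame spacing along a straightened pullback, whereas the curvature-aware estimate spaces the discs along the reconstructed 3D catheter path; the reported 0.9% mean difference is between those two conventions.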
The three-dimensional Event-Driven Graphics Environment (3D-EDGE)
NASA Technical Reports Server (NTRS)
Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.
1993-01-01
Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA Goddard Space Flight Center's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.
Mladenović, Milan; Patsilinakos, Alexandros; Pirolli, Adele; Sabatino, Manuela; Ragno, Rino
2017-04-24
Monoamine oxidase B (MAO B) catalyzes the oxidative deamination of arylalkylamine neurotransmitters with concomitant reduction of oxygen to hydrogen peroxide. Consequently, the enzyme's malfunction can induce oxidative damage to mitochondrial DNA and mediate the development of Parkinson's disease. Thus, MAO B emerges as a promising target for developing pharmaceuticals potentially useful to treat this neurodegenerative condition. Aiming to contribute to the development of drugs with a purely reversible mechanism of MAO B inhibition, herein an extended in silico-in vitro procedure for the selection of novel MAO B inhibitors is demonstrated, including the following: (1) definition of optimized and validated structure-based three-dimensional (3-D) quantitative structure-activity relationship (QSAR) models derived from available cocrystallized inhibitor-MAO B complexes; (2) elaboration of SAR features for either irreversible or reversible MAO B inhibitors to characterize and improve coumarin-based inhibitor activity (Protein Data Bank ID: 2V61) as the most potent reversible lead compound; (3) definition of structure-based (SB) and ligand-based (LB) alignment rule assessments by which virtually any untested potential MAO B inhibitor might be evaluated; (4) predictive-ability validation of the best 3-D QSAR model through SB/LB modeling of four coumarin-based external test sets (267 compounds); (5) design and SB/LB alignment of novel coumarin-based scaffolds experimentally validated through synthesis and biological evaluation in vitro. Due to the wide range of molecular diversity within the 3-D QSAR training set and derived features, the selected N probe-derived 3-D QSAR model proves to be a valuable tool for virtual screening (VS) of novel MAO B inhibitors and a platform for the design, synthesis, and evaluation of novel active structures.
Accordingly, six highly active and selective MAO B inhibitors (picomolar to low-nanomolar range of activity) were disclosed as a result of rational SB/LB 3-D QSAR design; therefore, D123 (IC50 = 0.83 nM, Ki = 0.25 nM) and D124 (IC50 = 0.97 nM, Ki = 0.29 nM) are potential lead candidates as anti-Parkinson's drugs.
NASA Astrophysics Data System (ADS)
Tejeda-Sánchez, C.; Muñoz-Nieto, A.; Rodríguez-Gonzálvez, P.
2018-05-01
Visualization and analysis are usually the final steps in Geomatics. This paper shows the workflow followed to set up a hybrid 3D archaeological viewer. Data acquisition for the site survey was done by means of low-cost, close-range photogrammetric methods. With the aim of satisfying not only the general public but also technicians, a large set of Geomatic products has been obtained (2D plans, 3D models, orthophotos, CAD models derived from vectorization, virtual anastylosis, and cross-sections). Finally, all these products have been integrated into a three-dimensional archaeological information system. The hybrid archaeological viewer allows a metric, quality-oriented approach to the scientific analysis of the ruins; thanks to the implementation of a database and its query capabilities, it improves on the benefits of an ordinary topographic survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, M; Jung, H; Kim, G
2014-06-01
Purpose: To estimate the three-dimensional dose distributions in a polymer gel and a radiochromic gel by comparison with a virtual water phantom exposed to proton beams, using Monte Carlo simulation. Methods: The polymer gel dosimeter is a composite material of gelatin, methacrylic acid, hydroquinone, tetrakis, and distilled water. The radiochromic gel is the PRESAGE product. The densities of the polymer and radiochromic gels were 1.040 and 1.0005 g/cm3, respectively. The water phantom was a hexahedron of 13 × 13 × 15 cm3. Proton beam energies of 72 and 116 MeV were used in the simulation. The proton beam was directed at the top of the phantom along the Z-axis, with a square cross-section of 10 × 10 cm2. The percent depth dose and the dose distribution were evaluated to estimate the dose distribution of protons in the two gel dosimeters and compared with the virtual water phantom. Results: The Bragg peak for protons in the two gel dosimeters was similar to that in the virtual water phantom. The Bragg-peak regions of the polymer gel, the radiochromic gel, and the virtual water phantom coincided at the same depth (4.3 cm) for the 72 MeV proton beam. For the 116 MeV proton beam, the Bragg-peak depths of the polymer gel, radiochromic gel, and virtual water phantom were 9.9, 9.9, and 9.7 cm, respectively. The dose distributions of protons in the polymer gel, radiochromic gel, and virtual water phantom were approximately identical at both 72 and 116 MeV. The simulation errors were under 10%. Conclusion: This work evaluates three-dimensional dose distributions from proton exposure of polymer and radiochromic gel dosimeters by comparison with a water phantom. The polymer gel and the radiochromic gel dosimeter show dose distributions similar to water for the proton beams.
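The depth comparisons above reduce to a simple operation on depth-dose curves: normalize to the maximum and find the depth of that maximum (the Bragg peak). The sketch below uses a crude synthetic profile with a sharp peak placed at 4.3 cm, echoing the 72 MeV case; the curve and function names are invented and are not the study's Monte Carlo data.

```python
# Toy sketch: locating the Bragg peak from a depth-dose curve, as done when
# comparing gel dosimeters against the virtual water phantom. The profile
# below is synthetic, not the Monte Carlo data from the study.

def bragg_peak_depth(depths_cm, doses):
    """Return the depth at which the dose is maximal (the Bragg peak)."""
    peak_index = max(range(len(doses)), key=doses.__getitem__)
    return depths_cm[peak_index]

def percent_depth_dose(doses):
    """Normalize a dose profile to its maximum (PDD, in percent)."""
    d_max = max(doses)
    return [100.0 * d / d_max for d in doses]

# Synthetic profile: a slowly rising plateau, a sharp peak at 4.3 cm,
# and a steep distal falloff to zero.
depths = [i * 0.1 for i in range(60)]                 # 0.0 .. 5.9 cm
doses = [(30 + 2 * d) if d < 4.3 else 0 for d in depths]
doses[43] = 100.0                                     # the Bragg peak

print(round(bragg_peak_depth(depths, doses), 1))      # -> 4.3
```

Comparing dosimeters then amounts to checking that the peak depths and the normalized PDD curves agree within tolerance, which is what the reported 4.3 cm and 9.9 vs. 9.7 cm figures summarize.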
NASA Astrophysics Data System (ADS)
Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.
2016-03-01
Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system, as well as navigating based on the 2D projection images, can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass-pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass-pipe views is proposed, where the virtual endoscopic camera position is determined from the device tip location as well as the previous camera position using a Kalman filter, in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass-pipe view to further improve spatial orientation.
The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
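The camera-smoothing step mentioned above, blending each new device-tip location with the previous camera estimate via a Kalman filter, can be sketched with a simple constant-position filter. The gains, noise values, and test data below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Minimal sketch of Kalman-filtered camera placement: a constant-position
# Kalman filter blends each new device-tip observation with the previous
# virtual-camera estimate, yielding a smooth camera trajectory.
# Gains and noise values are illustrative, not those of the paper.

def kalman_smooth(observations, q=0.01, r=1.0):
    """observations: (N, 3) noisy tip locations; returns the filtered path."""
    x = np.asarray(observations[0], dtype=float)      # initial camera state
    p = 1.0                                           # scalar state variance
    path = [x.copy()]
    for z in observations[1:]:
        p = p + q                                     # predict: variance grows
        k = p / (p + r)                               # Kalman gain
        x = x + k * (np.asarray(z, dtype=float) - x)  # move toward measurement
        p = (1.0 - k) * p                             # update variance
        path.append(x.copy())
    return np.array(path)

# Noisy observations of a near-stationary tip at [1, 2, 3].
rng = np.random.default_rng(0)
obs = np.array([1.0, 2.0, 3.0]) + rng.normal(0.0, 0.5, size=(50, 3))
path = kalman_smooth(obs)
# The filtered estimate settles close to the true tip position,
# suppressing the frame-to-frame jitter of the raw measurements.
print(path[-1])
```

The small process noise `q` is what makes the camera reluctant to chase measurement jitter; raising it would make the camera more responsive but less smooth.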
Andolfi, Ciro; Plana, Alejandro; Kania, Patrick; Banerjee, P Pat; Small, Stephen
2017-05-01
Imaging has a critical impact on surgical decision making, and three-dimensional (3D) digital models of patient pathology can now be made commercially. We developed a 3D digital model of a cancer of the head of the pancreas by integrating actual CT data with a 3D modeling process. After this process, the virtual pancreatic model was also produced using a high-quality 3D printer. A 56-year-old female with pancreatic head adenocarcinoma presented with biliary obstruction and jaundice. The CT scan showed a borderline resectable tumor with clear involvement of the gastroduodenal artery but doubtful relationships with the hepatic artery. Our team, in collaboration with the Immersive Touch team, used multiple series from the CT and segmented the relevant anatomy to understand the physical location of the tumor. An STL file was then developed and printed. Reconstructing and compositing the different series together enhanced the imaging, which allowed clearer observation of the relationship between the mass and the blood vessels, and showed that the tumor was unresectable. Data files were converted for printing a full-size (100%) model, used for didactic purposes and to discuss with the patient. This study showed that (1) reconstruction enhanced traditional imaging by merging and modeling different series together for a 3D view with diverse angles and transparency, allowing the observation of previously unapparent anatomical details; (2) with this new technology surgeons and residents can preview their planned surgical intervention, explore the patient-specific anatomy, and sharpen their procedure choices; (3) high-quality 3D-printed models are increasingly useful not only in the clinical realm but also for personalized patient education.
Three-Dimensional Audio Client Library
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.
2005-01-01
The Three-Dimensional Audio Client Library (3DAudio library) is a group of software routines written to facilitate development of both stand-alone (audio only) and immersive virtual-reality application programs that utilize three-dimensional audio displays. The library is intended to enable the development of three-dimensional audio client application programs by use of a code base common to multiple audio server computers. The 3DAudio library calls vendor-specific audio client libraries and currently supports the AuSIM Gold-Server and Lake Huron audio servers. 3DAudio library routines contain common functions for (1) initiation and termination of a client/audio server session, (2) configuration-file input, (3) positioning functions, (4) coordinate transformations, (5) audio transport functions, (6) rendering functions, (7) debugging functions, and (8) event-list-sequencing functions. The 3DAudio software is written in the C++ programming language and currently operates under the Linux, IRIX, and Windows operating systems.
NALDB: nucleic acid ligand database for small molecules targeting nucleic acid
Kumar Mishra, Subodh; Kumar, Amit
2016-01-01
Nucleic acid ligand database (NALDB) is a unique database that provides detailed experimental data on small molecules reported to target several types of nucleic acid structures. NALDB is the first ligand database that contains ligand information for all types of nucleic acid. NALDB contains more than 3500 ligand entries with detailed pharmacokinetic and pharmacodynamic information such as target name, target sequence, ligand 2D/3D structure, SMILES, molecular formula, molecular weight, net formal charge, AlogP, number of rings, number of hydrogen-bond donors and acceptors, and potential energy, along with Ki, Kd, and IC50 values. All these details on a single platform should be helpful for the development and improvement of novel ligands targeting nucleic acids, which could serve as potential targets in different diseases including cancers and neurological disorders. With a maximum of 255 conformers for each ligand entry, the database is a multi-conformer database and can facilitate the virtual screening process. NALDB provides powerful web-based search tools that make database searching efficient and simple, with options for text as well as structure queries. NALDB also provides a multi-dimensional advanced search tool that can screen the database molecules on the basis of ligand molecular properties specified by database users. A 3D structure visualization tool has also been included for 3D structure representation of ligands. NALDB offers comprehensive pharmacological information and a structurally flexible set of small molecules with their three-dimensional conformers, which can accelerate virtual screening and other modeling processes and eventually complement nucleic acid-based drug discovery research. NALDB is routinely updated and freely available at bsbe.iiti.ac.in/bsbe/naldb/HOME.php. Database URL: http://bsbe.iiti.ac.in/bsbe/naldb/HOME.php PMID:26896846
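The property-based "advanced search" described above is, at its core, a range filter over per-ligand molecular descriptors. The sketch below shows that kind of screen over a tiny invented record set; the records, field names, and threshold values are all hypothetical and are not NALDB's actual schema or API.

```python
# Hypothetical sketch of a property-based screen like NALDB's advanced
# search: filtering ligand records by molecular-property ranges.
# Records, field names, and thresholds are invented for illustration.

ligands = [
    {"name": "L1", "mol_weight": 342.4, "alogp": 2.1, "hbd": 2, "hba": 5},
    {"name": "L2", "mol_weight": 612.7, "alogp": 6.3, "hbd": 6, "hba": 11},
    {"name": "L3", "mol_weight": 289.3, "alogp": 1.4, "hbd": 1, "hba": 4},
]

def screen(records, **ranges):
    """Keep records whose properties fall inside every (low, high) range."""
    def ok(rec):
        return all(low <= rec[prop] <= high
                   for prop, (low, high) in ranges.items())
    return [rec["name"] for rec in records if ok(rec)]

# Drug-likeness-style filter: MW <= 500, AlogP <= 5, HBD <= 5, HBA <= 10.
hits = screen(ligands,
              mol_weight=(0, 500), alogp=(-5, 5), hbd=(0, 5), hba=(0, 10))
print(hits)  # -> ['L1', 'L3']
```

In a real workflow the survivors of such a filter, together with their stored conformer sets, would feed the docking or shape-matching stage of a virtual screen.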
Vemuri, Anant S; Wu, Jungle Chi-Hsiang; Liu, Kai-Che; Wu, Hurng-Sheng
2012-12-01
Surgical procedures have undergone considerable advancement during the last few decades. More recently, the availability of intraoperative imaging methods has added a new dimension to minimally invasive techniques. Augmented reality in surgery has been a topic of intense interest and research. Augmented reality involves the use of computer vision algorithms on video from endoscopic cameras, or from cameras mounted in the operating room, to provide the surgeon additional information that he or she otherwise would have to recognize intuitively. One such technique combines a virtual preoperative model of the patient with the endoscope camera, using natural or artificial landmarks, to provide an augmented-reality view in the operating room. The authors' approach is to provide this with the fewest possible changes to the operating room. A software architecture is presented to provide interactive adjustment of the registration between a three-dimensional (3D) model and endoscope video. Augmented reality was used to perform 12 surgeries, including adrenalectomy and procedures for ureteropelvic junction obstruction, retrocaval ureter, and pancreatic disease. The general feedback from the surgeons has been very positive, not only for deciding the insertion points but also for recognizing changes in anatomy. The approach involves providing a deformable 3D model architecture and its application to the operating room. A 3D model with a deformable structure is needed to show the shape change of soft tissue during surgery. The software architecture provides interactive adjustment of the registration between the 3D model and the endoscope video, with adjustability of every 3D model.
Murayama, Tomonori; Nakajima, Jun
2016-01-01
Anatomical segmentectomies play an important role in oncological lung resection, particularly for ground-glass types of primary lung cancers. This operation can also be applied to metastatic lung tumors deep in the lung. Virtual assisted lung mapping (VAL-MAP) is a novel technique that allows for bronchoscopic multi-spot dye markings to provide “geometric information” on the lung surface, using three-dimensional virtual images. In addition to wedge resections, VAL-MAP has been found to be useful in thoracoscopic segmentectomies, particularly complex segmentectomies such as combined subsegmentectomies or extended segmentectomies. There are five steps in VAL-MAP-assisted segmentectomies: (I) “standing” stitches along the resection lines; (II) cleaning the hilar anatomy; (III) confirming the hilar anatomy; (IV) going 1 cm deeper; (V) a step-by-step stapling technique. Depending on the anatomy, segmentectomies can be classified into linear (lingular, S6, S2), V- or U-shaped (right S1, left S3, S2b + S3a), and three-dimensional (S7, S8, S9, S10) segmentectomies. Three-dimensional segmentectomies in particular are challenging because of the complexity of the stapling techniques. This review focuses on how VAL-MAP can be utilized in segmentectomy, and how this technique can assist the stapling process in even the most challenging cases. PMID:28066675
Pound, Michael P.; French, Andrew P.; Murchie, Erik H.; Pridmore, Tony P.
2014-01-01
Increased adoption of the systems approach to biological research has focused attention on the use of quantitative models of biological objects. This includes a need for realistic three-dimensional (3D) representations of plant shoots for quantification and modeling. Previous limitations in single-view or multiple-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level-set method, optimizing the model based on image information, curvature constraints, and the position of neighboring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed and, as such, is applicable to a wide variety of plant species and topologies and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on data sets of wheat (Triticum aestivum) and rice (Oryza sativa) plants as well as a unique virtual data set that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modeling applications in a format that can be imported in the majority of 3D graphics and software packages. PMID:25332504
A parallel algorithm for viewshed analysis in three-dimensional Digital Earth
NASA Astrophysics Data System (ADS)
Feng, Wang; Gang, Wang; Deji, Pan; Yuan, Liu; Liuzhong, Yang; Hongbo, Wang
2015-02-01
Viewshed analysis, often supported by geographic information systems, is widely used in three-dimensional (3D) Digital Earth systems. Many of the analyses involve the siting of features and real-time decision-making. Viewshed analysis is usually performed at a large scale, which poses substantial computational challenges as geographic datasets continue to become increasingly large. Previous research on viewshed analysis has generally been limited to a single data structure (i.e., the DEM), which cannot be used to analyze viewsheds in complicated scenes. In this paper, a real-time algorithm for viewshed analysis in Digital Earth is presented using the parallel computing power of graphics processing units (GPUs). An occlusion for each geometric entity in the neighborhood of the viewpoint is generated according to the line-of-sight. The region within the occlusion is marked in a stencil buffer within the programmable 3D visualization pipeline, and the marked region is concurrently drawn in red. In contrast to traditional algorithms based on line-of-sight, the new algorithm, in which the viewshed calculation is integrated with the rendering module, is more efficient and stable. The proposed method of viewshed generation is closer to the reality of the virtual geographic environment, and no DEM interpolation, a significant computational burden, is needed. The algorithm was implemented in a 3D Digital Earth system (GeoBeans3D) with the DirectX application programming interface (API) and has been widely used in a range of applications.
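For contrast with the GPU stencil-buffer approach above, the traditional line-of-sight viewshed it replaces can be sketched in a few lines: a terrain cell is visible when its sight angle from the viewer exceeds that of every closer cell. The sketch below works on a 1D terrain profile for brevity (a DEM viewshed runs this test along every ray from the viewpoint); the profile values and function name are invented.

```python
# Toy sketch of the traditional line-of-sight viewshed test: a cell on a
# terrain profile is visible if its sight angle from the viewer exceeds the
# maximum angle of all closer cells. (1D profile for brevity; a full DEM
# viewshed applies this along every ray from the viewpoint.)

def visible_cells(heights, viewer_index, eye_height=1.0):
    """Return indices of profile cells visible from the viewer."""
    eye = heights[viewer_index] + eye_height
    visible = [viewer_index]
    max_slope = float("-inf")
    for i in range(viewer_index + 1, len(heights)):
        # Tangent of the sight angle to cell i.
        slope = (heights[i] - eye) / (i - viewer_index)
        if slope > max_slope:        # rises above all closer terrain
            visible.append(i)
            max_slope = slope
    return visible

# Viewer at index 0 (eye at height 3.0); the ridge at index 3 hides
# everything behind it, including the lower-angle summit at index 5.
profile = [2.0, 1.0, 1.0, 5.0, 1.0, 6.0]
print(visible_cells(profile, 0))  # -> [0, 1, 2, 3]
```

Running this per-ray test over a large DEM is exactly the workload, serial, interpolation-heavy, and hard to bound in time, that motivates moving the computation into the rendering pipeline as the paper proposes.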
Three Primary School Students' Cognition about 3D Rotation in a Virtual Reality Learning Environment
ERIC Educational Resources Information Center
Yeh, Andy
2010-01-01
This paper reports on three primary school students' explorations of 3D rotation in a virtual reality learning environment (VRLE) named VRMath. When asked to investigate if you would face the same direction when you turn right 45 degrees first then roll up 45 degrees, or when you roll up 45 degrees first then turn right 45 degrees, the students…
3D-ANTLERS: Virtual Reconstruction and Three-Dimensional Measurement
NASA Astrophysics Data System (ADS)
Barba, S.; Fiorillo, F.; De Feo, E.
2013-02-01
The main objective of this paper is to establish a procedural method for measuring and cataloguing antlers through the use of a laser scanner and the 3D reconstruction of complex models. A deer's antlers were used as a test object and subjected to capture and measurement. For this purpose, multiple data-source techniques were studied and compared (also considering low-cost sensors), estimating the accuracy and errors of each in order to demonstrate the validity of the process. A further development is the comparison of these results with applications of digital photogrammetry, also considering cloud-computing software. The study began with an introduction to sensors, addressing the underlying characteristics of the available technology and the scope and limits of these applications. We focused particularly on "structured light", as the acquisition was completed with two three-dimensional scanners: the DAVID and the ARTEC MH. The first is a low-cost sensor, a basic webcam paired with a red linear laser pointer, which together acquire three-dimensional strips. The second is a handheld scanner; in this case, too, we explain how to represent a 3D model, with a pipeline that exports data from the "proprietary" software to a "reverse engineering" package. The steps common to the two approaches, performed in WRAP format, are: point sampling, manual and global registration, normal repair, surface editing, and texture projection. After a first, common data-processing pass using the software supplied with each device, the proto-models thus obtained were treated in Geomagic Studio, chosen to homogenize and standardize the data in order to make the comparison more objective. It is easy to observe that the editing of the digital mock-up obtained with the DAVID, which had not yet been upgraded to the 3.5 release at the time of this study, differs substantially.
The ARTEC digital mock-up, for example, allows selection of individual frames, already polygonal and geo-referenced at the time of capture; however, it does not support automated texturization, unlike the low-cost environment, which produces good graphic definition. Once the final 3D models were obtained, we proceeded to a geometric and graphic comparison of the results. In order to provide an accuracy requirement and an assessment for the 3D reconstruction, we took into account the following benchmarks: cost, captured points, noise (local and global), shadows and holes, operability, degree of definition, quality, and accuracy. Following these empirical studies of the virtual reconstructions, a 3D documentation procedure was codified, endorsing the use of terrestrial sensors for the documentation of antlers. The results were then compared with the standards set by the current provisions (see the "Manual de medición" of the Government of Andalusia, Spain); to date, in fact, identification is based on data such as length, volume, colour, texture, openness, tips, and structure. Such data, currently captured only with traditional instruments such as a tape measure, would be well served by a process of virtual reconstruction and cataloguing.
2013-01-01
Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. 
Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
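The shape-based idea above can be sketched by representing each 2D IVUS contour as radial samples and fitting a natural cubic spline along the pullback axis, rather than blending pixel intensities between slices. This is an illustrative reconstruction under assumed representations (radii at fixed angles, illustrative function names), not the authors' implementation:

```python
import numpy as np

def natural_cubic_spline(x, y, xq):
    """Evaluate the natural cubic spline through (x, y) at points xq.

    Solves for the knot second derivatives M with M[0] = M[-1] = 0
    (the "natural" boundary condition), then evaluates the piecewise
    cubic on each interval.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, h = len(x), np.diff(x)
    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        b[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, b)
    xq = np.atleast_1d(np.asarray(xq, float))
    out = np.empty_like(xq)
    for k, xv in enumerate(xq):
        i = int(np.clip(np.searchsorted(x, xv) - 1, 0, n - 2))
        t = xv - x[i]
        out[k] = (y[i]
                  + t * ((y[i + 1] - y[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6)
                  + t ** 2 * M[i] / 2
                  + t ** 3 * (M[i + 1] - M[i]) / (6 * h[i]))
    return out

def interpolate_slices(z_slices, contours, z_new):
    """Shape-based interpolation: spline each radial contour sample
    along the pullback axis z instead of blending pixel intensities.

    contours has shape (n_slices, n_angles): radius per fixed angle.
    """
    contours = np.asarray(contours, dtype=float)
    return np.stack([natural_cubic_spline(z_slices, contours[:, a], z_new)
                     for a in range(contours.shape[1])], axis=-1)
```

Because the spline follows the contour geometry, an intermediary slice inherits a smooth vessel shape rather than the ghosted edges typical of pixel-wise blending.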
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
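One common use of such a dataset is deriving ground-truth disparity from the depth maps. A minimal sketch for the textbook parallel-axis simplification follows; the actual GENUA PESTO rig is convergent with cyclotorsion, so this relation is only an approximation, and the parameter values below are arbitrary:

```python
import numpy as np

def depth_to_disparity(depth, focal_px, baseline):
    """Ground-truth horizontal disparity for a parallel-axis stereo rig.

    d = f * B / Z, with focal length f in pixels, baseline B and
    depth Z in the same metric unit; the result is in pixels.
    """
    depth = np.asarray(depth, dtype=float)
    return focal_px * baseline / depth
```

With a 500-pixel focal length and a 65 mm baseline, a point at 1 m depth maps to a 32.5-pixel disparity, and a point twice as far to half that.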
Schleich, Jean-Marc; Dillenseger, Jean-Louis; Houyel, Lucile; Almange, Claude; Anderson, Robert H
2009-01-01
Learning embryology remains difficult, since it requires understanding of many complex phenomena. The temporal evolution of developmental events has classically been illustrated using cartoons, which create difficulty in linking spatial and temporal aspects, such correlation being the keystone of descriptive embryology. We synthesized the bibliographic data from recent studies of atrial septal development. On the basis of this synthesis, consensus on the stages of atrial septation as seen in the human heart has been reached by a group of experts in cardiac embryology and pediatric cardiology. This has permitted the preparation of three-dimensional (3D) computer graphic objects for the anatomical components involved in the different stages of normal human atrial septation. We have provided a virtual guide to the process of normal atrial septation, the animation providing an appreciation of the temporal and morphologic events necessary to separate the systemic and pulmonary venous returns. We have shown that our animations of normal human atrial septation significantly improve the teaching of the complex developmental processes involved and provide a new dynamic for learning.
Accuracy and Repeatability of Trajectory Rod Measurement Using Laser Scanners.
Liscio, Eugene; Guryn, Helen; Stoewner, Daniella
2017-12-22
Three-dimensional (3D) technologies contribute greatly to bullet trajectory analysis and shooting reconstruction. There are few papers which address the errors associated with utilizing laser scanning for bullet trajectory documentation. This study examined the accuracy and precision of laser scanning for documenting trajectory rods in drywall for angles between 25° and 90°. The inherent error range of 0.02°-2.10° was noted while the overall error for laser scanning ranged between 0.04° and 1.98°. The inter- and intraobserver errors for trajectory rod placement and virtual trajectory marking showed that the range of variation for rod placement was between 0.1°-1° in drywall and 0.05°-0.5° in plywood. Virtual trajectory marking accuracy tests showed that 75% of data values were below 0.91° and 0.61° on azimuth and vertical angles, respectively. In conclusion, many contributing factors affect bullet trajectory analysis, and the use of 3D technologies can aid in reduction of errors associated with documentation. © 2017 American Academy of Forensic Sciences.
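The azimuth and vertical angles compared in the study can be recovered from two scanned 3D points along a rod's axis. A minimal sketch, assuming an x-east, y-north, z-up coordinate frame (a convention chosen here, not one stated in the abstract):

```python
import numpy as np

def trajectory_angles(p_entry, p_exit):
    """Azimuth and vertical angles (degrees) of a trajectory rod from
    two 3D points along its axis, in an x-east, y-north, z-up frame.

    Azimuth is measured clockwise from north in [0, 360); the vertical
    angle is elevation above the horizontal plane.
    """
    v = np.asarray(p_exit, dtype=float) - np.asarray(p_entry, dtype=float)
    azimuth = np.degrees(np.arctan2(v[0], v[1])) % 360.0
    vertical = np.degrees(np.arctan2(v[2], np.hypot(v[0], v[1])))
    return azimuth, vertical
```

A rod running northeast and level, for example, gives a 45° azimuth and a 0° vertical angle; comparing the angles of a scanned rod against these reference values is one way to express the sub-degree errors reported above.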