Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.
Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F
2013-09-01
The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language (VRML) file format. Image registration and fusion were performed in Mimics. A genetic algorithm was used to align the images precisely with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and the bone structures. Image registration errors in the virtual face were mainly located in the bilateral cheeks and eyeballs, where they exceeded 1.5 mm. However, the fusion of the whole CT and 3dMD point cloud sets was acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to serve as an accurate, realistic, and widely applicable tool of great benefit to virtual face modeling.
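As an illustration of the kind of alignment step described above, the following is a minimal Python sketch of genetic-algorithm point-cloud registration; the parameter ranges, population settings, and fitness function are assumptions for illustration, not the settings used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_transform(params, pts):
    # Apply a rigid transform (3 Euler angles + 3 translations) to an (N, 3) cloud
    rx, ry, rz, tx, ty, tz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

def mean_error(params, moving, fixed_tree):
    # Fitness: mean closest-point distance between the transformed cloud and the fixed cloud
    d, _ = fixed_tree.query(rigid_transform(params, moving))
    return d.mean()

def ga_align(moving, fixed, pop=60, gens=80, seed=0):
    # Assumed search ranges and GA settings, purely illustrative
    rng = np.random.default_rng(seed)
    tree = cKDTree(fixed)
    span = np.array([0.2, 0.2, 0.2, 10.0, 10.0, 10.0])  # radians / mm (assumed)
    pop_params = rng.uniform(-span, span, size=(pop, 6))
    for _ in range(gens):
        fit = np.array([mean_error(p, moving, tree) for p in pop_params])
        parents = pop_params[np.argsort(fit)[: pop // 2]]          # selection
        children = parents + rng.normal(0, 0.05, parents.shape) * span  # mutation
        pop_params = np.vstack([parents, children])
    fit = np.array([mean_error(p, moving, tree) for p in pop_params])
    return pop_params[np.argmin(fit)], fit.min()
```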
Virtual 3d City Modeling: Techniques and Applications
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2013-08-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing graphic representations of buildings and other objects in 2.5 or 3D. Generally, three main geomatics approaches are used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third method, researchers use terrestrial images in close range photogrammetry with DSM and texture mapping. We start this paper with an introduction of the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study, we give the conclusions of this research, together with a short justification, analysis, and present trends in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3D city models are very useful for various kinds of applications, such as planning in navigation, tourism, disaster management, transportation, municipal administration, urban environmental management, and the real-estate industry. The construction of virtual 3D city models has therefore been a most interesting research topic in recent years.
A specification of 3D manipulation in virtual environments
NASA Technical Reports Server (NTRS)
Su, S. Augustine; Furuta, Richard
1994-01-01
In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.
3D Virtual Reality Check: Learner Engagement and Constructivist Theory
ERIC Educational Resources Information Center
Bair, Richard A.
2013-01-01
The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement of 3D tools and specify learning gains that educators and the institutions, which are funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…
Zhang, Hui-Rong; Yin, Le-Feng; Liu, Yan-Li; Yan, Li-Yi; Wang, Ning; Liu, Gang; An, Xiao-Li; Liu, Bin
2018-04-01
The aim of this study is to build a digital dental model with cone beam computed tomography (CBCT), to fabricate a physical model via 3D printing, and to determine the accuracy of the 3D-printed dental model by comparing it with a traditional dental cast. CBCT scans of orthodontic patients were obtained to build digital dental models using Mimics 10.01 and Geomagic Studio software. The models were fabricated via the fused deposition modeling (FDM) technique. The 3D-printed models were compared with the traditional cast models using a Vernier caliper. The measurements used for comparison included the width of each tooth, the length and width of the maxillary and mandibular arches, and the length of the posterior dental crest. The 3D-printed models showed high accuracy compared with the traditional cast models: the paired t-test of all data showed no statistically significant difference between the two groups (P>0.05). Digital dental models built with CBCT enable digital storage of patients' dental condition. The dental model fabricated via 3D printing avoids a traditional impression and simplifies the clinical examination process. The 3D-printed dental models produced via FDM show a high degree of accuracy and are therefore appropriate for clinical practice.
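A minimal sketch of the statistical comparison described above (a paired t-test on corresponding measurements), using SciPy; the measurement values are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical tooth-width measurements (mm) on matched teeth, not study data
printed = np.array([8.4, 7.1, 9.8, 6.9, 7.6, 10.2])
cast    = np.array([8.5, 7.0, 9.7, 7.0, 7.5, 10.3])

t, p = stats.ttest_rel(printed, cast)          # paired t-test
print(f"paired t = {t:.3f}, p = {p:.3f}")      # p > 0.05 -> no significant difference
```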
ERIC Educational Resources Information Center
Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.
2012-01-01
We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings and houses, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, and military use. However, most technologies provide the 3D display in front of screens that are parallel to the walls, which decreases the sense of immersion. To obtain a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system, and virtual cameras can simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the viewer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near clip plane setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. To validate the results, we use Direct3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
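The following sketch shows a generic off-axis (offset) perspective projection matrix for an eye that is not centred over the display plane, which is the basic ingredient of such virtual-camera setups; the screen-plane convention and parameter names are assumptions, not the authors' exact formulation.

```python
import numpy as np

def off_axis_projection(eye, screen_min, screen_max, near, far):
    # Assumes a display rectangle lying in the z = 0 plane and an eye at (ex, ey, ez)
    ex, ey, ez = eye                      # ez = distance from the eye to the screen plane
    xmin, ymin = screen_min
    xmax, ymax = screen_max
    # Frustum bounds on the near clip plane, shifted by the eye offset
    l = (xmin - ex) * near / ez
    r = (xmax - ex) * near / ez
    b = (ymin - ey) * near / ez
    t = (ymax - ey) * near / ez
    # Standard OpenGL-style asymmetric frustum matrix
    return np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

P = off_axis_projection(eye=(0.3, 1.6, 2.0), screen_min=(-1, 0),
                        screen_max=(1, 2), near=0.1, far=100.0)
```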
Demonstration of three gorges archaeological relics based on 3D-visualization technology
NASA Astrophysics Data System (ADS)
Xu, Wenli
2015-12-01
This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D visualization technology, which includes large-scale landscape reconstruction, a virtual studio, virtual panoramic roaming, etc., is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling, and integration. Firstly, abundant archaeological information is classified according to its historical and geographical context. Secondly, a 3D model library is built up using digital image processing and 3D modeling technology. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.
Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries.
Thali, Michael J; Braun, Marcel; Dirnhofer, Richard
2003-11-26
The photographic process reduces a three-dimensional (3D) wound to a two-dimensional level. If a high-resolution 3D dataset of an object is needed, the object must be scanned three-dimensionally. Non-contact optical 3D surface digitizing scanners can be used as a powerful tool for analyzing wounds and injury-causing instruments in trauma cases. The 3D documentation of a skin wound and a bone injury using the optical scanner Advanced TOpometric Sensor (ATOS II, GOM International, Switzerland) is demonstrated using two illustrative cases. Using this 3D optical digitizing method, the wounds (the virtual 3D computer models of the skin and bone injuries) and the virtual 3D model of the injury-causing tool are documented graphically in 3D, in real-life size and shape, and can be rotated in a CAD program on the computer screen. In addition, the virtual 3D models of the bone injuries and the tool can be compared against one another in a 3D CAD program in virtual space, to see whether there are matching areas. Further steps in forensic medicine will be full 3D surface documentation of the human body and all forensically relevant injuries using optical 3D scanners.
Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin
2015-01-01
Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education, 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines the steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, and nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to the imported images of the original sections, the software generates, and allows for visualization of, images of virtual sections in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education of microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. © 2015 American Association of Anatomists.
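A minimal sketch of a "virtual microtome" operation, resampling an arbitrary oblique plane from an image stack with SciPy; the volume, spacing, and plane parameters are placeholders, and this is not the software used in the study.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, origin, u_dir, v_dir, size=(256, 256), step=1.0):
    """Sample a size[0] x size[1] oblique slice through `volume` (z, y, x order).
    `origin` is the slice centre; u_dir / v_dir are orthogonal in-plane axes."""
    u = np.asarray(u_dir, float); u /= np.linalg.norm(u)
    v = np.asarray(v_dir, float); v /= np.linalg.norm(v)
    i = (np.arange(size[0]) - size[0] / 2) * step
    j = (np.arange(size[1]) - size[1] / 2) * step
    ii, jj = np.meshgrid(i, j, indexing="ij")
    # Voxel coordinates of every slice pixel, shape (3, H, W)
    coords = (np.asarray(origin, float)[:, None, None]
              + u[:, None, None] * ii + v[:, None, None] * jj)
    return map_coordinates(volume, coords, order=1, mode="nearest")

volume = np.random.rand(100, 512, 512)                 # placeholder image stack
slice_img = oblique_slice(volume, origin=(50, 256, 256),
                          u_dir=(0.3, 1, 0), v_dir=(0, 0, 1))
```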
A new approach towards image based virtual 3D city modeling by using close range photogrammetry
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2014-05-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural grammar-based modeling, and close range photogrammetry-based modeling. The literature shows that, to date, no complete solution is available to create a complete 3D city model from images, and these image-based methods also have limitations. This paper presents a new approach to image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and most suitable frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging other pieces of the larger area, and scaling and alignment of the 3D model were performed. After texturing and rendering, a final photo-realistic textured 3D model was created and transferred into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost-effective and less laborious, and the accuracy of the model is good. The study area for this research work is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries and high-resolution satellite images are costly; the proposed method, in contrast, is based only on simple video recording of the area and is thus suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for various kinds of applications, such as planning in navigation, tourism, disaster management, transportation, municipal administration, urban and environmental management, and the real-estate industry. This study therefore provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close range photogrammetry.
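As a sketch of the early steps of such a video-based pipeline (frame sampling and feature matching), the following uses OpenCV; the file name, sampling rate, and detector settings are assumptions for illustration, not the authors' configuration.

```python
import cv2

def sample_frames(video_path, every_n=30):
    # Keep one grayscale frame out of every `every_n` frames of the recording
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        idx += 1
    cap.release()
    return frames

def match_pair(img1, img2):
    # ORB features + brute-force Hamming matching between two frames
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    return k1, k2, matches

frames = sample_frames("campus_walk.mp4")     # hypothetical input video
if len(frames) >= 2:
    k1, k2, matches = match_pair(frames[0], frames[1])
    print(f"{len(matches)} tentative correspondences")
```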
3D Visualization for Virtual Museum Development
NASA Astrophysics Data System (ADS)
Skamantzari, M.; Georgopoulos, A.
2016-06-01
The interest in the development of virtual museums is nowadays rising rapidly. During the last decades there have been numerous efforts concerning the 3D digitization of cultural heritage and the development of virtual museums, digital libraries and serious games. The realistic result has always been the main concern and a real challenge when it comes to 3D modelling of monuments, artifacts and especially sculptures. This paper implements, investigates and evaluates the results of the photogrammetric methods and 3D surveys that were used for the development of a virtual museum. Moreover, the decisions, the actions, the methodology and the main elements that this kind of application should include and take into consideration are described and analysed. It is believed that the outcomes of this application will be useful to researchers who are planning to develop and further improve the attempts made on virtual museums and mass production of 3D models.
Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups
ERIC Educational Resources Information Center
Casas, Lluís; Estop, Eugènia
2015-01-01
Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…
Abdoli-Eramaki, Mohammad; Stevenson, Joan M; Agnew, Michael J; Kamalzadeh, Amin
2009-04-01
The purpose of this study was to validate a 3D dynamic virtual model for lifting tasks against a validated link segment model (LSM). A face validation study was conducted by collecting x, y, z coordinate data and using them in both the virtual and LSM models. An upper body virtual model was needed to calculate the 3D torques about human joints for use in simulated lifting styles and to estimate the effect of external mechanical devices on the human body. The model first had to be validated to ensure that it provided accurate estimates of 3D moments in comparison with a previously validated LSM. Three synchronised Fastrak units with nine sensors were used to record data from one male subject who completed dynamic box lifting under 27 different load conditions (3 box weights, 3 lifting techniques and 3 rotations). The external moments about the three axes of L4/L5 were compared for both models. A pressure switch on the box was used to denote the start and end of the lift. Excellent agreement was found between the two models for dynamic lifting tasks, especially for larger moments in flexion and extension. This virtual model was considered valid for use in a complete simulation of the upper body skeletal system. This biomechanical virtual model of the musculoskeletal system can give researchers and practitioners a better tool to study the causes of low back pain and the effect of intervention strategies, by permitting the researcher to see and control a virtual subject's motions.
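A minimal sketch of how agreement between the two models' L4/L5 moment curves could be quantified (RMSE and Pearson correlation); the moment curves below are synthetic placeholders, not the study's data.

```python
import numpy as np

def agreement(m_virtual, m_lsm):
    # RMSE and Pearson correlation between two moment time series
    err = m_virtual - m_lsm
    rmse = np.sqrt(np.mean(err ** 2))
    r = np.corrcoef(m_virtual, m_lsm)[0, 1]
    return rmse, r

t = np.linspace(0, 2, 200)                        # one lift (s), placeholder
m_lsm = 120 * np.sin(np.pi * t / 2)               # flexion-extension moment (N·m), synthetic
m_virtual = m_lsm + np.random.normal(0, 3, t.size)
rmse, r = agreement(m_virtual, m_lsm)
print(f"RMSE = {rmse:.1f} N·m, r = {r:.3f}")
```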
CaveCAD: a tool for architectural design in immersive virtual environments
NASA Astrophysics Data System (ADS)
Schulze, Jürgen P.; Hughes, Cathleen E.; Zhang, Lelin; Edelstein, Eve; Macagno, Eduardo
2014-02-01
Existing 3D modeling tools were designed to run on desktop computers with a monitor, keyboard and mouse. To make 3D modeling possible with mouse and keyboard, many 3D interactions, such as point placement or translations of geometry, had to be mapped to the 2D parameter space of the mouse, possibly supported by mouse buttons or keyboard keys. We hypothesize that, had the designers of these existing systems been able to assume immersive virtual reality systems as their target platforms, they would have designed 3D interactions much more intuitively. In collaboration with professional architects, we created a simple but complete 3D modeling tool for virtual environments from the ground up, using direct 3D interaction wherever possible and adequate. In this publication, we present our approaches to interactions for typical 3D modeling functions, such as geometry creation, modification of existing geometry, and assignment of surface materials. We also discuss preliminary user experiences with this system.
Algorithms for extraction of structural attitudes from 3D outcrop models
NASA Astrophysics Data System (ADS)
Duelis Viana, Camila; Endlein, Arthur; Ademar da Cruz Campanha, Ginaldo; Henrique Grohmann, Carlos
2016-05-01
The acquisition of geological attitudes on rock cuts using traditional field compass survey can be a time consuming, dangerous, or even impossible task depending on the conditions and location of outcrops. The importance of this type of data in rock-mass classifications and structural geology has led to the development of new techniques, in which photogrammetric 3D digital models have seen increasing use. In this paper we present two algorithms for the extraction of attitudes of geological discontinuities from virtual outcrop models: ply2atti and scanline, implemented in the Python programming language. The ply2atti algorithm allows for the virtual sampling of planar discontinuities appearing on the 3D model as individual exposed surfaces, while the scanline algorithm allows the sampling of discontinuities (surfaces and traces) along a virtual scanline. Application to digital models of a simplified test setup and a rock cut demonstrated a good correlation between surveys undertaken using traditional field compass readings and virtual sampling on 3D digital models.
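A minimal Python sketch of the core operation behind such algorithms: fitting a plane to points sampled on one exposed discontinuity surface and converting its normal to dip direction and dip angle; the axis convention (x = east, y = north, z = up) and the toy points are assumptions, and this is not the published ply2atti code.

```python
import numpy as np

def plane_attitude(points):
    """points: (N, 3) array of x (east), y (north), z (up) coordinates on one surface."""
    centered = points - points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                      # force the normal to point upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))          # angle from horizontal
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0           # azimuth of steepest descent
    return dip_dir, dip

pts = np.array([[0, 0, 0], [1, 0, -0.5], [0, 1, 0.0], [1, 1, -0.5]], float)
print(plane_attitude(pts))   # toy plane dipping ~26.6 degrees toward the east (090)
```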
NASA Astrophysics Data System (ADS)
Junk, S.
2016-08-01
Today the methods of numerical simulation of sheet metal forming offer a great diversity of possibilities for optimization in product development and in process design. However, the results from simulation are only available as virtual models. Because no forming tools are available during the early stages of product development, physical models that could represent the virtual results are lacking. Physical 3D models can be created using 3D printing to illustrate the simulation results and provide a better understanding of them. In this way, the results from the simulation can be made more "comprehensible" within a development team. This paper presents the possibilities of 3D colour printing with particular consideration of the requirements for representing sheet metal forming simulations. Using concrete examples of sheet metal forming, the manufacturing of 3D colour models is expounded upon on the basis of simulation results.
The Virtual Museum of Minerals and Molecules: Molecular Visualization in a Virtual Hands-On Museum
ERIC Educational Resources Information Center
Barak, Phillip; Nater, Edward A.
2005-01-01
The Virtual Museum of Minerals and Molecules (VMMM) is a web-based resource presenting interactive, 3-D, research-grade molecular models of more than 150 minerals and molecules of interest to chemical, earth, plant, and environmental sciences. User interactivity with the 3-D display allows models to be rotated, zoomed, and specific regions of…
3D virtual environment of Taman Mini Indonesia Indah in a web
NASA Astrophysics Data System (ADS)
Wardijono, B. A.; Wardhani, I. P.; Chandra, Y. I.; Pamungkas, B. U. G.
2018-05-01
Taman Mini Indonesia Indah (TMII) is the largest culture-based recreational park in Indonesia. The park covers 250 acres and contains traditional houses from the provinces of Indonesia. The official website of TMII presents the traditional houses, but the information available to the public is limited. To provide more detailed information about TMII to the public, this research aims to create and develop virtual traditional houses as 3D graphic models and present them on a website. Virtual Reality (VR) technology was used to display the visualization of TMII and the surrounding environment. This research used Blender software to create the 3D models and Unity3D software to make virtual reality models that can be shown on the web. The research has successfully created 33 virtual traditional houses of Indonesian provinces. The textures of the traditional houses were taken from the originals to make the models realistic. The result of this research is the TMII website, including virtual traditional houses that can be displayed through a web browser. The website consists of virtual environment scenes, and internet users can walk through and navigate inside the scenes.
Three-Dimensional Sensor Common Operating Picture (3-D Sensor COP)
2017-01-01
created. Additionally, a 3-D model of the sensor itself can be created. Using these 3-D models, along with emerging virtual and augmented reality tools... [truncated abstract; report sections: Introduction, The 3-D Sensor COP, Virtual Sensor Placement, Conclusions, References]
Application of 3d Model of Cultural Relics in Virtual Restoration
NASA Astrophysics Data System (ADS)
Zhao, S.; Hou, M.; Hu, Y.; Zhao, Q.
2018-04-01
In the traditional cultural relics splicing process, in order to identify the correct spatial location of the cultural relics debris, experts need to manually splice the existing debris. The repeated contact between debris can easily cause secondary damage to the cultural relics. In this paper, the application process of 3D model of cultural relic in virtual restoration is put forward, and the relevant processes and ideas are verified with the example of Terracotta Warriors data. Through the combination of traditional cultural relics restoration methods and computer virtual reality technology, virtual restoration of high-precision 3D models of cultural relics can provide a scientific reference for virtual restoration, avoiding the secondary damage to the cultural relics caused by improper restoration. The efficiency and safety of the preservation and restoration of cultural relics have been improved.
A Virtual Campus Based on Human Factor Engineering
ERIC Educational Resources Information Center
Yang, Yuting; Kang, Houliang
2014-01-01
Three Dimensional or 3D virtual reality has become increasingly popular in many areas, especially in building a digital campus. This paper introduces a virtual campus, which is based on a 3D model of The Tourism and Culture College of Yunnan University (TCYU). Production of the virtual campus was aided by Human Factor and Ergonomics (HF&E), an…
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly engaging three-dimensional television programmes, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
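A minimal sketch of SSD-based block matching for disparity estimation, the similarity measure referred to above; a colour SSD would simply sum this cost over the RGB channels. The window size and disparity range are assumptions, and this is not the authors' implementation.

```python
import numpy as np

def ssd_disparity(left, right, max_disp=32, win=5):
    """Brute-force block matching: for each pixel in `left`, find the horizontal
    shift into `right` that minimizes the sum of squared differences (SSD)."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(float)
                cost = np.sum((patch - cand) ** 2)   # SSD cost for shift d
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```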
Research on 3D virtual campus scene modeling based on 3ds Max and VRML
NASA Astrophysics Data System (ADS)
Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue
2015-12-01
With the rapid development of modern technology, digital information management and virtual reality simulation technology have become research hotspots. A virtual campus 3D model can not only represent real-world objects naturally, realistically and vividly, but can also expand the campus in the dimensions of time and space, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land use, etc. The dynamic interactive functions are then realized by programming the object models from 3ds Max with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on optimization strategies for real-time processing in the scene design process. The work ensures the image quality of the texture maps while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.
Virtually fabricated guide for placement of the C-tube miniplate.
Paek, Janghyun; Jeong, Do-Min; Kim, Yong; Kim, Seong-Hun; Chung, Kyu-Rhim; Nelson, Gerald
2014-05-01
This paper introduces a virtually planned and stereolithographically fabricated guiding system that will allow the clinician to plan carefully for the best location of the device and to achieve an accurate position without complications. The scanned data from preoperative dental casts were edited to obtain preoperative 3-dimensional (3D) virtual models of the dentition. After the 3D virtual models were repositioned, the 3D virtual surgical guide was fabricated. A surgical guide was created onscreen, and then these virtual guides were materialized into real ones using the stereolithographic technique. Whereas the previously described guide required laboratory work to be performed by the orthodontist, our technique is more convenient because the laboratory work is done remotely by computer-aided design/computer-aided manufacturing technology. Because the miniplate is firmly held in place as the patient holds his or her mandibular teeth against the occlusal pad of the surgical guide, there is no risk that the miniscrews can slide on the bone surface during placement. The software program (2.5-dimensional software) in this study combines 2-dimensional cephalograms with 3D virtual dental models. This software is an effective and efficient alternative to 3D software when 3D computed tomography data are not available. To confidently and safely place a miniplate with screw fixation, a simple customized guide for an orthodontic miniplate was introduced. The use of a custom-made, rigid guide when placing miniplates will minimize complications such as vertical mislocation or slippage of the miniplate during placement. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
A 3D visualization and simulation of the individual human jaw.
Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo
2003-01-01
A new biomechanical three-dimensional (3D) model of the human mandible based on a computer-generated virtual model is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation offers movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain the biomechanics, where the muscular forces acting through the occlusal and condylar surfaces are in a state of 3D equilibrium. In the model, all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw, together with the force level necessary for chewing, in a kind of mandibular balance that prevents dislocation and loading of non-articular tissues. The work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA"), packaged as a single low-cost object that is easy to operate.
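As an illustration of resolving forces into components and enforcing a 3D static equilibrium, the following sketch solves a force balance for assumed muscle directions and an assumed bite force; the direction vectors and load values are invented for illustration and are not values from the model.

```python
import numpy as np

# Assumed unit direction vectors (x, y, z) of three muscle groups acting on the mandible
directions = np.array([
    [0.10, 0.25, 0.96],    # masseter (assumed)
    [0.05, -0.20, 0.98],   # temporalis (assumed)
    [-0.30, 0.40, 0.87],   # medial pterygoid (assumed)
]).T                       # columns = muscles, rows = x, y, z components

bite_force = np.array([0.0, 20.0, -200.0])   # assumed external load at the occlusion (N)

# Solve directions @ m = -bite_force for the muscle force magnitudes m (least squares)
m, *_ = np.linalg.lstsq(directions, -bite_force, rcond=None)
residual = directions @ m + bite_force        # should be ~0 if equilibrium is satisfied
print("muscle forces (N):", np.round(m, 1), " residual:", np.round(residual, 2))
```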
de Kleijn, Bertram J; Kraeima, Joep; Wachters, Jasper E; van der Laan, Bernard F A M; Wedman, Jan; Witjes, M J H; Halmos, Gyorgy B
2018-02-01
We aimed to investigate the potential of 3D virtual planning of tracheostomy tube placement and 3D cannula design to prevent tracheostomy complications due to inadequate cannula position. 3D models of commercially available cannulas were positioned in 3D models of the airway. In study (1), a cohort that underwent tracheostomy between 2013 and 2015 was selected (n = 26). The cannula was virtually placed in the airway on the pre-operative CT scan and its position was compared to the cannula position on post-operative CT scans. In study (2), a cohort with neuromuscular disease (n = 14) was analyzed. Virtual cannula placement was performed in CT scans to test whether problems could be anticipated. Finally (3), for a patient with Duchenne muscular dystrophy and complications from a conventional tracheostomy cannula, a patient-specific cannula was 3D designed, fabricated, and placed. (1) The 3D planned and post-operative tracheostomy positions differed significantly. (2) Three groups of patients were identified: (A) normal anatomy; (B) abnormal anatomy, commercially available cannula fits; and (C) abnormal anatomy, custom-made cannula may be necessary. (3) The position of the custom-designed cannula was optimal and the trachea healed. Virtual planning of the tracheostomy did not correlate with the actual cannula position. Identifying patients with abnormal airway anatomy, in whom a commercially available cannula cannot be optimally positioned, is advantageous. Patient-specific cannula design based on 3D virtualization of the airway was beneficial in a patient with abnormal airway anatomy.
Patel, Preeti; Singh, Avineesh; Patel, Vijay K; Jain, Deepak K; Veerasamy, Ravichandran; Rajak, Harish
2016-01-01
Histone deacetylase (HDAC) inhibitors can reactivate gene expression and inhibit the growth and survival of cancer cells. The aim was to identify the important pharmacophoric features and correlate 3D chemical structure with biological activity using 3D-QSAR and pharmacophore modeling studies. The pharmacophore hypotheses were developed using the e-pharmacophore script and the Phase module. A pharmacophore hypothesis represents the 3D arrangement of molecular features necessary for activity. A series of 55 compounds with well-assigned HDAC inhibitory activity was used for 3D-QSAR model development. The best 3D-QSAR model, a five-factor partial least squares (PLS) model with good statistics and predictive ability, achieved Q² = 0.7293, R² = 0.9811, a cross-validated coefficient r²cv = 0.9807 and R²pred = 0.7147 with a low standard deviation (0.0952). Additionally, the selected pharmacophore model DDRRR.419 was used as a 3D query for virtual screening against the ZINC database. In the virtual screening workflow, docking studies (HTVS, SP and XP) were carried out against multiple receptors (PDB IDs: 1T69, 1T64, 4LXZ, 4LY1, 3MAX, 2VQQ, 3C10, 1W22). Finally, six compounds were obtained based on high scoring (dock scores of -11.2278 to -10.2222 kcal/mol) and diverse structures. The structure-activity correlation was established using virtual screening, docking, energetic-based pharmacophore modelling, pharmacophore and atom-based 3D QSAR models and their validation. The outcomes of these studies could be further employed for the design of novel HDAC inhibitors with anticancer activity.
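A minimal sketch of fitting a PLS QSAR model and estimating Q² by leave-one-out cross-validation with scikit-learn, the type of statistics reported above; the descriptor matrix and activities are random placeholders, not the 55-compound data set, and this is not the Phase workflow itself.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(55, 120))                                   # placeholder descriptors
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.3, size=55)  # placeholder activities

pls = PLSRegression(n_components=5)   # a five-factor PLS model, as in the abstract
pls.fit(X, y)
r2 = pls.score(X, y)                  # fitted R^2 on the training set

# Leave-one-out cross-validated predictions -> Q^2
y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}, Q^2 (LOO) = {q2:.3f}")
```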
Towards Automatic Processing of Virtual City Models for Simulations
NASA Astrophysics Data System (ADS)
Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.
2016-10-01
Especially in the field of numerical simulations, such as flow and acoustic simulations, interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have been associated with an extremely high manual and therefore uneconomical effort for the processing of models. Using different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) increases the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce information unnecessary for a numerical simulation.
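A minimal sketch of evaluating a bilinearly blended Coons patch from four boundary curves, the surface construction mentioned above; the boundary curves below are simple parametric placeholders, not building geometry from the paper.

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Bilinearly blended Coons patch.
    c0(u), c1(u): bottom/top boundary curves; d0(v), d1(v): left/right boundary curves.
    Corners must agree: c0(0)=d0(0), c0(1)=d1(0), c1(0)=d0(1), c1(1)=d1(1)."""
    lc = (1 - v) * c0(u) + v * c1(u)                     # ruled surface in u
    ld = (1 - u) * d0(v) + u * d1(v)                     # ruled surface in v
    b = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
         + (1 - u) * v * c1(0) + u * v * c1(1))          # bilinear corner correction
    return lc + ld - b

# Placeholder boundary curves of a slightly curved quadrilateral patch
c0 = lambda u: np.array([u, 0.0, 0.0])
c1 = lambda u: np.array([u, 1.0, 0.2 * np.sin(np.pi * u)])
d0 = lambda v: np.array([0.0, v, 0.0])
d1 = lambda v: np.array([1.0, v, 0.0])
print(coons_patch(c0, c1, d0, d1, 0.5, 0.5))
```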
ERIC Educational Resources Information Center
Stull, Andrew T.; Hegarty, Mary
2016-01-01
This study investigated the development of representational competence among organic chemistry students by using 3D (concrete and virtual) models as aids for teaching students to translate between multiple 2D diagrams. In 2 experiments, students translated between different diagrams of molecules and received verbal feedback in 1 of the following 3…
A Head in Virtual Reality: Development of A Dynamic Head and Neck Model
ERIC Educational Resources Information Center
Nguyen, Ngan; Wilson, Timothy D.
2009-01-01
Advances in computer and interface technologies have made it possible to create three-dimensional (3D) computerized models of anatomical structures for visualization, manipulation, and interaction in a virtual 3D environment. In the past few decades, a multitude of digital models have been developed to facilitate complex spatial learning of the…
Al-Ardah, Aladdin; Alqahtani, Nasser; AlHelal, Abdulaziz; Goodacre, Brian; Swamidass, Rajesh; Garbacea, Antoanela; Lozada, Jaime
2018-05-02
This technique describes a novel approach for planning and augmenting a large bony defect using a titanium mesh (TiMe). A 3-dimensional (3D) surgical model was virtually created from a cone beam computed tomography (CBCT) scan and a wax pattern of the final prosthetic outcome. The required bone volume (horizontal and vertical) was digitally augmented and then 3D printed to create a bone model. The 3D model was then used to contour the TiMe in accordance with the digital augmentation. With the contoured/preformed TiMe on the 3D printed model, a positioning jig was made to aid the placement of the TiMe as planned during surgery. Although this technique does not impact the final outcome of the augmentation procedure, it allows the clinician to virtually design the augmentation, preform and contour the TiMe, and create a positioning jig, reducing surgical time and error.
Image-based 3D reconstruction and virtual environmental walk-through
NASA Astrophysics Data System (ADS)
Sun, Jifeng; Fang, Lixiong; Luo, Ying
2001-09-01
We present a 3D reconstruction method which combines geometry-based modeling, image-based modeling and rendering techniques. The first component is an interactive geometry modeling method that recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms for walking through a virtual space, then design and implement a high-performance multi-threaded wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.
Stereoscopic vascular models of the head and neck: A computed tomography angiography visualization.
Cui, Dongmei; Lynch, James C; Smith, Andrew D; Wilson, Timothy D; Lehman, Michael N
2016-01-01
Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching anatomy includes use of computed tomography angiography (CTA) images of the head and neck to create clinically relevant 3D stereoscopic virtual models. These high resolution images of the arteries can be used in unique and innovative ways to create 3D virtual models of the vasculature as a tool for teaching anatomy. Blood vessel 3D models are presented stereoscopically in a virtual reality environment, can be rotated 360° in all axes, and magnified according to need. In addition, flexible views of internal structures are possible. Images are displayed in a stereoscopic mode, and students view images in a small theater-like classroom while wearing polarized 3D glasses. Reconstructed 3D models enable students to visualize vascular structures with clinically relevant anatomical variations in the head and neck and appreciate spatial relationships among the blood vessels, the skull and the skin. © 2015 American Association of Anatomists.
Mendez, Bernardino M; Chiodo, Michael V; Patel, Parit A
2015-07-01
Virtual surgical planning using three-dimensional (3D) printing technology has improved surgical efficiency and precision. A limitation to this technology is that production of 3D surgical models requires a third-party source, leading to increased costs (up to $4000) and prolonged assembly times (averaging 2-3 weeks). The purpose of this study is to evaluate the feasibility, cost, and production time of customized skull models created by an "in-office" 3D printer for craniofacial reconstruction. Two patients underwent craniofacial reconstruction with the assistance of "in-office" 3D printing technology. Three-dimensional skull models were created from a bioplastic filament with a 3D printer using computed tomography (CT) image data. The cost and production time for each model were measured. For both patients, a customized 3D surgical model was used preoperatively to plan split calvarial bone grafting and intraoperatively to more efficiently and precisely perform the craniofacial reconstruction. The average cost for surgical model production with the "in-office" 3D printer was $25 (cost of bioplastic materials used to create surgical model) and the average production time was 14 hours. Virtual surgical planning using "in office" 3D printing is feasible and allows for a more cost-effective and less time consuming method for creating surgical models and guides. By bringing 3D printing to the office setting, we hope to improve intraoperative efficiency, surgical precision, and overall cost for various types of craniofacial and reconstructive surgery.
3D-Lab: a collaborative web-based platform for molecular modeling.
Grebner, Christoph; Norrby, Magnus; Enström, Jonatan; Nilsson, Ingemar; Hogner, Anders; Henriksson, Jonas; Westin, Johan; Faramarzi, Farzad; Werner, Philip; Boström, Jonas
2016-09-01
The use of 3D information has shown impact in numerous applications in drug design. However, it is often under-utilized and traditionally limited to specialists. We want to change that, and present an approach making 3D information and molecular modeling accessible and easy-to-use 'for the people'. A user-friendly and collaborative web-based platform (3D-Lab) for 3D modeling, including a blazingly fast virtual screening capability, was developed. 3D-Lab provides an interface to automatic molecular modeling, like conformer generation, ligand alignments, molecular dockings and simple quantum chemistry protocols. 3D-Lab is designed to be modular, and to facilitate sharing of 3D-information to promote interactions between drug designers. Recent enhancements to our open-source virtual reality tool Molecular Rift are described. The integrated drug-design platform allows drug designers to instantaneously access 3D information and readily apply advanced and automated 3D molecular modeling tasks, with the aim to improve decision-making in drug design projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markidis, S.; Rizwan, U.
The use of a virtual nuclear control room can be an effective and powerful tool for training personnel working in nuclear power plants. Operators can experience and simulate the functioning of the plant, even in critical situations, without being in a real power plant or running any risk. 3D models can be exported to Virtual Reality formats and then displayed in the Virtual Reality environment, providing an immersive 3D experience. However, two major limitations of this approach are that the 3D models exhibit static textures and are not fully interactive, and therefore cannot be used effectively in training personnel. In this paper we first describe a possible solution for embedding the output of a computer application in a 3D virtual scene, coupling real-world applications and VR systems. The VR system reported here grabs the output of an application running on an X server, creates a texture from the output and then displays it on a screen or a wall in the virtual reality environment. We then propose a simple model for providing interaction between the user in the VR system and the running simulator. This approach is based on the use of an internet-based application that can be commanded from a laptop or tablet PC added to the virtual environment. (authors)
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, it tackles available instruments and technologies to propose an integrated, quick and low-cost solution for virtual architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and image-based modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (multi-image spherical photogrammetry, structure from motion, image-based modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions for simulation and virtual reconstruction. VR tools allow the integration of different technologies and the development of new solutions for virtual navigation. Image-based modeling techniques allow 3D model reconstruction with photo-realistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. Suitably processed and integrated, the data provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy to the photorealistic rendering of the shaped surfaces. Lastly, a new solution for virtual navigation is tested: inside the same environment, it offers the chance to interact with high-resolution oriented spherical panoramas and the reconstructed 3D model at once.
Using Virtual Reality Computer Models to Support Student Understanding of Astronomical Concepts
ERIC Educational Resources Information Center
Barnett, Michael; Yamagata-Lynch, Lisa; Keating, Tom; Barab, Sasha A.; Hay, Kenneth E.
2005-01-01
The purpose of this study was to examine how 3-dimensional (3-D) models of the Solar System supported student development of conceptual understandings of various astronomical phenomena that required a change in frame of reference. In the course described in this study, students worked in teams to design and construct 3-D virtual reality computer…
3D virtual character reconstruction from projections: a NURBS-based approach
NASA Astrophysics Data System (ADS)
Triki, Olfa; Zaharia, Titus B.; Preteux, Francoise J.
2004-05-01
This work has been carried out within the framework of the industrial project TOON, supported by the French government. TOON aims at developing tools for automating traditional 2D cartoon content production. This paper presents preliminary results of the TOON platform. The proposed methodology concerns the issues of 2D/3D reconstruction from a limited number of drawn projections, and 2D/3D manipulation/deformation/refinement of virtual characters. Specifically, we show that the NURBS-based modeling approach developed here offers a well-suited framework for generating deformable 3D virtual characters from incomplete 2D information. Furthermore, crucial functionalities such as animation and non-rigid deformation can also be efficiently handled and solved. Note that user interaction is enabled exclusively in 2D through a multiview constraint specification method. This is fully consistent with traditional cartoon-creation practice and makes it possible to avoid 3D modeling software packages, which are generally complex to manipulate.
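A minimal sketch of evaluating a point on a NURBS curve with the Cox-de Boor recursion, the kind of representation such a modeling approach builds on; the control points, weights, and knot vector are arbitrary illustrative values, not data from the TOON platform.

```python
import numpy as np

def basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=3):
    # Rational combination of control points: weighted basis functions, then normalize
    num = np.zeros(ctrl.shape[1])
    den = 0.0
    for i in range(len(ctrl)):
        b = basis(i, p, u, knots) * weights[i]
        num += b * ctrl[i]
        den += b
    return num / den

ctrl = np.array([[0, 0], [1, 2], [3, 2], [4, 0], [5, 1]], float)   # illustrative values
weights = np.array([1, 1, 2, 1, 1], float)
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], float)
print(nurbs_point(0.25, ctrl, weights, knots))
```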
Knowledge and Valorization of Historical Sites Through 3d Documentation and Modeling
NASA Astrophysics Data System (ADS)
Farella, E.; Menna, F.; Nocerino, E.; Morabito, D.; Remondino, F.; Campi, M.
2016-06-01
The paper presents the first results of an interdisciplinary project related to the 3D documentation, dissemination, valorization and digital access of archaeological sites. Besides the mere 3D documentation aim, the project has two goals: (i) to easily explore and share via the web the references and results of the interdisciplinary work, including the interpretative process and the final reconstruction of the remains; (ii) to promote and valorize archaeological areas using reality-based 3D data and Virtual Reality devices. This method has been verified on the ruins of the archaeological site of Pausilypon, a maritime villa of the Roman period (Naples, Italy). Using Unity3D, the virtual tour of the heritage site was integrated and enriched with the surveyed 3D data, text documents, CAAD reconstruction hypotheses, drawings, photos, etc. In this way, starting from the actual appearance of the ruins (panoramic images), passing through the 3D digital survey models and several other pieces of historical information, the user is able to access virtual contents and reconstructed scenarios, all in a single virtual, interactive and immersive environment. These contents and scenarios allow users to derive documentation and geometrical information, understand the site, perform analyses, see interpretative processes, communicate historical information and valorize the heritage location.
Comparison of Actual Surgical Outcomes and 3D Surgical Simulations
Tucker, Scott; Cevidanes, Lucia; Styner, Martin; Kim, Hyungmin; Reyes, Mauricio; Proffit, William; Turvey, Timothy
2009-01-01
Purpose: The advent of imaging software programs has proved useful for diagnosis, treatment planning, and outcome measurement, but the precision of 3D surgical simulation still needs to be tested. This study was conducted to determine whether virtual surgery performed on 3D models constructed from cone-beam CT (CBCT) can correctly simulate the actual surgical outcome, and to validate the ability of this emerging technology to recreate the orthognathic surgery hard tissue movements in 3 translational and 3 rotational planes of space. Methods: Pre- and post-surgery 3D models were constructed from CBCTs of 14 patients who had combined maxillary advancement and mandibular setback surgery and 6 patients who had one-piece maxillary advancement surgery. The post-surgery and virtually simulated surgery 3D models were registered at the cranial base to quantify differences between simulated and actual surgery models. Hotelling T² tests were used to assess the differences between simulated and actual surgical outcomes. Results: For all anatomic regions of interest, there was no statistically significant difference between the simulated and the actual surgical models. The right lateral ramus was the only region that showed a statistically significant, but small, difference when comparing two- and one-jaw surgeries. Conclusions: Virtual surgical methods were reliably reproduced; oral surgery residents could benefit from virtual surgical training, and computer simulation has the potential to increase predictability in the operating room. PMID:20591553
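A minimal sketch of a one-sample Hotelling T² test on per-patient 3D differences (simulated minus actual positions for one anatomic region), assuming a difference matrix with one row per patient; the data below are random placeholders, not study measurements.

```python
import numpy as np
from scipy import stats

def hotelling_one_sample(diff):
    """Test whether the mean multivariate difference is zero.
    diff: (n, p) array, one row per patient, p difference components (e.g. x, y, z)."""
    n, p = diff.shape
    mean = diff.mean(axis=0)
    cov = np.cov(diff, rowvar=False)
    t2 = n * mean @ np.linalg.solve(cov, mean)          # Hotelling's T^2 statistic
    f = (n - p) / (p * (n - 1)) * t2                    # F-distributed with (p, n - p) df
    p_value = stats.f.sf(f, p, n - p)
    return t2, p_value

rng = np.random.default_rng(0)
diff = rng.normal(0, 0.5, size=(14, 3))   # 14 patients, placeholder x/y/z differences (mm)
t2, p_value = hotelling_one_sample(diff)
print(f"T^2 = {t2:.2f}, p = {p_value:.3f}")
```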
Scalable Multi-Platform Distribution of Spatial 3d Contents
NASA Astrophysics Data System (ADS)
Klimke, J.; Hagedorn, B.; Döllner, J.
2013-09-01
Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, the software and hardware configurations of target systems differ significantly, which makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model with a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. Generating image tiles with this service shifts the 3D rendering process away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from the data transfer complexity, (b) the implementation of client applications is simplified significantly because 3D rendering is encapsulated on the server side, and (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be implemented compactly for various devices and platforms.
Wei, Gaofeng; Tang, Gang; Fu, Zengliang; Sun, Qiuming; Tian, Feng
2010-10-01
The China Mechanical Virtual Human (CMVH) is a human musculoskeletal biomechanical simulation platform based on China Visible Human slice images; it has great practical application significance. This paper introduces the construction method for the CMVH 3D models. A simulation system solution based on Creator/Vega is then put forward to handle the complex and massive data of the 3D models. Finally, combined with MFC technology, the CMVH simulation system is developed and a running simulation scene is presented. This paper provides a new way for the virtual reality application of the CMVH.
Ren, Ji-Xia; Li, Cheng-Ping; Zhou, Xiu-Ling; Cao, Xue-Song; Xie, Yong
2017-08-22
Myeloid cell leukemia-1 (Mcl-1) has been a validated and attractive target for cancer therapy. Over-expression of Mcl-1 in many cancers allows cancer cells to evade apoptosis and contributes to resistance to current chemotherapeutics. Here, we identified new Mcl-1 inhibitors using a multi-step virtual screening approach. First, based on two different ligand-receptor complexes, 20 pharmacophore models were established by using both the 'Receptor-Ligand Pharmacophore Generation' method and a manual feature-building method, and were then carefully validated against a test database. Pharmacophore-based virtual screening (PB-VS) was then performed using the 20 pharmacophore models. In addition, a docking study was used to predict the possible binding poses of compounds, and the docking parameters were optimized before performing docking-based virtual screening (DB-VS). Moreover, a 3D QSAR model was established from 55 aligned Mcl-1 inhibitors. The 55 inhibitors, which share the same scaffold, were docked into the Mcl-1 active site before alignment, and the inhibitors with plausible binding conformations were then aligned. For the training set, the 3D QSAR model gave a correlation coefficient r² of 0.996; for the test set, the correlation coefficient r² was 0.812. The developed 3D QSAR model was therefore a good model that could be applied in 3D QSAR-based virtual screening (QSARD-VS). After sequential filtering with the above three virtual screening methods, 23 potential inhibitors with novel scaffolds were identified. Furthermore, we discuss in detail the mapping of two potent compounds onto the pharmacophore models and the 3D QSAR model, and the interactions between the compounds and active-site residues.
Enhanced LOD Concepts for Virtual 3d City Models
NASA Astrophysics Data System (ADS)
Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.
2013-09-01
Virtual 3D city models contain digital three-dimensional representations of city objects such as buildings, streets or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable: one that effectively supports the partitioning of a complete model into alternative models of different complexity and provides metadata addressing the informational content, complexity and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the building interior and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.
NASA Astrophysics Data System (ADS)
Simonetto, E.; Froment, C.; Labergerie, E.; Ferré, G.; Séchet, B.; Chédorge, H.; Cali, J.; Polidori, L.
2013-07-01
Terrestrial Laser Scanning (TLS), 3-D modeling and its Web visualization are the three key steps needed to store and grant free and wide access to cultural heritage, as highlighted in many recent examples. The goal of this study is to set up 3-D Web resources for "virtually" visiting the exterior of the Abbaye de l'Epau, an old French abbey which has both a rich history and delicate architecture. The virtuality is considered in two ways: flowing navigation around the abbey in a virtual reality environment, and a game activity using augmented reality. First of all, the data acquisition consists of a GPS and tacheometry survey, terrestrial laser scanning and photograph acquisition. After data pre-processing, the meshed and textured 3-D model is generated using the 3D Reshaper commercial software. The virtual reality visit and augmented reality animation are then created using the Unity software. This work shows the interest of such tools in bringing out the regional cultural heritage and making it attractive to the public.
Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj
2008-03-01
The real-time requirement means that the simulation should be able to follow the actions of the user, who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires development of appropriate hardware and software to connect the medical instrumentarium with the computer, and to operate the computer through the instrumentarium thus connected and sophisticated multimedia interfaces.
Value of 3D printing for the comprehension of surgical anatomy.
Marconi, Stefania; Pugliese, Luigi; Botti, Marta; Peri, Andrea; Cavazzi, Emma; Latteri, Saverio; Auricchio, Ferdinando; Pietrabissa, Andrea
2017-10-01
In a preliminary experience, we claimed the potential value of 3D printing technology for pre-operative counseling and surgical planning. However, no objective analysis has ever assessed its additional benefit in transferring anatomical information from radiology to final users. We decided to validate the pre-operative use of 3D-printed anatomical models in patients with solid organ diseases as a new tool to deliver morphological information. Fifteen patients scheduled for laparoscopic splenectomy, nephrectomy, or pancreatectomy were selected and, for each, a full-size 3D virtual anatomical object was reconstructed from a contrast-enhanced MDCT (Multiple Detector Computed Tomography) and then prototyped using a 3D printer. After carefully evaluating, in a random sequence, conventional contrast MDCT scans, virtual 3D reconstructions on a flat monitor, and 3D-printed models of the same anatomy for each selected case, thirty subjects with different expertise in radiological imaging (10 medical students, 10 surgeons and 10 radiologists) were administered a multiple-item questionnaire. Crucial issues for the anatomical understanding and the pre-operative planning of the scheduled procedure were addressed. The visual and tactile inspection of 3D models allowed the best anatomical understanding, with faster and clearer comprehension of the surgical anatomy. As expected, less experienced medical students perceived the highest benefit (53.9% ± 4.14 of correct answers with 3D-printed models, compared to 53.4% ± 4.6 with virtual models and 45.5% ± 4.6 with MDCT), followed by surgeons and radiologists. The average time participants spent assessing the 3D model was shorter (60.67 ± 25.5 s) than that for the corresponding virtual 3D reconstruction (70.8 ± 28.18 s) or the conventional MDCT scan (127.04 ± 35.91 s). 3D-printed models help to transfer complex anatomical information to clinicians, proving useful for pre-operative planning, intra-operative navigation and surgical training purposes.
A Downloadable Three-Dimensional Virtual Model of the Visible Ear
Wang, Haobing; Merchant, Saumil N.; Sorensen, Mads S.
2008-01-01
Purpose To develop a three-dimensional (3-D) virtual model of a human temporal bone and surrounding structures. Methods A fresh-frozen human temporal bone was serially sectioned and digital images of the surface of the tissue block were recorded (the ‘Visible Ear’). The image stack was resampled at a final resolution of 50 × 50 × 50/100 µm/voxel, registered in custom software and segmented in PhotoShop® 7.0. The segmented image layers were imported into Amira® 3.1 to generate smooth polygonal surface models. Results The 3-D virtual model presents the structures of the middle, inner and outer ears in their surgically relevant surroundings. It is packaged within a cross-platform freeware, which allows for full rotation, visibility and transparency control, as well as the ability to slice the 3-D model open at any section. The appropriate raw image can be superimposed on the cleavage plane. The model can be downloaded at https://research.meei.harvard.edu/Otopathology/3dmodels/ PMID:17124433
Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier
2016-08-01
The method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing which should be used to create optimal virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65). Three models were successfully printed on desktop 3D printers with good quality and accurate representation of the virtual 3D models. We recommend using BP segmentation with either MRA or bSSFP source datasets to create virtual 3D models for 3D printing. Desktop 3D printers can offer good quality printed models with accurate representation of anatomic detail.
A 3-D Virtual Reality Model of the Sun and the Moon for E-Learning at Elementary Schools
ERIC Educational Resources Information Center
Sun, Koun-Tem; Lin, Ching-Ling; Wang, Sheng-Min
2010-01-01
The relative positions of the sun, moon, and earth, their movements, and their relationships are abstract and difficult to understand astronomical concepts in elementary school science. This study proposes a three-dimensional (3-D) virtual reality (VR) model named the "Sun and Moon System." This e-learning resource was designed by…
Virtual Reality and Learning: Where Is the Pedagogy?
ERIC Educational Resources Information Center
Fowler, Chris
2015-01-01
The aim of this paper was to build upon Dalgarno and Lee's model or framework of learning in three-dimensional (3-D) virtual learning environments (VLEs) and to extend their road map for further research in this area. The enhanced model shares the common goal with Dalgarno and Lee of identifying the learning benefits from using 3-D VLEs. The…
Villa, C; Olsen, K B; Hansen, S H
2017-09-01
Post-mortem CT scanning (PMCT) was introduced at several forensic medical institutions many years ago and has proved to be a useful tool. 3D models of bones, skin, internal organs and bullet paths can rapidly be generated using post-processing software. These 3D models reflect the individual physiognomy and can be used to create whole-body 3D virtual animations. In this way, virtual reconstructions of the probable ante-mortem postures of victims can be made, contributing to an understanding of the sequence of events. This procedure is demonstrated in two victims of gunshot injuries. Case #1 was a man showing three perforating gunshot wounds, who died due to the injuries of the incident. Whole-body PMCT was performed and 3D reconstructions of bones, relevant internal organs and bullet paths were generated. Using 3ds Max software and a human anatomy 3D model, a virtual animated body was built and probable ante-mortem postures visualized. Case #2 was a man who survived the incident, presenting three perforating gunshot wounds: one in the left arm and two in the thorax. Only CT scans of the thorax, abdomen and the injured arm were provided by the hospital. Therefore, a whole-body 3D model reflecting the anatomical proportions of the patient was made by combining the actual bones of the victim with those obtained from the human anatomy 3D model. The resulting 3D model was used for the animation process. Several probable postures were also visualized in this case. It was shown that in Case #1 the lesions and the bullet path were not consistent with an upright standing position; instead, the victim was slightly bent forward, i.e. he was sitting or running when he was shot. In Case #2, one of the bullets could have passed through the arm and continued into the thorax. In conclusion, specialized 3D modelling and animation techniques allow for the reconstruction of ante-mortem postures based on both PMCT and clinical CT. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ren, Yilong; Duan, Xitong; Wu, Lei; He, Jin; Xu, Wu
2017-06-01
With the development of the "VR+" era, the traditional virtual assembly system for power equipment can no longer satisfy our growing needs. Based on an analysis of the traditional virtual assembly system for electric power equipment and of the application of VR technology to such systems in our country, this paper puts forward a scheme for establishing a virtual assembly system for power equipment. First, the information about the power equipment is acquired; then OpenGL and multi-texture technology are used to build a 3D solid graphics library. After the three-dimensional modeling is completed, the 3D solid graphics generation program is packaged as a dynamic link library (DLL), which modularizes the power equipment model library and hides the generation algorithm of the model library. Once the 3D power equipment model database is established, the virtual assembly system for 3D power equipment is set up to separate the assembly operation of the power equipment from physical space. At the same time, to address the shortcomings of traditional gesture recognition algorithms, we propose a gesture recognition algorithm for the data glove based on a BP neural network optimized by an improved PSO algorithm. Finally, the virtual assembly system for power equipment can truly achieve multi-channel interaction.
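A minimal sketch of the PSO-trained BP-network idea mentioned above, using synthetic stand-in glove data and a plain global-best PSO rather than the authors' improved variant (feature dimensions, class counts and all hyperparameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for data-glove readings: 10 sensor values per sample,
# 4 gesture classes (the real feature set and classes are not given in the paper).
X = rng.normal(size=(200, 10))
y = rng.integers(0, 4, size=200)

N_IN, N_HID, N_OUT = 10, 8, 4
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # total number of BP-network weights

def unpack(w):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    return W1, b1, W2, b2

def loss(w):
    """Cross-entropy of a one-hidden-layer network; this is what PSO minimizes."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

# Plain global-best PSO over the network weight vector.
n_particles, iters, inertia, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.normal(scale=0.5, size=(n_particles, DIM))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best training loss:", pbest_val.min())
```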
[Development of a virtual model of fibro-bronchoscopy].
Solar, Mauricio; Ducoing, Eugenio
2011-09-01
A virtual model of fibro-bronchoscopy is reported. The virtual model represents the trachea and the bronchi in 3D, creating a virtual world of the bronchial tree. The bronchoscope is modeled to move over the bronchial tree, imitating the displacement and rotation of the real bronchoscope. The parameters of the virtual model were gradually adjusted according to expert opinion and allowed the training of specialists with a highly realistic virtual bronchoscope. The virtual bronchial tree provides realistic cues regarding the movement of the bronchoscope, creating the illusion that the virtual instrument behaves like the real one, with all the cost benefits that this implies.
Quasi-Facial Communication for Online Learning Using 3D Modeling Techniques
ERIC Educational Resources Information Center
Wang, Yushun; Zhuang, Yueting
2008-01-01
Online interaction with 3D facial animation is an alternative way of face-to-face communication for distance education. 3D facial modeling is essential for virtual educational environments establishment. This article presents a novel 3D facial modeling solution that facilitates quasi-facial communication for online learning. Our algorithm builds…
Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.
Villarrubia, J S; Tondare, V N; Vladár, A E
2016-01-01
The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples, i.e., mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced line height and width results within approximately 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
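A generic sketch of how a rough profile with a prescribed power spectral density can be synthesized by assigning random phases in the frequency domain (normalization conventions vary, and the wrapping of the skin around the trapezoidal line is not reproduced here):

```python
import numpy as np

def rough_profile(n, dx, psd, rng=None):
    """Generate a periodic 1D rough height profile whose power spectral
    density follows the callable `psd(f)` (spatial frequency in 1/length).

    A generic spectral-synthesis sketch; the paper wraps such a rough skin
    around a near-trapezoidal line, which is not reproduced here.
    """
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(n, d=dx)
    amp = np.sqrt(psd(freqs) * n / dx)        # amplitude consistent with the target PSD
    amp[0] = 0.0                              # enforce a zero-mean profile
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n=n)

# Example: a simple 1/f^2 (fractal-like) roughness spectrum
profile = rough_profile(1024, dx=1.0, psd=lambda f: 1.0 / np.maximum(f, 1e-6) ** 2)
```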
Temkin, Bharti; Acosta, Eric; Malvankar, Ameya; Vaidyanath, Sreeram
2006-04-01
The Visible Human digital datasets make it possible to develop computer-based anatomical training systems that use virtual anatomical models (virtual body structures-VBS). Medical schools are combining these virtual training systems and classical anatomy teaching methods that use labeled images and cadaver dissection. In this paper we present a customizable web-based three-dimensional anatomy training system, W3D-VBS. W3D-VBS uses the National Library of Medicine's (NLM) Visible Human Male datasets to interactively locate, explore, select, extract, highlight, label, and visualize realistic 2D (using axial, coronal, and sagittal views) and 3D virtual structures. A real-time self-guided virtual tour of the entire body is designed to provide detailed anatomical information about structures, substructures, and proximal structures. The system thus facilitates learning of visuospatial relationships at a level of detail that may not be possible by any other means. The use of volumetric structures allows for repeated real-time virtual dissections, from any angle, at the convenience of the user. Volumetric (3D) virtual dissections are performed by adding, removing, highlighting, and labeling individual structures (and/or entire anatomical systems). The resultant virtual explorations (consisting of anatomical 2D/3D illustrations and animations), with user-selected highlighting colors and label positions, can be saved and used for generating lesson plans and evaluation systems. Tracking users' progress using the evaluation system helps customize the curriculum, making W3D-VBS a powerful learning tool. Our plan is to incorporate other Visible Human segmented datasets, especially datasets with higher resolutions, which make it possible to include finer anatomical structures such as nerves and small vessels. (c) 2006 Wiley-Liss, Inc.
DHM simulation in virtual environments: a case-study on control room design.
Zamberlan, M; Santos, V; Streit, P; Oliveira, J; Cury, R; Negri, T; Pastura, F; Guimarães, C; Cid, G
2012-01-01
This paper presents the workflow developed for the application of serious games in the design of complex cooperative work settings. The project was based on ergonomic studies and the development of a control room within a participative design process. Our main concerns were the 3D virtual human representation acquired from 3D scanning, human interaction, workspace layout, and equipment designed according to ergonomics standards. Using the Unity3D platform to design the virtual environment, the virtual human model can be controlled by users in a dynamic scenario in order to evaluate the new work settings and simulate work activities. The results obtained showed that this virtual technology can drastically change the design process by improving the level of interaction between final users, managers, and the human factors team.
Real-time interactive virtual tour on the World Wide Web (WWW)
NASA Astrophysics Data System (ADS)
Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi
2003-12-01
The web-based virtual tour has become a desirable and in-demand application, yet a challenging one due to the nature of the web application's running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process and high bandwidth and computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and the virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only a fixed-point look-around and zooming in and out rather than a 'walk-around', which is a very important feature for providing an immersive experience to virtual tourists. The web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space from several snapshots of conventional photos.
Measurable realistic image-based 3D mapping
NASA Astrophysics Data System (ADS)
Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.
2011-12-01
Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides the virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is the limited coverage of detail, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. Image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive with users and also creates an interesting immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in terms of photos; topographic and terrain attributes, such as shapes and heights, are omitted. This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable realistic image-based (MRI) system can produce. The major contribution here is the implementation of measurable images on 3D maps to obtain various measurements from real scenes.
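As a reminder of the underlying stereo geometry, the depth and 3D position of a matched point in rectified, geo-referenced stereo images follow from the familiar disparity relation; a textbook-style sketch (not the paper's MMS processing chain) is:

```python
def stereo_point(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Recover a 3D point (in the left-camera frame) from a rectified stereo pair.

    Illustrative textbook geometry only; the paper's own processing chain
    is not described in enough detail to reproduce here.
    """
    disparity = u_left - u_right           # pixels; assumes rectified images
    if disparity <= 0:
        raise ValueError("point at infinity or invalid match")
    z = focal_px * baseline_m / disparity  # depth along the optical axis
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z
```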
Virtual Reality Website of Indonesia National Monument and Its Environment
NASA Astrophysics Data System (ADS)
Wardijono, B. A.; Hendajani, F.; Sudiro, S. A.
2017-02-01
The National Monument (Monumen Nasional) is Indonesia's national monument, located in Jakarta. It is a symbol of Jakarta and a source of pride for the people of Jakarta and of Indonesia. The National Monument also houses a museum about the history of the country. To provide information to the general public, in this research we created and developed 3D graphics models of the National Monument and its surrounding environment. Virtual reality technology was used to display the visualization of the National Monument and the surrounding environment in 3D graphics form. The latest programming technology makes it possible to display 3D objects via an internet browser. This research used Unity3D and WebGL to make virtual reality models that can be implemented and shown on a website. The result of this research is a three-dimensional website of the National Monument and the objects in its surrounding environment that can be displayed through a web browser. The virtual reality of the whole set of objects was divided into a number of scenes so that it can be displayed as a real-time visualization.
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2015-03-01
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose obtained from a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s for each pose estimation), which can be improved by implementation in C++. Error analysis produced a 3-mm distance error and a 2.5-degree orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
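A simplified sketch of pose recovery by minimizing reprojection error over matched 3D-model/2D-image points; this is a generic, unconstrained least-squares version, not the constrained bundle adjustment used by the authors, and the function names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_residuals(params, pts3d, pts2d, K):
    """Pixel residuals of 3D model points projected with pose (rvec, tvec)."""
    rvec, tvec = params[:3], params[3:]
    cam = (rodrigues(rvec) @ pts3d.T).T + tvec       # model points in camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]                # perspective division
    return (proj - pts2d).ravel()

def estimate_pose(pts3d, pts2d, K, init=np.zeros(6)):
    """Least-squares pose from matched 3D model points and 2D image features."""
    sol = least_squares(reprojection_residuals, init, args=(pts3d, pts2d, K))
    return sol.x[:3], sol.x[3:]                      # rotation vector, translation
```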
Realistic terrain visualization based on 3D virtual world technology
NASA Astrophysics Data System (ADS)
Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai
2009-09-01
The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.
The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models
NASA Astrophysics Data System (ADS)
Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain
2014-05-01
The Castle of Engelbourg was built at the beginning of the 13th century at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its repeated destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a unique feature in the regional castle landscape. Visible from the valley, it was named "the Eye of the Witch" and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to enhance the value of the vestiges. A key objective, among the numerous planned works, was to produce a 3D model of the site in its current state, in other words an "as-surveyed" virtual model, exploitable from a cultural and tourist point of view as well as by scientists in archaeological research. The ICube/INSA lab team was responsible for producing this model, from data acquisition through to delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archival data stemming from series of former excavations. The objectives of this project were the following: • acquisition of 3D digital data of the site and 3D modelling • digitization of the 2D archaeological data and its integration into the 3D model • implementation of a database connected to the 3D model • a virtual visit of the site. The results obtained allow every 3D object to be visualized individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail. The 3D model, integrated into a GIS, is now a precious means of communication for promoting the site. Accessible to all, including people far from the site, it allows the castle and its history to be discovered in an educational and relevant way. From an archaeological point of view, the 3D model provides an overall view of, and perspective on, the constitution of the site that a 2D document cannot easily offer. The 3D navigation and the integration of 2D data into the model allow the vestiges to be analysed in another way, contributing to the faster establishment of new hypotheses. Complementary to other methods already exploited in archaeology, analysis through 3D visualization offers scientists a significant saving of time, which they can then dedicate to the more thorough study of hypotheses that had been set aside. In parallel, we created several panoramas and set up a virtual, interactive visit of the site. To perpetuate this project and to offer future users the means to continue and update this study, we tested and documented the processing methodologies. We were thus able to release clear, orderly procedures applicable to the Engelbourg case as well as to other similar studies. Finally, some hypotheses make it possible to virtually reconstruct first versions of the original state of the castle.
Merema, B J; Kraeima, J; Ten Duis, K; Wendt, K W; Warta, R; Vos, E; Schepers, R H; Witjes, M J H; IJpma, F F A
2017-11-01
An innovative procedure for the development of 3D patient-specific implants with drilling guides for acetabular fracture surgery is presented. By using CT data and 3D surgical planning software, a virtual model of the fractured pelvis was created. During this process the fracture was virtually reduced. Based on the reduced fracture model, patient-specific titanium plates including polyamide drilling guides were designed, 3D printed and milled for intra-operative use. One of the advantages of this procedure is that the personalised plates could be tailored to both the shape of the pelvis and the type of fracture. The optimal screw directions and sizes were predetermined in the 3D model. The virtual plan was translated towards the surgical procedure by using the surgical guides and patient-specific osteosynthesis. Besides the description of the newly developed multi-disciplinary workflow, a clinical case example is presented to demonstrate that this technique is feasible and promising for the operative treatment of complex acetabular fractures. Copyright © 2017 Elsevier Ltd. All rights reserved.
X3DOM as Carrier of the Virtual Heritage
NASA Astrophysics Data System (ADS)
Jung, Y.; Behr, J.; Graf, H.
2011-09-01
Virtual Museums (VM) are a new model of communication that aims at creating a personalized, immersive, and interactive way to enhance our understanding of the world around us. The term "VM" is a shortcut that encompasses various types of digital creations. One of the carriers for communicating virtual heritage at the future internet level, as a de-facto standard, is browser front-ends presenting the content and assets of museums. A major driving technology for the documentation and presentation of heritage-driven media is real-time 3D content, which imposes new strategies for its inclusion on the web. 3D content must become a first-class web media type that can be created, modified, and shared in the same way as text, images, audio, and video are handled on the web right now. A new integration model based on DOM integration into the web browser's architecture opens up new possibilities for declarative 3D content on the web and paves the way for new application scenarios for virtual heritage at the future internet level. With special regard to the X3DOM project as an enabling technology for declarative 3D in HTML, this paper describes application scenarios and analyses the technological requirements for efficient presentation and manipulation of virtual heritage assets on the web.
Virtual Reconstruction of Lost Architectures: from the Tls Survey to AR Visualization
NASA Astrophysics Data System (ADS)
Quattrini, R.; Pierdicca, R.; Frontoni, E.; Barcaglioni, R.
2016-06-01
The exploitation of high-quality 3D models for the dissemination of archaeological heritage is a currently investigated topic, although Mobile Augmented Reality platforms for historical architecture that would allow low-cost pipelines for effective content are not yet available. The paper presents a virtual anastylosis, starting from historical sources and from a 3D model based on a TLS survey. Several efforts and outputs in augmented or immersive environments exploiting this reconstruction are discussed. The work demonstrates the feasibility of a 3D reconstruction approach for complex architectural shapes starting from point clouds, and of its AR/VR exploitation, allowing superimposition with archaeological evidence. The major contributions consist in the presentation and discussion of a pipeline from the virtual model to its simplification, showing several outcomes and also comparing the supported data qualities and the advantages and disadvantages due to MAR and VR limitations.
A 3D virtual reality simulator for training of minimally invasive surgery.
Mi, Shao-Hua; Hou, Zeng-Gunag; Yang, Fan; Xie, Xiao-Liang; Bian, Gui-Bin
2014-01-01
For the last decade, remarkable progress has been made in the field of cardiovascular disease treatment. However, these complex medical procedures require a combination of rich experience and technical skill. In this paper, a 3D virtual reality simulator for core skills training in minimally invasive surgery is presented. The system can generate realistic 3D vascular models segmented from patient datasets, including a beating heart, and provides a real-time force computation and force feedback module for surgical simulation. Instruments, such as a catheter or guide wire, are represented by a multi-body mass-spring model. In addition, a realistic user interface with multiple windows and real-time 3D views has been developed. Moreover, the simulator is also provided with a human-machine interaction module that gives doctors the sense of touch during surgery training and enables them to control the motion of a virtual catheter/guide wire inside a complex vascular model. Experimental results show that the simulator is suitable for minimally invasive surgery training.
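A minimal sketch of the multi-body mass-spring idea for a guide wire: a chain of point masses joined by linear springs, integrated with semi-implicit Euler. Parameter values and the clamped-base boundary condition are illustrative assumptions, not the simulator's actual settings:

```python
import numpy as np

def simulate_wire(n_nodes=20, rest_len=0.005, k=500.0, damping=0.05,
                  mass=0.001, dt=1e-3, steps=2000,
                  tip_force=np.array([0.0, 0.0, -0.02])):
    """Semi-implicit Euler integration of a guide wire modelled as a chain of
    point masses joined by linear springs (a generic sketch, not the simulator's
    actual constitutive model).
    """
    pos = np.zeros((n_nodes, 3))
    pos[:, 0] = np.arange(n_nodes) * rest_len        # straight wire along x
    vel = np.zeros_like(pos)
    for _ in range(steps):
        forces = np.zeros_like(pos)
        seg = pos[1:] - pos[:-1]
        length = np.linalg.norm(seg, axis=1, keepdims=True)
        direction = seg / np.maximum(length, 1e-9)
        spring = k * (length - rest_len) * direction  # Hooke's law per segment
        forces[:-1] += spring                         # pulls node i toward node i+1
        forces[1:] -= spring                          # and node i+1 toward node i
        forces += -damping * vel                      # simple viscous damping
        forces[-1] += tip_force                       # external load at the tip
        vel[1:] += dt * forces[1:] / mass             # node 0 is clamped (insertion point)
        pos[1:] += dt * vel[1:]
    return pos

deflected_shape = simulate_wire()                     # positions of the bent wire
```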
ERIC Educational Resources Information Center
Barbalios, N.; Ioannidou, I.; Tzionas, P.; Paraskeuopoulos, S.
2013-01-01
This paper introduces a realistic 3D model supported virtual environment for environmental education, that highlights the importance of water resource sharing by focusing on the tragedy of the commons dilemma. The proposed virtual environment entails simulations that are controlled by a multi-agent simulation model of a real ecosystem consisting…
Wu, Xin-Bao; Wang, Jun-Qiang; Zhao, Chun-Peng; Sun, Xu; Shi, Yin; Zhang, Zi-An; Li, Yu-Neng; Wang, Man-Yi
2015-02-20
Old pelvis fractures are among the most challenging fractures to treat because of their complex anatomy, difficult-to-access surgical sites, and the relatively low incidence of such cases. Proper evaluation and surgical planning are necessary to achieve the pelvic ring symmetry and stable fixation of the fracture. The goal of this study was to assess the use of three-dimensional (3D) printing techniques for surgical management of old pelvic fractures. First, 16 dried human cadaveric pelvises were used to confirm the anatomical accuracy of the 3D models printed based on radiographic data. Next, nine clinical cases between January 2009 and April 2013 were used to evaluate the surgical reconstruction based on the 3D printed models. The pelvic injuries were all type C, and the average time from injury to reconstruction was 11 weeks (range: 8-17 weeks). The workflow consisted of: (1) Printing patient-specific bone models based on preoperative computed tomography (CT) scans, (2) virtual fracture reduction using the printed 3D anatomic template, (3) virtual fracture fixation using Kirschner wires, and (4) preoperatively measuring the osteotomy and implant position relative to landmarks using the virtually defined deformation. These models aided communication between surgical team members during the procedure. This technique was validated by comparing the preoperative planning to the intraoperative procedure. The accuracy of the 3D printed models was within specification. Production of a model from standard CT DICOM data took 7 hours (range: 6-9 hours). Preoperative planning using the 3D printed models was feasible in all cases. Good correlation was found between the preoperative planning and postoperative follow-up X-ray in all nine cases. The patients were followed for 3-29 months (median: 5 months). The fracture healing time was 9-17 weeks (mean: 10 weeks). No delayed incision healing, wound infection, or nonunions occurred. The results were excellent in two cases, good in five, and poor in two based on the Majeed score. The 3D printing planning technique for pelvic surgery was successfully integrated into a clinical workflow to improve patient-specific preoperative planning by providing a visual and haptic model of the injury and allowing patient-specific adaptation of each osteosynthesis implant to the virtually reduced pelvis.
Parchi, Paolo Domenico; Ferrari, Vincenzo; Piolanti, Nicola; Andreani, Lorenzo; Condino, Sara; Evangelisti, Gisberto; Lisanti, Michele
2013-09-01
Each year approximately 1 million total hip replacements (THR) are performed worldwide. A percentage of failures due to the surgical approach and imprecise implant placement still exists, and these result in several serious complications. We propose an approach to plan, simulate, and assist prosthesis implantation for difficult THR cases based on 3-D virtual models generated by segmenting patients' CT images, 3-D solid models obtained by rapid prototyping (RP), and virtual procedure simulation. We carried out 8 THR with the aid of 3-D reconstruction and RP. After each procedure a questionnaire was submitted to the surgeon to assess the perceived added value of the technology. In all cases, the surgeon evaluated the 3-D model as useful for performing the planning. The clinical results showed a mean increase in the Harris Hip Score of about 42.5 points. The mean prototyping time was 7.3 hours (min 3.5 hours, max 9.3 hours). The mean surgery time was 65 minutes (min 50 minutes, max 88 minutes). Our study suggests that meticulous preoperative planning is necessary when facing great aberration of the joint; in the absence of normal anatomical landmarks a CT scan is mandatory, and 3-D reconstruction with a solid model is useful.
Openwebglobe 2: Visualization of Complex 3D-GEODATA in the (mobile) Webbrowser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large number of (mobile) web clients. The paper shows the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.
Liu, Kaijun; Fang, Binji; Wu, Yi; Li, Ying; Jin, Jun; Tan, Liwen; Zhang, Shaoxiang
2013-09-01
Anatomical knowledge of the larynx region is critical for understanding laryngeal disease and performing required interventions. Virtual reality is a useful method for surgical education and simulation. Here, we assembled segmented cross-section slices of the larynx region from the Chinese Visible Human dataset. The laryngeal structures were precisely segmented manually as 2D images, then reconstructed and displayed as 3D images in the virtual reality Dextrobeam system. Using visualization and interaction with the virtual reality modeling language model, a digital laryngeal anatomy instruction was constructed using HTML and JavaScript languages. The volume larynx models can thus display an arbitrary section of the model and provide a virtual dissection function. This networked teaching system of the digital laryngeal anatomy can be read remotely, displayed locally, and manipulated interactively.
[Research and application of computer-aided technology in restoration of maxillary defect].
Cheng, Xiaosheng; Liao, Wenhe; Hu, Qingang; Wang, Qian; Dai, Ning
2008-08-01
This paper presents a new method of designing a restoration model for maxillectomy defects using computer-aided technology. Firstly, a 3D maxillectomy triangle mesh model is constructed from helical CT data. Secondly, the triangle mesh model is transformed into an initial computer-aided design (CAD) model of the maxillectomy using reverse engineering software. Thirdly, the 3D virtual restoration model of the maxillary defect is obtained after designing and adjusting the initial CAD model in CAD software according to the patient's actual condition. The 3D virtual restoration therefore fits very well with the damaged part of the maxilla. The exported design data can be manufactured using rapid prototyping and foundry technology. Finally, the results proved that this method is effective and feasible.
Computer Vision Assisted Virtual Reality Calibration
NASA Technical Reports Server (NTRS)
Kim, W.
1999-01-01
A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.
A desktop system of virtual morphometric globes for Mars and the Moon
NASA Astrophysics Data System (ADS)
Florinsky, I. V.; Filippov, S. V.
2017-03-01
Global morphometric models can be useful for earth and planetary studies. Virtual globes - programs implementing interactive three-dimensional (3D) models of planets - are increasingly used in geo- and planetary sciences. We describe the development of a desktop system of virtual morphometric globes for Mars and the Moon. As the initial data, we used 15'-gridded global digital elevation models (DEMs) extracted from the Mars Orbiter Laser Altimeter (MOLA) and the Lunar Orbiter Laser Altimeter (LOLA) gridded archives. For two celestial bodies, we derived global digital models of several morphometric attributes, such as horizontal curvature, vertical curvature, minimal curvature, maximal curvature, and catchment area. To develop the system, we used Blender, the free open-source software for 3D modeling and visualization. First, a 3D sphere model was generated. Second, the global morphometric maps were imposed to the sphere surface as textures. Finally, the real-time 3D graphics Blender engine was used to implement rotation and zooming of the globes. The testing of the developed system demonstrated its good performance. Morphometric globes clearly represent peculiarities of planetary topography, according to the physical and mathematical sense of a particular morphometric variable.
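A sketch of the kind of morphometric derivation involved, computing horizontal (plan) curvature from a DEM grid by finite differences; formulas and sign conventions differ between authors, so this is only illustrative of the approach, not the exact models used for the globes:

```python
import numpy as np

def horizontal_curvature(dem, cell):
    """Horizontal (plan) curvature of a DEM grid by finite differences.

    `dem` is a 2D elevation array and `cell` the grid spacing in the same
    length units. Sign conventions and formulas vary between authors; this
    follows a common Evans-Young style expression and is only a sketch of
    the kind of derivation behind the morphometric globes.
    """
    p, q = np.gradient(dem, cell)             # first partial derivatives
    r, s1 = np.gradient(p, cell)              # second derivatives of p
    s2, t = np.gradient(q, cell)              # second derivatives of q
    s = 0.5 * (s1 + s2)                       # symmetrised mixed derivative
    num = q**2 * r - 2 * p * q * s + p**2 * t
    den = (p**2 + q**2) * np.sqrt(1 + p**2 + q**2)
    # flat cells (den ~ 0) get zero curvature to avoid division by zero
    return -np.divide(num, den, out=np.zeros_like(dem, dtype=float), where=den > 1e-12)
```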
Tetsworth, Kevin; Block, Steve; Glatt, Vaida
2017-01-01
3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case. PMID:28220752
Matta, Ragai-Edward; von Wilmowsky, Cornelius; Neuhuber, Winfried; Lell, Michael; Neukam, Friedrich W; Adler, Werner; Wichmann, Manfred; Bergauer, Bastian
2016-05-01
Multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT) are indispensable imaging techniques in advanced medicine. The possibility of creating virtual and corporal three-dimensional (3D) models enables detailed planning in craniofacial and oral surgery. The objective of this study was to evaluate the impact of different scan protocols for CBCT and MSCT on virtual 3D model accuracy using a software-based evaluation method that excludes human measurement errors. MSCT and CBCT scans with different manufacturers' predefined scan protocols were obtained from a human lower jaw and were superimposed with a master model generated by an optical scan of an industrial noncontact scanner. To determine the accuracy, the mean and standard deviations were calculated, and t-tests were used for comparisons between the different settings. Averaged over 10 repeated X-ray scans per method and 19 measurement points per scan (n = 190), it was found that the MSCT scan protocol 140 kV delivered the most accurate virtual 3D model, with a mean deviation of 0.106 mm compared to the master model. Only the CBCT scans with 0.2-voxel resolution delivered a similar accurate 3D model (mean deviation 0.119 mm). Within the limitations of this study, it was demonstrated that the accuracy of a 3D model of the lower jaw depends on the protocol used for MSCT and CBCT scans. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
The electronic-commerce-oriented virtual merchandise model
NASA Astrophysics Data System (ADS)
Fang, Xiaocui; Lu, Dongming
2004-03-01
Electronic commerce has become the trend of commercial activity. Provided with a virtual reality interface, electronic commerce gains better expressive capacity and means of interaction. But in most applications of virtual reality technology in EC, the 3D model is only an appearance description of the merchandise; it carries almost no commerce or interaction information. This results in a disjunction between the virtual model and the commerce information. We therefore present the Electronic Commerce oriented Virtual Merchandise Model (ECVMM), which combines commerce information, interaction information, and shape information of the virtual merchandise in one model. With its abundant information, ECVMM provides better support for obtaining and communicating information in electronic commerce.
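As a toy illustration of the idea of bundling geometry with commerce and interaction information in one merchandise model, a sketch follows; the field names are invented for illustration, since the paper does not specify a schema:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMerchandise:
    """Toy merchandise record that bundles the 3D appearance with commerce and
    interaction information, in the spirit of ECVMM. Field names are invented
    for illustration; the paper does not define a concrete schema.
    """
    model_uri: str                      # path/URL of the 3D geometry (e.g. a VRML/X3D file)
    price: float
    currency: str = "USD"
    stock: int = 0
    vendor: str = ""
    interactions: dict = field(default_factory=dict)  # e.g. {"rotate": True, "open_drawer": True}

item = VirtualMerchandise(model_uri="models/chair.wrl", price=129.0,
                          stock=12, vendor="ExampleShop",
                          interactions={"rotate": True, "open_drawer": True})
```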
Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun
2005-07-01
This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using virtual reality modelling language (VRML) and JavaScript language, with a focus on how to realize the dynamic concision of 3D medical model with script node and sensor node in VRML. The 3D reconstruction and concision of body internal organs can be built with such high quality that they are better than those obtained from the traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for man-computer interaction in real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirement for the medical observation of 3D reconstruction and have a promising prospect in the fields of medical imaging.
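A minimal sketch of the VRML mechanism referred to above: a TouchSensor (sensor node) routed into a Script node whose ECMAScript handler modifies the scene at run time. The fragment below is generated from Python only for convenience; the actual clipping logic used for the medical models is not reproduced, and some VRML browsers expect "vrmlscript:" rather than "javascript:" as the script protocol.

```python
# Write a minimal VRML97 scene in which clicking the model triggers a Script node
# that rescales it -- only a stand-in for the real dynamic-concision behaviour.
WRL = """#VRML V2.0 utf8
DEF ORGAN Transform {
  children Shape { geometry Box { size 2 2 2 } }
}
DEF TOUCH TouchSensor { }
DEF CUTTER Script {
  eventIn  SFTime  clicked
  eventOut SFVec3f newScale
  url "javascript:
    function clicked(value) {
      // flatten the model along y to mimic removing a slab of tissue
      newScale = new SFVec3f(1.0, 0.3, 1.0);
    }"
}
ROUTE TOUCH.touchTime TO CUTTER.clicked
ROUTE CUTTER.newScale TO ORGAN.set_scale
"""

with open("dynamic_concision_demo.wrl", "w") as f:
    f.write(WRL)
```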
NASA Astrophysics Data System (ADS)
Valencia, J.; Muñoz-Nieto, A.; Rodriguez-Gonzalvez, P.
2015-02-01
3D virtual modeling, visualization, dissemination and management of urban areas is one of the most exciting challenges that geomatics must face in the coming years. This paper aims to review, compare and analyze the new technologies, policies and software tools that are in progress to manage urban 3D information. It is assumed that the third dimension increases the quality of the model provided, allowing new approaches to urban planning and to the conservation and management of architectural and archaeological areas. Although displaying 3D urban environments is an issue that is nowadays solved, there are some challenges that geomatics will face in the coming future. Displaying georeferenced linked information can be considered the first challenge. Another challenge is improving the technical requirements if this georeferenced information must be shown in real time. Are software tools available that are ready for this challenge? Are they useful for providing the services required in smart cities? Throughout this paper, many practical examples that require 3D georeferenced information and linked data are shown. Computer advances related to 3D spatial databases, and software being developed to convert a rendered virtual environment into a new environment enriched with linked information, are also analyzed. Finally, the different standards that the Open Geospatial Consortium has adopted and developed regarding three-dimensional geographic information are reviewed. Particular emphasis is devoted to KML, LandXML, CityGML and the new IndoorGML.
Virtual reality and 3D visualizations in heart surgery education.
Friedl, Reinhard; Preisack, Melitta B; Klas, Wolfgang; Rose, Thomas; Stracke, Sylvia; Quast, Klaus J; Hannekum, Andreas; Gödje, Oliver
2002-01-01
Computer-assisted teaching plays an increasing role in surgical education. This paper describes the development of virtual reality (VR) and 3D visualizations for educational purposes concerning aortocoronary bypass grafting, and their prototypical implementation in a database-driven, internet-based educational system for heart surgery. A multimedia storyboard was written and digital video was encoded. Understanding of these videos was not always satisfactory; therefore, additional 3D and VR visualizations were modelled as VRML, QuickTime, QuickTime Virtual Reality and MPEG-1 applications. An authoring process, in terms of the integration and orchestration of different multimedia components into educational units, was started. A virtual model of the heart was designed. It is highly interactive: the user is able to rotate it, move it, zoom in for details or even fly through it. It can be explored during the cardiac cycle, and a transparency mode demonstrates the coronary arteries, the movement of the heart valves, and simultaneous blood flow. Myocardial ischemia and the effect of an IMA graft on myocardial perfusion are simulated. Coronary artery stenoses and bypass grafts can be interactively added. 3D models of anastomotic techniques and closed thrombendarterectomy have been developed. Different visualizations have been prototypically implemented in a teaching application about operative techniques. Interactive virtual reality and 3D teaching applications can be used and distributed via the World Wide Web and have the power to describe surgical anatomy and the principles of surgical techniques, where temporal and spatial events play an important role, in a way superior to traditional teaching methods.
NASA Astrophysics Data System (ADS)
Ding, Yea-Chung
2010-11-01
In recent years national parks worldwide have introduced online virtual tourism, through which potential visitors can search for tourist information. Most virtual tourism websites are a simulation of an existing location, usually composed of panoramic images, a sequence of hyperlinked still or video images, and/or virtual models of the actual location. As opposed to actual tourism, a virtual tour is typically accessed on a personal computer or an interactive kiosk. Using modern Digital Earth techniques such as high-resolution satellite images, precise GPS coordinates and powerful 3D WebGIS, however, it is possible to create more realistic scenic models that present natural terrain and man-made constructions in greater detail. This article explains how to create an online scientific reality tourist guide for the Jinguashi Gold Ecological Park at Jinguashi in northern Taiwan, China. This project uses high-resolution Formosat 2 satellite images and digital aerial images in conjunction with a DTM to create a highly realistic simulation of the terrain, with 3DMAX used to add man-made constructions and vegetation. Using this 3D geodatabase model in conjunction with INET 3D WebGIS software, we have found that the Digital Earth concept can greatly improve and expand the presentation of traditional online virtual tours on the web.
An integrated 3D log processing optimization system for small sawmills in central Appalachia
Wenshu Lin; Jingxin Wang
2013-01-01
An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, flitch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...
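A minimal sketch of the circular cross-section idea, under the assumption (not stated in the abstract) that diameters have been measured at regular positions along the stem: each measured section contributes one ring of surface points, and stacking the rings yields a virtual log surface.

```python
import numpy as np

# Illustrative sketch (assumptions, not the authors' code): rebuild a virtual
# log surface by stacking circular cross-sections measured along the stem.
lengths_m = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # positions of measured sections
diameters_m = np.array([0.42, 0.40, 0.37, 0.35, 0.33])   # measured diameters (taper)

n_theta = 36
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)

# One ring of surface points per cross-section: (x, y, z) with z along the log axis.
rings = []
for z, d in zip(lengths_m, diameters_m):
    r = d / 2.0
    ring = np.column_stack((r * np.cos(theta), r * np.sin(theta), np.full(n_theta, z)))
    rings.append(ring)

log_surface = np.vstack(rings)   # (n_sections * n_theta, 3) point cloud of the log
print(log_surface.shape)
```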
An Effective Construction Method of Modular Manipulator 3D Virtual Simulation Platform
NASA Astrophysics Data System (ADS)
Li, Xianhua; Lv, Lei; Sheng, Rui; Sun, Qing; Zhang, Leigang
2018-06-01
This work discusses a fast and efficient method for constructing an open 3D manipulator virtual simulation platform, which makes it easier for teachers and students to learn about the forward and inverse kinematics of a robot manipulator. The method was carried out in MATLAB, in which the Robotics Toolbox, the MATLAB GUI, and 3D animation, supported by models built in SolidWorks, were fully applied to produce a good visualization of the system. The advantages of this quick-build approach are its powerful input and output functions and its ability to simulate a 3D manipulator realistically. In this article, a Schunk six-DOF modular manipulator built by the authors' research group is used as an example. The implementation steps of the method are described in detail, and a high-level, open, and realistic manipulator 3D virtual simulation platform was thereby achieved. With the graphs obtained from simulation, the test results show that the manipulator 3D virtual simulation platform can be constructed quickly with good usability and high maneuverability, and that it can meet the needs of scientific research and teaching.
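The paper relies on the MATLAB Robotics Toolbox; as a language-neutral illustration of the forward kinematics such a platform visualizes, here is a small NumPy sketch that chains standard Denavit-Hartenberg link transforms. The DH table below is invented for illustration and is not the Schunk manipulator's actual parameter set.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform for one link from standard DH parameters."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table, joint_angles):
    """Chain the link transforms; returns the end-effector pose (4x4)."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_table, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# Made-up DH table for a 6-DOF arm (NOT the real Schunk parameters).
dh_table = [(0.0, np.pi / 2, 0.30), (0.35, 0.0, 0.0), (0.0, np.pi / 2, 0.0),
            (0.0, -np.pi / 2, 0.31), (0.0, np.pi / 2, 0.0), (0.0, 0.0, 0.08)]
q = np.deg2rad([10, -30, 45, 0, 20, 0])
print(forward_kinematics(dh_table, q)[:3, 3])  # end-effector position
```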
The Texas-Indiana Virtual STAR Center: Zebrafish Models for Developmental Toxicity Screening
The Texas-Indiana Virtual STAR Center: Zebrafish Models for Developmental Toxicity Screening (Presented by Maria Bondesson Bolin, Ph.D, University of Houston, Center for Nuclear Receptors and Cell Signaling) (3/22/2012)
Integration of the virtual 3D model of a control system with the virtual controller
NASA Astrophysics Data System (ADS)
Herbuś, K.; Ociepka, P.
2015-11-01
Nowadays the design process includes simulation analysis of different components of a constructed object. This involves the need to integrate different virtual objects in order to simulate the whole investigated technical system. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of the work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created. For this purpose, virtual reality (VR) class software was applied. In the elaborated interactive application, procedures were created for controlling the translatory-motion drive system, the rotary-motion drive system, and the drive system of the manipulator. Additionally, a procedure was created for turning on and off the crushing head mounted on the last element of the manipulator. Procedures were also established in the interactive application for receiving input data from external software, on the basis of dynamic data exchange (DDE), which allow the actuators of the particular control systems of the considered machine to be controlled. In the next stage of the work, a program for the virtual controller was created in the ladder diagram (LD) language. The control program was developed on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine for trenchless work with the virtual controller is an application written in a high-level language (Visual Basic). In this application, procedures were created that are responsible for collecting data from the virtual controller running in simulation mode and transferring them to the interactive application, in which the operation of the adopted research object is verified. The work carried out allowed for the integration of the virtual model of the control system of the tunneling machine with the virtual controller, enabling the verification of its operation.
Chen, H F; Dong, X C; Zen, B S; Gao, K; Yuan, S G; Panaye, A; Doucet, J P; Fan, B T
2003-08-01
An efficient virtual and rational drug design method is presented. It combines virtual bioactive compound generation with a 3D-QSAR model and docking. Using this method, it is possible to generate a large number of highly diverse molecules and to find virtual active lead compounds. The method was validated on a set of anti-tumor drugs. With the constraints of the pharmacophore obtained by DISCO, implemented in SYBYL 6.8, 97 virtual bioactive compounds were generated, and their anti-tumor activities were predicted by CoMFA. Eight structures with high activity were selected and screened by the 3D-QSAR model. The most active generated structure was further investigated by modifying its structure in order to increase the activity. A comparative docking study with the telomeric receptor was carried out, and the results showed that the generated structures could form more stable complexes with the receptor than the reference compound selected from experimental data. This investigation showed that the proposed method is a feasible way to perform rational drug design with high screening efficiency.
Crossing the Virtual World Barrier with OpenAvatar
NASA Technical Reports Server (NTRS)
Joy, Bruce; Kavle, Lori; Tan, Ian
2012-01-01
There are multiple standards and formats for 3D models in virtual environments. The problem is that there is no open source platform for generating models out of discrete parts; this results in the process of having to "reinvent the wheel" when new games, virtual worlds and simulations want to enable their users to create their own avatars or easily customize in-world objects. OpenAvatar is designed to provide a framework to allow artists and programmers to create reusable assets which can be used by end users to generate vast numbers of complete models that are unique and functional. OpenAvatar serves as a framework which facilitates the modularization of 3D models allowing parts to be interchanged within a set of logical constraints.
Zhang, Wen; Qiu, Kai-Xiong; Yu, Fang; Xie, Xiao-Guang; Zhang, Shu-Qun; Chen, Ya-Juan; Xie, Hui-Ding
2017-10-01
B-Raf kinase has been identified as an important target in recent cancer treatment. In order to discover structurally diverse and novel B-Raf inhibitors (BRIs), a virtual screening for BRIs against the ZINC database was performed using a combination of pharmacophore modelling, molecular docking, a 3D-QSAR model and binding free energy (ΔGbind) calculations. After the virtual screening, six promising hit compounds were obtained, which were then tested for inhibitory activity against A375 cell lines. As a result, five hit compounds showed good biological activities (IC50 < 50 μM). The present virtual screening method can be applied to find structurally diverse inhibitors, and the five structurally diverse compounds obtained are expected to aid the development of novel BRIs. Copyright © 2017. Published by Elsevier Ltd.
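The screening cascade itself relies on specialized tools (pharmacophore search, docking, 3D-QSAR); purely as an illustration of how the resulting scores might be combined into a shortlist, here is a toy Python filter. The compound names, thresholds, and score values are all invented.

```python
# Illustrative sketch only: combining the kinds of scores described above
# (pharmacophore match, 3D-QSAR-predicted activity, binding free energy)
# into a ranked shortlist. Compound names and numbers are invented.
candidates = [
    # (name, pharmacophore_match, predicted_pIC50, dG_bind_kcal_per_mol)
    ("ZINC_A", True,  6.8, -9.1),
    ("ZINC_B", True,  5.2, -7.4),
    ("ZINC_C", False, 7.1, -9.8),   # fails the pharmacophore filter
    ("ZINC_D", True,  6.1, -8.6),
]

def passes_filters(match, pic50, dg, pic50_min=6.0, dg_max=-8.0):
    """Keep compounds that match the pharmacophore, are predicted active,
    and have a favorable binding free energy."""
    return match and pic50 >= pic50_min and dg <= dg_max

hits = [c for c in candidates if passes_filters(c[1], c[2], c[3])]
hits.sort(key=lambda c: c[3])           # most negative binding energy first
for name, _, pic50, dg in hits:
    print(f"{name}: predicted pIC50={pic50}, dGbind={dg} kcal/mol")
```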
Explore the virtual side of earth science
1998-01-01
Scientists have always struggled to find an appropriate technology that could represent three-dimensional (3-D) data, facilitate dynamic analysis, and encourage on-the-fly interactivity. In the recent past, scientific visualization has increased the scientist's ability to visualize information, but it has not provided the interactive environment necessary for rapidly changing the model or for viewing the model in ways not predetermined by the visualization specialist. Virtual Reality Modeling Language (VRML 2.0) is a new environment for visualizing 3-D information spaces and is accessible through the Internet with current browser technologies. Researchers from the U.S. Geological Survey (USGS) are using VRML as a scientific visualization tool to help convey complex scientific concepts to various audiences. Kevin W. Laurent, computer scientist, and Maura J. Hogan, technical information specialist, have created a collection of VRML models available through the Internet at Virtual Earth Science (virtual.er.usgs.gov).
Hayashi, Kazuo; Chung, Onejune; Park, Seojung; Lee, Seung-Pyo; Sachdeva, Rohit C L; Mizoguchi, Itaru
2015-03-01
Virtual 3-dimensional (3D) models obtained by scanning of physical casts have become an alternative to conventional dental cast analysis in orthodontic treatment. If the precision (reproducibility) of virtual 3D model analysis can be further improved, digital orthodontics could be even more widely accepted. The purpose of this study was to clarify the influence of "standardization" of the target points for dental cast analysis using virtual 3D models. Physical plaster models were also measured to obtain additional information. Five sets of dental casts were used. The dental casts were scanned with R700 (3Shape, Copenhagen, Denmark) and REXCAN DS2 3D (Solutionix, Seoul, Korea) scanners. In this study, 3 systems and software packages were used: SureSmile (OraMetrix, Richardson, Tex), Rapidform (Inus, Seoul, Korea), and I-DEAS (SDRC, Milford, Conn). Without standardization, the maximum differences were observed between the SureSmile software and the Rapidform software (0.39 ± 0.07 mm). With standardization, the maximum differences were observed between the SureSmile software and measurements with a digital caliper (0.099 ± 0.01 mm), and this difference was significantly greater (P < 0.05) than the 2 other mean difference values. Furthermore, the results of this study showed that the mean differences with standardization were significantly lower than those without standardization for all systems, software packages, and methods. The results showed that eliminating the influence of usability or habituation is important for improving the reproducibility of dental cast analysis. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Journey to the centre of the cell: Virtual reality immersion into scientific data.
Johnston, Angus P R; Rae, James; Ariotti, Nicholas; Bailey, Benjamin; Lilja, Andrew; Webb, Robyn; Ferguson, Charles; Maher, Sheryl; Davis, Thomas P; Webb, Richard I; McGhee, John; Parton, Robert G
2018-02-01
Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in 2 dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a "real" cell. Early testing of this immersive environment indicates a significant improvement in students' understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment
ERIC Educational Resources Information Center
Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth
2009-01-01
This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments"…
An Evaluative Review of Simulated Dynamic Smart 3d Objects
NASA Astrophysics Data System (ADS)
Romeijn, H.; Sheth, F.; Pettit, C. J.
2012-07-01
Three-dimensional (3D) modelling of plants can be an asset for creating agriculture-based visualisation products. The continuum of 3D plant models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influence of external environmental factors. Approaches to 3D plant visualisation range from plants rendered as photographed, billboarded images to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model the physical reactions of plants to external factors, and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of the plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs, and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects, the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that will attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.
Flexible Virtual Structure Consideration in Dynamic Modeling of Mobile Robots Formation
NASA Astrophysics Data System (ADS)
El Kamel, A. Essghaier; Beji, L.; Lerbet, J.; Abichou, A.
2009-03-01
In cooperative mobile robotics, we look for formation keeping and the maintenance of a geometric configuration during movement. As a solution to these problems, the concept of a virtual structure is considered. Based on this idea, we have developed an efficient flexible virtual structure describing the dynamic model of n vehicles in formation, in which the whole formation is kept dependent. Note that, for 2D and 3D space navigation, only a rigid virtual structure had previously been proposed in the literature, and the problem was limited to the kinematic behavior of the structure. Hence, the flexible virtual structure for dynamic modeling of mobile robot formations presented in this paper gives the formation greater capability to avoid obstacles in hostile environments while keeping formation and avoiding inter-agent collisions.
Yee, Sophia Hui Xin; Esguerra, Roxanna Jean; Chew, Amelia Anya Qin'An; Wong, Keng Mun; Tan, Keson Beng Choon
2018-02-01
Accurate maxillomandibular relationship transfer is important for CAD/CAM prostheses. This study compared the 3D accuracy of virtual model static articulation in three laboratory scanner-CAD systems (Ceramill Map400 [AG], inEos X5 [SIR], Scanner S600 Arti [ZKN]) using two virtual articulation methods: mounted models (MO) and interocclusal record (IR). The master model simulated a single crown opposing a 3-unit fixed partial denture. Reference values were obtained by measuring interarch and interocclusal reference features with a coordinate measuring machine (CMM). MO group stone casts were articulator-mounted with acrylic resin bite registrations, while IR group casts were hand-articulated with poly(vinyl siloxane) bite registrations. Five test model sets were scanned and articulated virtually with each system (6 test groups, 15 data sets). STL files of the virtual models were measured with CMM software. dRR, dRC, and dRL represented interarch global distortions at the right, central, and left sides, respectively, while dRM, dXM, dYM, and dZM represented interocclusal global and linear distortions between preparations. Mean interarch 3D distortion ranged from -348.7 to 192.2 μm for dRR, -86.3 to 44.1 μm for dRC, and -168.1 to 4.4 μm for dRL. Mean interocclusal distortion ranged from -257.2 to -85.2 μm for dRM, -285.7 to 183.9 μm for dXM, -100.5 to 114.8 μm for dYM, and -269.1 to -50.6 μm for dZM. ANOVA showed that articulation method had a significant effect on dRR and dXM, while system had a significant effect on dRR, dRC, dRL, dRM, and dZM. There were significant differences between the 6 test groups for dRR, dRL, dXM, and dZM. dRR and dXM were significantly greater in AG-IR, and this was significantly different from SIR-IR, ZKN-IR, and all MO groups. Interarch and interocclusal distances increased in MO groups, while they decreased in IR groups. AG-IR had the greatest interarch distortion as well as interocclusal superior-inferior distortion. The other groups performed similarly to each other, and the overall interarch distortion did not exceed 0.7%. In these systems and articulation methods, interocclusal distortions may result in hyper- or infra-occluded prostheses. © 2017 by the American College of Prosthodontists.
Joda, Tim; Brägger, Urs; Gallucci, German
2015-01-01
Digital developments have led to the opportunity to compose simulated patient models based on three-dimensional (3D) skeletal, facial, and dental imaging. The aim of this systematic review is to provide an update on the current knowledge, to report on the technical progress in the field of 3D virtual patient science, and to identify further research needs to accomplish clinical translation. Searches were performed electronically (MEDLINE and OVID) and manually up to March 2014 for studies of 3D fusion imaging to create a virtual dental patient. Inclusion criteria were limited to human studies reporting on the technical protocol for superimposition of at least two different 3D data sets and medical field of interest. Of the 403 titles originally retrieved, 51 abstracts and, subsequently, 21 full texts were selected for review. Of the 21 full texts, 18 studies were included in the systematic review. Most of the investigations were designed as feasibility studies. Three different types of 3D data were identified for simulation: facial skeleton, extraoral soft tissue, and dentition. A total of 112 patients were investigated in the development of 3D virtual models. Superimposition of data on the facial skeleton, soft tissue, and/or dentition is a feasible technique to create a virtual patient under static conditions. Three-dimensional image fusion is of interest and importance in all fields of dental medicine. Future research should focus on the real-time replication of a human head, including dynamic movements, capturing data in a single step.
Virtual reality system for planning minimally invasive neurosurgery. Technical note.
Stadie, Axel Thomas; Kockro, Ralf Alfons; Reisch, Robert; Tropine, Andrei; Boor, Stephan; Stoeter, Peter; Perneczky, Axel
2008-02-01
The authors report on their experience with a 3D virtual reality system for planning minimally invasive neurosurgical procedures. Between October 2002 and April 2006, the authors used the Dextroscope (Volume Interactions, Ltd.) to plan neurosurgical procedures in 106 patients, including 100 with intracranial and 6 with spinal lesions. The planning was performed 1 to 3 days preoperatively, and in 12 cases, 3D prints of the planning procedure were taken into the operating room. A questionnaire was completed by the neurosurgeon after the planning procedure. After a short period of acclimatization, the system proved easy to operate and is currently used routinely for preoperative planning of difficult cases at the authors' institution. It was felt that working with a virtual reality multimodal model of the patient significantly improved surgical planning. The pathoanatomy in individual patients could easily be understood in great detail, enabling the authors to determine the surgical trajectory precisely and in the most minimally invasive way. The authors found the preoperative 3D model to be in high concordance with intraoperative conditions; the resulting intraoperative "déjà-vu" feeling enhanced surgical confidence. In all procedures planned with the Dextroscope, the chosen surgical strategy proved to be the correct choice. Three-dimensional virtual reality models of a patient allow quick and easy understanding of complex intracranial lesions.
Construction of a 3-D anatomical model for teaching temporal lobectomy.
de Ribaupierre, Sandrine; Wilson, Timothy D
2012-06-01
Although we live and work in 3-dimensional space, most anatomical teaching during medical school is done in 2-D (books, television and computer screens, etc.). 3-D spatial abilities are essential for a surgeon, but teaching spatial skills in a non-threatening and safe educational environment is a much more difficult pedagogical task. Currently, initial anatomical knowledge formation and specific surgical anatomy techniques are taught either in the OR itself or in cadaveric labs, which means that the trainee has only limited exposure. 3-D computer models incorporated into virtual learning environments may provide an intermediate and key step in a blended learning approach for spatially challenging anatomical knowledge formation. Specific anatomical structures and their spatial orientation can be further clinically contextualized through demonstrations of surgical procedures in 3-D digital environments. Recordings of the digital models enable learners to review at their own pace, stopping the demonstration and/or exploring the model to understand the anatomical relations of each structure. We present here how a temporal lobectomy virtual model has been developed to aid residents' and fellows' conceptualization of the anatomical relationships between different cerebral structures during that procedure. We suggest that, in comparison to cadaveric dissection, such virtual models represent a cost-effective pedagogical methodology providing excellent support for anatomical learning and surgical technique training. Copyright © 2012 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Cody, Jeremy A.; Craig, Paul A.; Loudermilk, Adam D.; Yacci, Paul M.; Frisco, Sarah L.; Milillo, Jennifer R.
2012-01-01
A novel stereochemistry lesson was prepared that incorporated both handheld molecular models and embedded virtual three-dimensional (3D) images. The images are fully interactive and eye-catching for the students; methods for preparing 3D molecular images in Adobe Acrobat are included. The lesson was designed and implemented to showcase the 3D…
Effective visibility analysis method in virtual geographic environment
NASA Astrophysics Data System (ADS)
Li, Yi; Zhu, Qing; Gong, Jianhua
2008-10-01
Visibility analysis in virtual geographic environments has broad applications in many aspects of social life. In practical use, however, efficiency and accuracy need to be improved, and the restrictions of human vision need to be considered. The paper first introduces a highly efficient 3D data modeling method, which generates and organizes the 3D data model using R-tree and LOD techniques. It then presents a new visibility algorithm that realizes real-time viewshed calculation while taking into account occlusion by the DEM and 3D building models as well as restrictions of the human eye on viewshed generation. Finally, an experiment is conducted which shows that the visibility analysis can be calculated quickly and accurately enough to meet the demands of digital city applications.
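The core operation behind any viewshed computation is a terrain-occluded line-of-sight test. The sketch below is an illustrative, simplified version on a raster DEM (sampling along the ray and comparing terrain height with the sight line); it ignores the paper's R-tree/LOD organization, 3D building models, and human-vision constraints.

```python
import numpy as np

def line_of_sight(dem, observer_rc, target_rc, observer_h=1.6, n_samples=200):
    """Return True if the target cell is visible from the observer over a DEM grid.

    dem          2-D array of terrain elevations (one value per grid cell)
    observer_rc  (row, col) of the observer
    target_rc    (row, col) of the target
    observer_h   eye height added above the terrain at the observer
    """
    r0, c0 = observer_rc
    r1, c1 = target_rc
    z0 = dem[r0, c0] + observer_h
    z1 = dem[r1, c1]
    for t in np.linspace(0.0, 1.0, n_samples)[1:-1]:
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        sight_z = z0 + t * (z1 - z0)          # height of the sight line here
        if dem[r, c] > sight_z:               # terrain blocks the ray
            return False
    return True

# Tiny synthetic DEM with a ridge between observer and target.
dem = np.zeros((50, 50))
dem[:, 25] = 30.0
print(line_of_sight(dem, (10, 5), (10, 45)))   # False: the ridge occludes the view
```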
Furdová, Alena; Sramka, Miron; Thurzo, Andrej; Furdová, Adriana
2017-01-01
Objective: The objective of this study was to determine the use of a 3D printed model of an eye with an intraocular tumor for linear accelerator-based stereotactic radiosurgery. Methods: Segmentation software (3D Slicer) created a virtual 3D model of the eye globe with the tumorous mass, based on tissue density from computed tomography and magnetic resonance imaging data. The virtual model was then processed in slicing software (Simplify3D®) and printed on a 3D printer using fused deposition modeling technology. The material used for printing was polylactic acid. Results: In 2015, the stereotactic planning scheme was optimized with the help of the 3D printed model of the patient's eye with the intraocular tumor. In the period 2001-2015, a group of 150 patients with uveal melanoma (139 choroidal melanoma and 11 ciliary body melanoma) were treated. The median tumor volume was 0.5 cm3 (0.2-1.6 cm3). The radiation dose was 35.0 Gy by 99% of the dose-volume histogram. Conclusion: The 3D printed model of the eye with the tumor was helpful in planning the process to achieve the optimal irradiation scheme, which requires high accuracy in defining the targeted tumor mass and critical structures. PMID:28203052
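A common open-source route from a binary segmentation volume (such as one exported from 3D Slicer) to a printable surface is marching cubes followed by STL export. The sketch below illustrates that route on a synthetic spherical mask; the voxel spacing and filenames are placeholders, and this is not the workflow code used in the study.

```python
import numpy as np
from skimage import measure

# Illustrative only: a synthetic binary "segmentation" (a sphere) standing in
# for a tumour/eye mask exported from 3D Slicer.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
mask = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 20 ** 2

# Extract a triangle surface from the binary volume (marching cubes).
verts, faces, normals, _ = measure.marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(0.5, 0.5, 0.5))  # assumed voxel size in mm

# Write a minimal ASCII STL that slicing software (e.g., Simplify3D) can load.
with open("model.stl", "w") as f:
    f.write("solid segmentation\n")
    for tri in faces:
        v0, v1, v2 = verts[tri]
        n = np.cross(v1 - v0, v2 - v0)
        n = n / (np.linalg.norm(n) + 1e-12)
        f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
        for v in (v0, v1, v2):
            f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid segmentation\n")
```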
Wang, Huixiang; Wang, Fang; Newman, Simon; Lin, Yanping; Chen, Xiaojun; Xu, Lu; Wang, Qiugen
2016-08-01
Acetabular fracture surgery is amongst the most challenging tasks in the field of trauma surgery and careful preoperative planning is crucial for success. The aim of this paper is to describe the preliminary outcome of the utilization of an innovative computerized virtual planning system for acetabular fractures. 3D models of acetabular fractures and surrounding soft tissues from six patients were constructed from preoperative CT scans. A novel highly-automatic segmentation technique was performed on the 3D model to separate each fracture fragment, then 3D virtual reduction was performed. Additionally, the models were used to assess potential surgical approaches with reference to both the fracture and the surrounding soft tissues. The time required for virtual planning was recorded. After surgery, the virtual plan was compared to the real surgery with respect to surgical approach and reduction sequence. A Likert scale questionnaire was completed by the surgeons to evaluate their satisfaction with the system. Virtual planning was successfully completed in all cases. The planned surgical approach was followed in all cases with the planned reduction sequence followed completely in five cases and partially in one. The mean time required for virtual planning was 38.7min (range 21-57, SD=15.5). The mean time required for planning of B-type fractures was 25.0min (range 21-30, SD=4.6), of C-type fracture 52.3min (range 49-57, SD=4.2). The results of the questionnaire demonstrated a high level of satisfaction with the planning system. This study demonstrates that the virtual planning system is feasible in clinical settings with high satisfaction and acceptability from the surgeons. It provides a viable option for the planning of acetabular fracture surgery. Copyright © 2016 Elsevier Ltd. All rights reserved.
The virtual dissecting room: Creating highly detailed anatomy models for educational purposes.
Zilverschoon, Marijn; Vincken, Koen L; Bleys, Ronald L A W
2017-01-01
Virtual 3D models are powerful tools for teaching anatomy. At present there are many different digital anatomy models; most of these commercial applications are based on a 3D model of a human body reconstructed from images at 1 mm intervals. The use of even smaller intervals may result in more detail and a more realistic appearance of 3D anatomy models. The aim of this study was to create a realistic and highly detailed 3D model of the hand and wrist based on small-interval cross-sectional images, suitable for undergraduate and postgraduate teaching purposes, with the possibility of performing a virtual dissection in an educational application. In 115 transverse cross-sections from a human hand and wrist, segmentation was done by manually delineating 90 different structures. Using Amira, the segments were imported and a surface/polygon model was created, followed by smoothing of the surfaces in Mudbox. In 3D Coat software the smoothed polygon models were automatically retopologized into quadrilaterals and a UV map was added. In Mudbox, the textures of the 90 structures were depicted in a realistic way using photos of real tissue, and afterwards height maps, gloss maps and specular maps were created to add a greater level of detail and realistic lighting to every structure. Unity was used to build a new software program that supports all the extra map features together with the preferred user interface. A 3D hand model has been created, containing 100 structures (90 at the start and 10 extra structures added along the way). The model can be used interactively by changing the transparency and manipulating single or grouped structures, thereby simulating a virtual dissection. This model can be used for a variety of teaching purposes, ranging from undergraduate medical students to residents in hand surgery. Studying hand and wrist anatomy using this model is cost-effective and not hampered by the limited access to real dissecting facilities. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Irwansyah; Sinh, N. P.; Lai, J. Y.; Essomba, T.; Asbar, R.; Lee, P. Y.
2018-02-01
In this paper, we present a study integrating a virtual bone fracture reduction simulation tool with a novel hybrid 3-DOF RPS external fixator to relocate bone fragments back into their anatomically original positions. A 3D model of the fractured bone was reconstructed and manipulated using 3D design and modeling software, PhysiGuide. The virtual reduction system was applied to reduce a bilateral femoral shaft fracture of type 32-A3. Measurement data from the fracture reduction and fixation stages were used to set the manipulator pose in the patient's clinical case. The experimental results indicate that merging these two techniques offers further possibilities to reduce the virtual bone reduction time and to shorten the healing treatment.
Feasibility of Clinician-Facilitated Three-Dimensional Printing of Synthetic Cranioplasty Flaps.
Panesar, Sandip S; Belo, Joao Tiago A; D'Souza, Rhett N
2018-05-01
Integration of three-dimensional (3D) printing and stereolithography into clinical practice is in its nascence, and the concepts may be esoteric to the practicing neurosurgeon. Currently, creation of 3D printed implants involves recruitment of offsite third parties. We explored a range of 3D scanning and stereolithographic techniques to create patient-specific synthetic implants using an onsite, clinician-facilitated approach. We simulated bilateral craniectomies in a single cadaveric specimen. We devised 3 methods of creating stereolithographically viable virtual models from the removed bone. First, we used preoperative and postoperative computed tomography scanner-derived bony window models from which the flap was extracted. Second, we used an entry-level 3D light scanner to scan and render models of the individual bone pieces. Third, we used an arm-mounted 3D laser scanner to create virtual models using a real-time approach. Flaps were printed, from the computed tomography scanner and laser scanner models only, in an ultraviolet-cured polymer. The light scanner did not produce virtual models suitable for printing. The computed tomography scanner-derived models required extensive postfabrication modification to fit the existing defects. The laser scanner models assumed good fit within the defects without any modification. The methods presented varying levels of complexity in acquisition and model rendering. Each technique required hardware at price points varying from $0 to approximately $100,000. The laser scanner models produced the best-quality parts, which had near-perfect fit with the original defects. Potential neurosurgical applications of this technology are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
3D geospatial visualizations: Animation and motion effects on spatial objects
NASA Astrophysics Data System (ADS)
Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos
2018-02-01
Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (the virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step beyond, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be attributed to 3D models. However, major GIS-based functionalities combined with the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) and motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.
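One ingredient of such motion effects is sampling an object's position along a timestamped, georeferenced path at the animation frame rate. The sketch below shows a simple piecewise-linear version with invented waypoints; a production system (e.g., one built on Cesium.js) would interpolate on the ellipsoid and drive the renderer instead of printing positions.

```python
import numpy as np

# Illustrative only: interpolate a 3D model's position along a timestamped,
# georeferenced path (lon, lat, alt). Waypoints and times are invented.
times = np.array([0.0, 10.0, 25.0, 40.0])                 # seconds
waypoints = np.array([
    [23.72, 38.00,  5.0],
    [23.73, 38.01, 12.0],
    [23.74, 38.02,  8.0],
    [23.75, 38.03,  3.0],
])  # columns: longitude, latitude, altitude (m)

def position_at(t):
    """Piecewise-linear position of the moving object at time t."""
    lon = np.interp(t, times, waypoints[:, 0])
    lat = np.interp(t, times, waypoints[:, 1])
    alt = np.interp(t, times, waypoints[:, 2])
    return lon, lat, alt

# Sample the path at fixed intervals to drive the moving model.
for t in np.arange(0.0, 40.0, 5.0):
    print(t, position_at(t))
```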
ERIC Educational Resources Information Center
Giardina, Max
This paper examines the implementation of 3D simulation through the development of the Avenor Virtual Trainer and how situated learning and fidelity of model representation become the basis for more effective Interactive Multimedia Training Situations. The discussion focuses on some principles concerned with situated training, simulation,…
Navigation system for robot-assisted intra-articular lower-limb fracture surgery.
Dagnino, Giulio; Georgilas, Ioannis; Köhler, Paul; Morad, Samir; Atkins, Roger; Dogramadzi, Sanja
2016-10-01
In the surgical treatment of lower-leg intra-articular fractures, the fragments have to be positioned and aligned to reconstruct the fractured bone as precisely as possible, to allow the joint to function correctly again. Standard procedures use 2D radiographs to estimate the desired reduction position of the bone fragments. However, optimal correction in 3D space requires 3D imaging. This paper introduces a new navigation system that uses pre-operative planning based on 3D CT data and intra-operative 3D guidance to virtually reduce lower-limb intra-articular fractures. Physical reduction of the fractures is then performed by our robotic system based on the virtual reduction. 3D models of the bone fragments are segmented from the CT scan. The fragments are pre-operatively visualized on the screen and virtually manipulated by the surgeon through a dedicated GUI to achieve the virtual reduction of the fracture. Intra-operatively, the actual position of the bone fragments is provided by an optical tracker, enabling real-time 3D guidance. The motion commands for the robot connected to the bone fragment are generated, and the fracture is physically reduced based on the surgeon's virtual reduction. To test the system, four femur models were fractured to obtain four different distal femur fracture types. Each of them was subsequently reduced 20 times by a surgeon using our system. The navigation system allowed an orthopaedic surgeon to virtually reduce the fracture with a maximum residual positioning error of [Formula: see text] (translational) and [Formula: see text] (rotational). Corresponding physical reductions resulted in an accuracy of 1.03 ± 0.2 mm and [Formula: see text] when the robot reduced the fracture. The experimental outcome demonstrates the accuracy and effectiveness of the proposed navigation system, presenting a fracture reduction accuracy of about 1 mm and [Formula: see text], and meeting the clinical requirements for distal femur fracture reduction procedures.
Generating classes of 3D virtual mandibles for AR-based medical simulation.
Hippalgaonkar, Neha R; Sider, Alexa D; Hamza-Lup, Felix G; Santhanam, Anand P; Jaganathan, Bala; Imielinska, Celina; Rolland, Jannick P
2008-01-01
Simulation and modeling represent promising tools for several application domains, from engineering to forensic science and medicine. Advances in 3D imaging technology bring paradigms such as augmented reality (AR) and mixed reality into promising simulation tools for the training industry. Motivated by the requirement to superimpose anatomically correct 3D models on a human patient simulator (HPS) and visualize them in an AR environment, the purpose of this research effort was to develop and validate a method for scaling a source human mandible to a target human mandible within a 2 mm root mean square (RMS) error. Results show that, given the distance between the same 2 landmarks on 2 different mandibles, a relative scaling factor may be computed. Using this scaling factor, results show that a 3D virtual mandible model can be made morphometrically equivalent to a real, target-specific mandible within a 1.30 mm RMS error. The virtual mandible may be further used as a reference target for registering other anatomical models, such as the lungs, on the HPS. Such registration will be made possible by the physical constraints between the mandible and the spinal column in the horizontal normal rest position.
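A minimal sketch of the reported scaling idea, with invented coordinates: the ratio of the distance between two corresponding landmarks on the target and source mandibles gives the relative scale factor, which is then applied to the source model's vertices, and the residual landmark RMS error is reported.

```python
import numpy as np

# Illustrative sketch of the landmark-based scaling described above.
# All coordinates below are invented, not measured mandible data.
src_landmarks = np.array([[0.0, 0.0, 0.0], [95.0, 0.0, 0.0]])    # source mandible (mm)
tgt_landmarks = np.array([[2.0, 1.0, 0.0], [104.0, 1.0, 0.0]])   # target mandible (mm)

# Relative scale factor: target landmark distance / source landmark distance.
scale = (np.linalg.norm(tgt_landmarks[1] - tgt_landmarks[0]) /
         np.linalg.norm(src_landmarks[1] - src_landmarks[0]))

def scale_to_target(points):
    """Scale points about the first source landmark and anchor them to the
    corresponding target landmark."""
    return (points - src_landmarks[0]) * scale + tgt_landmarks[0]

src_vertices = np.random.default_rng(0).uniform(-50.0, 50.0, size=(1000, 3))  # stand-in mesh vertices
scaled_vertices = scale_to_target(src_vertices)

# RMS discrepancy between the scaled source landmarks and the target landmarks.
rms = np.sqrt(np.mean(np.sum((scale_to_target(src_landmarks) - tgt_landmarks) ** 2, axis=1)))
print(f"scale factor = {scale:.3f}, landmark RMS error = {rms:.2f} mm")
```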
Qian, Zeng-Hui; Feng, Xu; Li, Yang; Tang, Ke
2018-01-01
Studying the three-dimensional (3D) anatomy of the cavernous sinus is essential for treating lesions in this region with skull base surgeries. Cadaver dissection is a conventional method that has insurmountable flaws with regard to understanding spatial anatomy. The authors' research aimed to build an image model of the cavernous sinus region in a virtual reality system to precisely, individually and objectively elucidate the complete and local stereo-anatomy. Computed tomography and magnetic resonance imaging scans were performed on 5 adult cadaver heads. Latex mixed with contrast agent was injected into the arterial system and then into the venous system. Computed tomography scans were performed again following the 2 injections. Magnetic resonance imaging scans were performed again after the cranial nerves were exposed. Image data were input into a virtual reality system to establish a model of the cavernous sinus. Observation results of the image models were compared with those of the cadaver heads. Visualization of the cavernous sinus region models built using the virtual reality system was good for all the cadavers. High resolutions were achieved for the images of different tissues. The observed results were consistent with those of the cadaver head. The spatial architecture and modality of the cavernous sinus were clearly displayed in the 3D model by rotating the model and conveniently changing its transparency. A 3D virtual reality model of the cavernous sinus region is helpful for globally and objectively understanding anatomy. The observation procedure was accurate, convenient, noninvasive, and time and specimen saving.
Virtual Reality Calibration for Telerobotic Servicing
NASA Technical Reports Server (NTRS)
Kim, W.
1994-01-01
A virtual reality calibration technique of matching a virtual environment of simulated graphics models in 3-D geometry and perspective with actual camera views of the remote site task environment has been developed to enable high-fidelity preview/predictive displays with calibrated graphics overlay on live video.
Building generic anatomical models using virtual model cutting and iterative registration.
Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W
2010-02-08
Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Our method is flexible and easy to use, so that anyone can use image stacks to create models and retrieve sub-regions from them with ease. The Java-based implementation allows our method to be used on various visualization systems, including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of interest quickly and accurately.
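As an illustration of steps (v)-(vi) only, the sketch below registers corresponding 2-D slices from several synthetic "subjects" to a reference using a simple translational (phase-correlation) alignment and then averages the intensities. The authors' pipeline uses full automatic image registration; this toy version assumes pure translations and made-up data.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.registration import phase_cross_correlation

# Illustrative sketch (not the authors' pipeline): register corresponding 2-D
# slices to a reference by translation, then average to get a "generic" slice.
rng = np.random.default_rng(1)

def make_slice(offset):
    """Synthetic stand-in for one subject's slice: a bright square, shifted."""
    img = np.zeros((128, 128))
    r, c = 48 + offset[0], 48 + offset[1]
    img[r:r + 32, c:c + 32] = 1.0
    return img + 0.05 * rng.standard_normal(img.shape)

reference = make_slice((0, 0))
subjects = [make_slice((5, -3)), make_slice((-4, 6)), make_slice((2, 2))]

aligned = [reference]
for img in subjects:
    shift, _, _ = phase_cross_correlation(reference, img)  # estimated (row, col) shift
    aligned.append(ndi.shift(img, shift))                  # move the subject onto the reference

average_slice = np.mean(aligned, axis=0)   # one slice of the averaged "generic" stack
print(average_slice.shape)
```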
Photorealistic virtual anatomy based on Chinese Visible Human data.
Heng, P A; Zhang, S X; Xie, Y M; Wong, T T; Chui, Y P; Cheng, C Y
2006-04-01
Virtual reality based learning of human anatomy is feasible when a database of 3D organ models is available for the learner to explore, visualize, and dissect in virtual space interactively. In this article, we present our latest work on photorealistic virtual anatomy applications based on the Chinese Visible Human (CVH) data. We have focused on the development of state-of-the-art virtual environments that feature interactive photo-realistic visualization and dissection of virtual anatomical models constructed from ultra-high resolution CVH datasets. We also outline our latest progress in applying these highly accurate virtual and functional organ models to generate realistic look and feel to advanced surgical simulators. (c) 2006 Wiley-Liss, Inc.
Fónyad, László; Shinoda, Kazunobu; Farkash, Evan A; Groher, Martin; Sebastian, Divya P; Szász, A Marcell; Colvin, Robert B; Yagi, Yukako
2015-03-28
Chronic allograft vasculopathy (CAV) is a major mechanism of graft failure of transplanted organs in humans. Morphometric analysis of coronary arteries enables the quantitation of CAV in mouse models of heart transplantation. However, conventional histological procedures using single 2-dimensional sections limit the accuracy of CAV quantification. The aim of this study is to improve the accuracy of CAV quantification by reconstructing the murine coronary system in 3-dimensions (3D) and using virtual reconstruction and volumetric analysis to precisely assess neointimal thickness. Mouse tissue samples, native heart and transplanted hearts with chronic allograft vasculopathy, were collected and analyzed. Paraffin embedded samples were serially sectioned, stained and digitized using whole slide digital imaging techniques under normal and ultraviolet lighting. Sophisticated software tools were used to generate and manipulate 3D reconstructions of the major coronary arteries and branches. The 3D reconstruction provides not only accurate measurements but also exact volumetric data of vascular lesions. This virtual coronary arteriography demonstrates that the vasculopathy lesions in this model are localized to the proximal coronary segments. In addition, virtual rotation and volumetric analysis enabled more precise measurements of CAV than single, randomly oriented histologic sections, and offer an improved readout for this important experimental model. We believe 3D reconstruction of 2D histological slides will provide new insights into pathological mechanisms in which structural abnormalities play a role in the development of a disease. The techniques we describe are applicable to the analysis of arteries, veins, bronchioles and similar sized structures in a variety of tissue types and disease model systems. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/3772457541477230 .
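The volumetric readout that serial-section reconstruction enables can be approximated by summing the segmented lesion area on each section times the section spacing. The sketch below shows that slab summation with invented area values; it is only an illustration of the arithmetic, not the study's measurement software.

```python
# Illustrative only: volumetric readout from serial sections. Neointimal areas
# (mm^2) measured on consecutive sections and the section spacing are invented.
section_spacing_mm = 0.05                      # distance between serial sections
neointimal_areas_mm2 = [0.00, 0.02, 0.06, 0.11, 0.09, 0.04, 0.01]

# Simple rectangular (slab) summation: each section contributes area x spacing.
neointima_volume_mm3 = sum(a * section_spacing_mm for a in neointimal_areas_mm2)
print(f"neointimal volume ~ {neointima_volume_mm3:.4f} mm^3")
```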
Virtual Solar System Project: Building Understanding through Model Building.
ERIC Educational Resources Information Center
Barab, Sasha A.; Hay, Kenneth E.; Barnett, Michael; Keating, Thomas
2000-01-01
Describes an introductory astronomy course for undergraduate students in which students use three-dimensional (3-D) modeling tools to model the solar system and develop rich understandings of astronomical phenomena. Indicates that 3-D modeling can be used effectively in regular undergraduate university courses as a tool to develop understandings…
ERIC Educational Resources Information Center
Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel
2015-01-01
A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Visual Molecular Dynamics) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D printer and used for teaching chemical education…
Supporting Distributed Team Working in 3D Virtual Worlds: A Case Study in Second Life
ERIC Educational Resources Information Center
Minocha, Shailey; Morse, David R.
2010-01-01
Purpose: The purpose of this paper is to report on a study into how a three-dimensional (3D) virtual world (Second Life) can facilitate socialisation and team working among students working on a team project at a distance. This models the situation in many commercial sectors where work is increasingly being conducted across time zones and between…
Ferng, Alice S; Oliva, Isabel; Jokerst, Clinton; Avery, Ryan; Connell, Alana M; Tran, Phat L; Smith, Richard G; Khalpey, Zain
2017-08-01
Since the creation of SynCardia's 50 cc Total Artificial Heart (TAH), patients with irreversible biventricular failure now have two sizing options. Herein, a case series of three patients who underwent successful 50 and 70 cc TAH implantation with complete closure of the chest cavity, utilizing preoperative "virtual implantation" of different-sized devices for surgical planning, is presented. Computed tomography (CT) images were used for preoperative planning prior to TAH implantation. Three-dimensional (3D) reconstructions of the preoperative chest CT images were generated, and both 50 and 70 cc TAHs were virtually implanted into the patients' thoracic cavities. During the simulation, the TAHs were projected over the native hearts in a position similar to that of the actual implantation, and the relationships between the devices and the atria, ventricles, chest wall, and diaphragm were assessed. The 3D reconstructed images and virtual modeling were used to simulate and determine, for each patient, whether the 50 or 70 cc TAH would have a higher likelihood of successful implantation without complications. Subsequently, all three patients received clinical implants of the properly sized TAH based on the virtual modeling, and their chest cavities were fully closed. This virtual implantation increases our confidence that the selected TAH will fit better within the thoracic cavity, allowing for an improved surgical outcome. Clinical implantation of the TAHs showed that our virtual modeling was an effective method for determining the correct fit and sizing of 50 and 70 cc TAHs. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Zhang, Nan; Liu, Shuguang; Hu, Zhiai; Hu, Jing; Zhu, Songsong; Li, Yunfeng
2016-08-01
This study aims to evaluate the accuracy of virtual surgical planning in two-jaw orthognathic surgery via quantitative comparison of preoperatively planned and postoperative actual skull models. Thirty consecutive patients who required two-jaw orthognathic surgery were included. A composite skull model was reconstructed using Digital Imaging and Communications in Medicine (DICOM) data from spiral computed tomography (CT) and STL (stereolithography) data from surface scanning of the dental arch. LeFort I osteotomy of the maxilla and bilateral sagittal split ramus osteotomy of the mandible were simulated using Dolphin Imaging 11.7 Premium (Dolphin Imaging and Management Solutions, Chatsworth, CA). Genioplasty was performed, if indicated. The virtual plan was then transferred to the operating room using three-dimensional (3D)-printed surgical templates. Linear and angular differences between the virtually simulated and postoperative skull models were evaluated. The virtual surgical planning was successfully transferred to the actual surgery with the help of the 3D-printed surgical templates. All patients were satisfied with the postoperative facial profile and occlusion. The overall mean linear difference was 0.81 mm (0.71 mm for the maxilla and 0.91 mm for the mandible), and the overall mean angular difference was 0.95 degrees. Virtual surgical planning and 3D-printed surgical templates facilitated the diagnosis, treatment planning, and accurate repositioning of bony segments in two-jaw orthognathic surgery. Copyright © 2016 Elsevier Inc. All rights reserved.
Design of a 3D Navigation Technique Supporting VR Interaction
NASA Astrophysics Data System (ADS)
Boudoin, Pierre; Otmane, Samir; Mallem, Malik
2008-06-01
Multimodality is a powerful paradigm for increasing the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important stage in the realization of future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over. This model is especially devoted to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. The results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.
Cognitive Aspects of Collaboration in 3d Virtual Environments
NASA Astrophysics Data System (ADS)
Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.
2016-06-01
Human-computer interaction has entered the 3D era. The most important models representing spatial information, maps, are being transferred into 3D versions according to the specific content to be displayed. Virtual worlds (VWs) are becoming a promising area of interest because of the possibility of dynamically modifying content and of multi-user cooperation when solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also emphasized by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to such phenomena as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies for reaching and interpreting information with regard to the specific type of visualization and different levels of immersion.
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos
2014-05-01
This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, together with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.
A collaborative virtual reality environment for neurosurgical planning and training.
Kockro, Ralf A; Stadie, Axel; Schwandt, Eike; Reisch, Robert; Charalampaki, Cleopatra; Ng, Ivan; Yeo, Tseng Tsai; Hwang, Peter; Serra, Luis; Perneczky, Axel
2007-11-01
We have developed a highly interactive virtual environment that enables collaborative examination of stereoscopic three-dimensional (3-D) medical imaging data for planning, discussing, or teaching neurosurgical approaches and strategies. The system consists of an interactive console with which the user manipulates 3-D data using hand-held and tracked devices within a 3-D virtual workspace and a stereoscopic projection system. The projection system displays the 3-D data on a large screen while the user is working with it. This setup allows users to interact intuitively with complex 3-D data while sharing this information with a larger audience. We have been using this system on a routine clinical basis and during neurosurgical training courses to collaboratively plan and discuss neurosurgical procedures with 3-D reconstructions of patient-specific magnetic resonance and computed tomographic imaging data or with a virtual model of the temporal bone. Working collaboratively with the 3-D information of a large, interactive, stereoscopic projection provides an unambiguous way to analyze and understand the anatomic spatial relationships of different surgical corridors. In our experience, the system creates a unique forum for open and precise discussion of neurosurgical approaches. We believe the system provides a highly effective way to work with 3-D data in a group, and it significantly enhances teaching of neurosurgical anatomy and operative strategies.
Advanced 3-dimensional planning in neurosurgery.
Ferroli, Paolo; Tringali, Giovanni; Acerbi, Francesco; Schiariti, Marco; Broggi, Morgan; Aquino, Domenico; Broggi, Giovanni
2013-01-01
During the past decades, medical applications of virtual reality technology have been developing rapidly, ranging from a research curiosity to a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements such as feedback systems increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described. Different systems of medical image volume rendering have been used and analyzed for advanced 3-D planning: one is a commercial "ready-to-go" system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source-based software (3-D Slicer, FSL, and FreeSurfer). Different neurosurgeons at our institution found that advanced 3-D planning before surgery facilitated and increased their understanding of the complex anatomic and pathological relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the available processing power of modern computers. Although it has been found useful for facilitating the understanding of complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.
Development of an interactive anatomical three-dimensional eye model.
Allen, Lauren K; Bhattacharyya, Siddhartha; Wilson, Timothy D
2015-01-01
The discrete anatomy of the eye's intricate oculomotor system is conceptually difficult for novice students to grasp. This is problematic given that this group of muscles represents one of the most common sites of clinical intervention in the treatment of ocular motility disorders and other eye disorders. This project was designed to develop a digital, interactive, three-dimensional (3D) model of the muscles and cranial nerves of the oculomotor system. Development of the 3D model utilized data from the Visible Human Project (VHP) dataset that was refined using multiple forms of 3D software. The model was then paired with a virtual user interface in order to create a novel 3D learning tool for the human oculomotor system. Development of the virtual eye model was done while attempting to adhere to the principles of cognitive load theory (CLT) and the reduction of extraneous load in particular. The detailed approach, digital tools employed, and the CLT guidelines are described herein. © 2014 American Association of Anatomists.
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in a particle filtering framework, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence on the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
Modeling mechanical cardiopulmonary interactions for virtual environments.
Kaye, J M
1997-01-01
We have developed a computer system for modeling mechanical cardiopulmonary behavior in an interactive, 3D virtual environment. The system consists of a compact, scalar description of cardiopulmonary mechanics, with an emphasis on respiratory mechanics, that drives deformable 3D anatomy to simulate mechanical behaviors of and interactions between physiological systems. Such an environment can be used to facilitate exploration of cardiopulmonary physiology, particularly in situations that are difficult to reproduce clinically. We integrate 3D deformable body dynamics with new, formal models of (scalar) cardiorespiratory physiology, associating the scalar physiological variables and parameters with corresponding 3D anatomy. Our approach is amenable to modeling patient-specific circumstances in two ways. First, using CT scan data, we apply semi-automatic methods for extracting and reconstructing the anatomy to use in our simulations. Second, our scalar models are defined in terms of clinically-measurable, patient-specific parameters. This paper describes our approach and presents a sample of results showing normal breathing and acute effects of pneumothoraces.
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.
Xia, Yong; Wang, Kuanquan; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge for traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available because of their expense. GPUs therefore provide an alternative parallel computing environment for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single cell model (ordinary differential equations) and the diffusion term of the monodomain model (partial differential equation). This decoupling enabled the realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
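The decoupling described above can be pictured with a minimal operator-splitting sketch: one step advances the single-cell ODEs pointwise (the embarrassingly parallel part that maps well to a GPU), the next applies the diffusion term of the monodomain equation. The cell model below is a simplified FitzHugh-Nagumo stand-in, not the sheep atrial model used in the study, and all parameters and grid sizes are illustrative.

```python
# Minimal operator-splitting sketch for a monodomain-style update; the cell
# model and parameters are illustrative stand-ins, not the study's model.
import numpy as np

def cell_ode_step(v, w, dt, a=0.1, eps=0.01, beta=0.5, gamma=1.0):
    """Pointwise ODE update for every cell (embarrassingly parallel on a GPU)."""
    dv = v * (v - a) * (1.0 - v) - w
    dw = eps * (beta * v - gamma * w)
    return v + dt * dv, w + dt * dw

def diffusion_step(v, dt, dx, D=0.1):
    """Explicit finite-difference update of the monodomain diffusion term (3D Laplacian)."""
    lap = (-6.0 * v
           + np.roll(v, 1, 0) + np.roll(v, -1, 0)
           + np.roll(v, 1, 1) + np.roll(v, -1, 1)
           + np.roll(v, 1, 2) + np.roll(v, -1, 2)) / dx**2
    return v + dt * D * lap

# One coupled time step on a small toy grid.
v = np.zeros((32, 32, 32)); v[:4, :4, :4] = 1.0   # initial stimulus
w = np.zeros_like(v)
v, w = cell_ode_step(v, w, dt=0.01)                # ODE half of the split
v = diffusion_step(v, dt=0.01, dx=0.25)            # PDE half of the split
```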
Factors Affecting Training Effectiveness in Synchronous, Dispersed Virtual Environments
2014-06-01
The U.S. Navy is investigating the feasibility of incorporating distance learning technology into its technical training programs. Specifically, a distance learning model with instruction provided through 3-D virtual worlds could allow synchronous training of geographically dispersed students; the technology acceptance model is used to examine factors affecting training effectiveness in such environments.
3D Modeling of Glacial Erratic Boulders in the Haizi Shan Region, Eastern Tibetan Plateau
NASA Astrophysics Data System (ADS)
Sheriff, M.; Stevens, J.; Radue, M. J.; Strand, P.; Zhou, W.; Putnam, A. E.
2017-12-01
The focus of our team's research is to study patterns of glacier retreat in the Northern and Southern Hemispheres at the end of the last ice age. Our purpose is to search for what caused this great global warming. Such information will improve understanding of how the climate system may respond to the human-induced buildup of fossil carbon dioxide. To reconstruct past glacier behavior, we sample boulders deposited by glaciers to find the rate of ancient recession. Each sample is tested to determine the age of the boulder using 10Be cosmogenic-nuclide dating. My portion of this research focuses on creating 3D models of the sampled boulders. Such high-resolution 3D models afford visual inspection and analysis of each boulder in a virtual reality environment after fieldwork is complete. Such detailed virtual reconstructions will aid post-fieldwork evaluation of sampled boulders and will help our team interpret 10Be dating results. For example, a high-resolution model can aid post-fieldwork observations and allow scientists to determine whether the rock has been previously covered, eroded, or moved since it was deposited by the glacier but before the sample was collected. A model can also be useful for recognizing patterns between age and boulder morphology. Lastly, the models can be used by those who wish to review the data after publication. To create the 3D models, I will use GoPro Hero4 and Canon PowerShot digital cameras to collect photographs of each boulder from different angles. I will then process the digital imagery using 'structure-from-motion' techniques and Agisoft Photoscan software. All boulder photographs will be synthesized into 3D models referenced to a standardized scale. We will then import these models into an environment that can be accessed using cutting-edge virtual reality technology. By producing a virtual archive of 3D glacial boulder reconstructions, I hope to provide deeper insight into the geological processes influencing these boulders during and since their deposition, and ultimately to improve methods that are being used to develop glacial histories on a global scale.
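As a hedged illustration of the first stage of such a structure-from-motion workflow (the project itself performs this processing in Agisoft Photoscan), the sketch below detects and matches image features between two overlapping boulder photographs with OpenCV; the file names are placeholders.

```python
# Illustrative first stage of a structure-from-motion pipeline: detecting and
# matching features between two overlapping boulder photographs. File names
# are placeholders; the study's actual processing is done in Agisoft Photoscan.
import cv2

img1 = cv2.imread("boulder_view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("boulder_view_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)               # detect keypoints and descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} tentative correspondences between the two views")
# Subsequent SfM stages estimate camera poses from such correspondences and
# triangulate a dense point cloud, which is then meshed, textured and scaled.
```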
Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan
2016-08-01
Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created and manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpt an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculpted autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
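The virtual closure step can be pictured as a simple height-field interpolation: sample the intact floor around the defect and interpolate a smooth surface across the gap. The sketch below uses SciPy's cubic griddata on synthetic points as a stand-in for the spline interpolation mentioned in the abstract; the geometry and radius of the "defect" are invented for illustration.

```python
# Hedged sketch of "virtually closing" a defect: interpolate the surface height
# over the missing region from surrounding intact points. All data is synthetic.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# (x, y, z) samples of the intact floor around the defect (toy bowl-shaped surface).
xy = rng.uniform(-20, 20, size=(400, 2))
z = 0.02 * xy[:, 0] ** 2 + 0.01 * xy[:, 1] ** 2

# Keep only points outside a circular "defect", then interpolate across it.
outside = np.hypot(xy[:, 0], xy[:, 1]) > 8.0
gx, gy = np.meshgrid(np.linspace(-20, 20, 81), np.linspace(-20, 20, 81))
z_closed = griddata(xy[outside], z[outside], (gx, gy), method="cubic")

print("reconstructed grid:", z_closed.shape)   # height field spanning the closed defect
```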
DOE Office of Scientific and Technical Information (OSTI.GOV)
Do, Phuong T.; Moreland, John R.; Delgado, Catherine
Our research provides an innovative solution for optimizing learning effectiveness and improving postsecondary education through the development of virtual simulators that can be easily used and integrated into existing wind energy curricula. Two 3D virtual simulators were developed in our laboratory for use in an immersive 3D virtual reality (VR) system or for 3D display on a 2D screen. Our goal is to apply these prototypical simulators to train postsecondary students and professionals in wind energy education and to offer experiential learning opportunities in 3D modeling, simulation, and visualization. The issue of transferring learned concepts to practical applications is a widespread problem in postsecondary education. Related to this issue is a critical demand to educate and train a generation of professionals for the wind energy industry. With initiatives such as the U.S. Department of Energy's "20% Wind Energy by 2030" outlining an exponential increase of wind energy capacity over the coming years, revolutionary educational reform is needed to meet the demand for education in the field of wind energy. The development and implementation of these virtual simulators and the accompanying curriculum will propel national reforms, meeting the needs of the wind energy industrial movement and addressing broader educational issues that affect a number of disciplines.
NASA Astrophysics Data System (ADS)
Gong, Jun; Zhu, Qing
2006-10-01
As a special case of VGE in the fields of AEC (architecture, engineering and construction), the Virtual Building Environment (VBE) has attracted broad attention. Highly complex, large-scale 3D spatial data is the main bottleneck of VBE applications, so 3D spatial data organization and management is the core technology for VBE. This paper puts forward a 3D spatial data model for VBE that can be implemented with high performance. The inherent storage method of CAD data introduces redundancy and does not address efficient visualization, which is a practical bottleneck when integrating CAD models, so an efficient method to integrate CAD model data is also put forward. Moreover, since 3D spatial indices based on the R-tree are usually limited by low efficiency due to the severe overlap of sibling nodes and the uneven size of nodes, a new node-choosing algorithm for the R-tree is proposed.
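For orientation, the sketch below shows the classic R-tree node-choosing (choose-subtree) step that such work builds on: insert a new entry into the child whose 3D bounding box requires the least volume enlargement, breaking ties by smaller volume. The paper's improved criterion is not reproduced here; this is only the baseline heuristic, with invented example boxes.

```python
# Baseline R-tree choose-subtree heuristic: least volume enlargement, then
# smaller volume as a tie-breaker. Boxes and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Box:
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

    def volume(self) -> float:
        return (self.hi[0] - self.lo[0]) * (self.hi[1] - self.lo[1]) * (self.hi[2] - self.lo[2])

    def union(self, other: "Box") -> "Box":
        lo = tuple(min(a, b) for a, b in zip(self.lo, other.lo))
        hi = tuple(max(a, b) for a, b in zip(self.hi, other.hi))
        return Box(lo, hi)

def choose_subtree(children: list, new_entry: Box) -> int:
    """Return the index of the child node whose box grows least when absorbing new_entry."""
    def cost(child: Box):
        enlargement = child.union(new_entry).volume() - child.volume()
        return (enlargement, child.volume())          # tie-break on smaller volume
    return min(range(len(children)), key=lambda i: cost(children[i]))

# Example: three sibling boxes competing for a small new building footprint.
siblings = [Box((0, 0, 0), (10, 10, 10)),
            Box((10, 0, 0), (20, 10, 5)),
            Box((0, 10, 0), (10, 20, 8))]
print(choose_subtree(siblings, Box((11, 2, 1), (12, 3, 2))))   # -> 1 (no enlargement needed)
```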
Kim, Dae-Seung; Woo, Sang-Yoon; Yang, Hoon Joo; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Hwang, Soon Jung; Yi, Won-Jin
2014-12-01
Accurate surgical planning and accurate transfer of the planning are very important in orthognathic surgery for achieving a successful surgical outcome with appropriate improvement. Conventionally, paper surgery is performed based on a 2D cephalometric radiograph, and the results are expressed using cast models and an articulator. We developed an integrated orthognathic surgery system with 3D virtual planning and image-guided transfer. The maxillary surgery of orthognathic patients was planned virtually, and the planning results were transferred to the cast model by image guidance. During virtual planning, the displacement of the reference points was checked against the displacement from conventional paper surgery at each procedure. The results of virtual surgery were transferred to the physical cast models directly through image guidance. The root mean square (RMS) difference between virtual surgery and conventional model surgery was 0.75 ± 0.51 mm for 12 patients. The RMS difference between virtual surgery and the image-guidance results was 0.78 ± 0.52 mm, which was not significantly different from the difference for conventional model surgery. The image-guided orthognathic surgery system integrated with virtual planning will replace physical model surgical planning and enable transfer of the virtual planning directly, without the need for an intermediate splint. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
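As a reference for how such agreement figures are computed, the sketch below evaluates the root mean square of the Euclidean distances between corresponding 3D reference points from two results (for example, virtual planning versus model surgery). The point coordinates are invented for illustration.

```python
# Minimal sketch of an RMS comparison between two sets of corresponding
# 3D reference points; the coordinates below are illustrative only.
import numpy as np

def rms_difference(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """RMS of Euclidean distances between corresponding 3D reference points."""
    d = np.linalg.norm(points_a - points_b, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

virtual_plan  = np.array([[0.0, 0.0, 0.0], [10.0, 2.0, 1.0], [5.0, 8.0, 3.0]])
model_surgery = np.array([[0.4, -0.3, 0.2], [10.5, 2.2, 1.4], [4.6, 8.5, 2.7]])
print(f"RMS difference: {rms_difference(virtual_plan, model_surgery):.2f} mm")
```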
NASA Astrophysics Data System (ADS)
Murphy, M.; Chenaux, A.; Keenaghan, G.; Gibson, V.; Butler, J.; Pybusr, C.
2017-08-01
In this paper the recording and design of a Virtual Reality Immersive Model of Armagh Observatory is presented, which will replicate the historic buildings and landscape with distant meridian markers and the positions of its principal historic instruments within a model of the night sky showing the position of bright stars. The virtual reality model can be used for educational purposes, allowing the instruments within the historic building model to be manipulated in 3D space to demonstrate how the position measurements of stars were made in the 18th century. A description is given of current student and researcher activities concerning on-site recording and surveying and the virtual modelling of the buildings and landscape. This is followed by a design for a Virtual Reality Immersive Model of Armagh Observatory using game engines and virtual learning platforms and concepts.
ERIC Educational Resources Information Center
Roth, Jeremy A.; Wilson, Timothy D.; Sandig, Martin
2015-01-01
Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education 3D models and learning tools have been associated…
[Preparation of simulated craniocerebral models via three-dimensional printing technique].
Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T
2016-08-09
A three-dimensional (3D) printing technique was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of the skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and related neural tracts of the brain were extracted from thin-slice scans (slice thickness 0.5 mm) of computed tomography (CT), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer, which produced life-sized craniocerebral models for surgical planning and surgical simulation. The 3D-printed craniocerebral models allowed neurosurgeons to perform complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons to choose appropriate aneurysm clips as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, which helped avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D-printed craniocerebral models can improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to the study of functional anatomy.
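The gray-value segmentation and surface extraction step can be illustrated with open tools: threshold a CT volume and run marching cubes to obtain a printable triangle mesh. The study uses MIMICS for this step; the sketch below with scikit-image only shows the principle, and the file name and threshold are placeholders.

```python
# Illustrative segmentation-to-mesh step: threshold a CT volume by gray value
# and extract a triangulated surface with marching cubes. The volume file and
# the bone threshold are hypothetical placeholders.
import numpy as np
from skimage import measure

ct = np.load("head_ct.npy")                     # hypothetical (z, y, x) CT volume in HU
bone_mask = (ct > 300).astype(np.float32)       # placeholder bone threshold in HU

# Marching cubes turns the binary mask into a surface mesh (vertices + faces)
# that can be exported (e.g., as STL) and sent to a 3D printer.
verts, faces, normals, values = measure.marching_cubes(
    bone_mask, level=0.5, spacing=(0.5, 0.5, 0.5))
print(f"{len(verts)} vertices, {len(faces)} triangles")
```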
Uchida, Masafumi
2014-04-01
A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations and multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US), among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation imaging. If the 2D source images are poor, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
Galantucci, Luigi Maria; Percoco, Gianluca; Lavecchia, Fulvio; Di Gioia, Eliana
2013-05-01
The article describes a new methodology to scan and integrate the facial soft tissue surface with dental hard tissue models in a three-dimensional (3D) virtual environment, for a novel diagnostic approach. The facial and dental scans can be acquired using any optical scanning system: the models are then aligned and integrated to obtain a fully navigable virtual representation of the head of the patient. In this article, we report in detail and further implement a method for integrating 3D digital cast models into a 3D facial image to visualize the anatomic position of the dentition. This system uses several 3D technologies to scan and digitize, integrating them with traditional dentistry records. The acquisitions were mainly performed using photogrammetric scanners, suitable for clinics or hospitals, able to obtain high mesh resolution and optimal surface texture for photorealistic rendering of the face. To increase the quality and the resolution of the photogrammetric scanning of the dental elements, the authors propose a new technique to enhance the texture of the dental surface. Three examples of the application of the proposed procedure are reported in this article, using first laser scanning and photogrammetry and then only photogrammetry. Using cheek retractors, it is possible to scan a great number of dental elements directly. The final results are navigable 3D models that integrate facial soft tissue and dental hard tissues. The method is characterized by the complete absence of ionizing radiation, portability and simplicity, fast acquisition, easy alignment of the 3D models, and a wide angle of view of the scanner. This method is completely noninvasive and can be repeated any time the physician needs new clinical records. The 3D virtual model is a precise representation of both the soft and the hard tissues scanned, and it is possible to take any dimensional measurement directly in the virtual space, for fully integrated 3D anthropometry and cephalometry. Moreover, the authors propose a method completely based on close-range photogrammetric scanning, able to detect facial and dental surfaces and reducing the time, the complexity, and the cost of the scanning operations and the numerical elaboration.
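The alignment of the dental model to the facial surface is the kind of step commonly handled with iterative closest point (ICP) registration. The abstract does not name a specific algorithm or library, so the sketch below, using Open3D point-to-point ICP with placeholder file names and threshold, is only an illustration of how such an alignment could be performed.

```python
# Hedged sketch of aligning a dental scan to a facial scan with point-to-point
# ICP in Open3D; file names and the correspondence radius are placeholders.
import numpy as np
import open3d as o3d

face  = o3d.io.read_point_cloud("face_scan.ply")      # hypothetical facial surface scan
teeth = o3d.io.read_point_cloud("dental_cast.ply")    # hypothetical dental cast scan

# Refine an initial guess (identity here) so the dentition lands in its
# anatomic position relative to the face.
result = o3d.pipelines.registration.registration_icp(
    teeth, face, 2.0, np.eye(4),                       # 2.0 mm search radius (placeholder)
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

teeth.transform(result.transformation)                  # move the dental model into the face frame
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
```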
Applied virtual reality at the Research Triangle Institute
NASA Technical Reports Server (NTRS)
Montoya, R. Jorge
1994-01-01
Virtual Reality (VR) is a way for humans to use computers in visualizing, manipulating and interacting with large geometric databases. This paper describes a VR infrastructure and its application to marketing, modeling, architectural walk-through, and training problems. The VR integration techniques used in these applications are based on a uniform approach which promotes portability and reusability of the developed modules. For each problem, a 3D object database is created using data captured by hand or electronically. The objects' realism is enhanced through either procedural or photo textures. The virtual environment is created and populated with the database using software tools which also support interaction with and immersion in the environment. These capabilities are augmented by other sensory channels such as voice recognition, 3D sound, and tracking. Four applications are presented: a virtual furniture showroom, virtual reality models of the North Carolina Global TransPark, a walk-through of the Dresden Frauenkirche, and a maintenance training simulator for the National Guard.
Applicability of three-dimensional imaging techniques in fetal medicine
Werner Júnior, Heron; dos Santos, Jorge Lopes; Belmonte, Simone; Ribeiro, Gerson; Daltro, Pedro; Gasparetto, Emerson Leandro; Marchiori, Edson
2016-01-01
Objective To generate physical models of fetuses from images obtained with three-dimensional ultrasound (3D-US), magnetic resonance imaging (MRI), and, occasionally, computed tomography (CT), in order to guide additive manufacturing technology. Materials and Methods We used 3D-US images of 31 pregnant women, including 5 who were carrying twins. If abnormalities were detected by 3D-US, both MRI and in some cases CT scans were then immediately performed. The images were then exported to a workstation in DICOM format. A single observer performed slice-by-slice manual segmentation using a digital high resolution screen. Virtual 3D models were obtained from software that converts medical images into numerical models. Those models were then generated in physical form through the use of additive manufacturing techniques. Results Physical models based upon 3D-US, MRI, and CT images were successfully generated. The postnatal appearance of either the aborted fetus or the neonate closely resembled the physical models, particularly in cases of malformations. Conclusion The combined use of 3D-US, MRI, and CT could help improve our understanding of fetal anatomy. These three screening modalities can be used for educational purposes and as tools to enable parents to visualize their unborn baby. The images can be segmented and then applied, separately or jointly, in order to construct virtual and physical 3D models. PMID:27818540
A three-dimensional virtual environment for modeling mechanical cardiopulmonary interactions.
Kaye, J M; Primiano, F P; Metaxas, D N
1998-06-01
We have developed a real-time computer system for modeling mechanical physiological behavior in an interactive, 3-D virtual environment. Such an environment can be used to facilitate exploration of cardiopulmonary physiology, particularly in situations that are difficult to reproduce clinically. We integrate 3-D deformable body dynamics with new, formal models of (scalar) cardiorespiratory physiology, associating the scalar physiological variables and parameters with the corresponding 3-D anatomy. Our framework enables us to drive a high-dimensional system (the 3-D anatomical models) from one with fewer parameters (the scalar physiological models) because of the nature of the domain and our intended application. Our approach is amenable to modeling patient-specific circumstances in two ways. First, using CT scan data, we apply semi-automatic methods for extracting and reconstructing the anatomy to use in our simulations. Second, our scalar physiological models are defined in terms of clinically measurable, patient-specific parameters. This paper describes our approach, problems we have encountered and a sample of results showing normal breathing and acute effects of pneumothoraces.
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.
3D Virtual Environment Used to Support Lighting System Management in a Building
NASA Astrophysics Data System (ADS)
Sampaio, A. Z.; Ferreira, M. M.; Rosário, D. P.
The main aim of the research project, which is in progress at the UTL, is to develop a virtual interactive model as a tool to support decision-making in the planning of construction maintenance and facilities management. The virtual model makes it possible to transmit to the user, visually and interactively, information related to the components of a building, defined as a function of the time variable. In addition, the analysis of solutions for repair work or substitution and the inherent costs are predicted, with the results obtained interactively and visualized in the virtual environment itself. The first component of the virtual prototype concerns the management of lamps in a lighting system and was applied in a case study. The interactive application allows the examination of the physical model, visualizing, for each element modeled in 3D and linked to a database, the corresponding technical information concerning the use of the material, calculated for different points in time during its life. The control of a lamp stock, the constant updating of lifetime information and the planning of periodic local inspections are supported by the prototype. This is an important means of cooperation between the collaborators involved in building management.
Virtual planning for craniomaxillofacial surgery--7 years of experience.
Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo
2014-07-01
Contemporary computer-assisted surgery systems allow for virtual simulation of even complex surgical procedures with increasingly realistic predictions. Preoperative workflows are established and different commercial software solutions are available. The potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool was assessed retrospectively by comparing predictions and surgical results. Since 2006 virtual simulation has been performed in selected patient cases affected by complex craniomaxillofacial disorders (n = 8), in addition to standard surgical planning based on patient-specific 3D models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical result, and soft tissue simulation proved to be helpful. In combination with classic 3D models showing the underlying skeletal pathology, virtual simulation improved the planning and transfer of craniomaxillofacial corrections. The additional work and expense may be justified by the increased possibilities for visualisation, information, instruction and documentation in selected craniomaxillofacial procedures. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
A Virtual Reality Simulator Prototype for Learning and Assessing Phaco-sculpting Skills
NASA Astrophysics Data System (ADS)
Choi, Kup-Sze
This paper presents a virtual reality based simulator prototype for learning phacoemulsification in cataract surgery, with a focus on the skills required for making a cross-shaped trench in the cataractous lens with an ultrasound probe during the phaco-sculpting procedure. An immersive virtual environment is created with 3D models of the lens and surgical tools. A haptic device is also used as the 3D user interface. Phaco-sculpting is simulated by interactively deleting the constituent tetrahedrons of the lens model. Collisions between the virtual probe and the lens are identified efficiently by partitioning the space containing the lens hierarchically with an octree. The simulator can be programmed to collect real-time quantitative user data for reviewing and assessing a trainee's performance in an objective manner. A game-based learning environment can be created on top of the simulator by incorporating gaming elements based on the quantifiable performance metrics.
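As a rough illustration of why the octree helps, the sketch below buckets tetrahedra into a fixed-depth point octree (using each tetrahedron's centroid as a proxy) so that a probe-tip query only has to inspect the candidates stored in one leaf cell rather than the whole lens mesh. The class layout, depth and sizes are invented for illustration and are not the paper's data structure.

```python
# Toy point octree: tetrahedra are bucketed by centroid, and a probe-tip query
# returns only the candidates in the matching leaf cell. Illustrative only.
import numpy as np

class Octree:
    def __init__(self, center, half_size, depth=4):
        self.center, self.half, self.items = np.asarray(center, float), half_size, []
        self.children = None
        if depth > 0:
            offsets = np.array([[dx, dy, dz] for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)])
            self.children = [Octree(self.center + o * half_size / 2, half_size / 2, depth - 1)
                             for o in offsets]

    def insert(self, item, point):
        node = self
        while node.children:
            node = node.children[node._index(point)]
        node.items.append(item)

    def query(self, point):
        node = self
        while node.children:
            node = node.children[node._index(point)]
        return node.items                       # candidate tetrahedra near the probe tip

    def _index(self, point):
        d = np.asarray(point, float) >= self.center
        return int(d[0]) * 4 + int(d[1]) * 2 + int(d[2])

tree = Octree(center=(0, 0, 0), half_size=5.0, depth=4)
tree.insert("tet_17", (1.2, -0.4, 0.3))         # tetrahedron centroid as a proxy
print(tree.query((1.1, -0.5, 0.2)))             # candidates to test exactly, then delete if hit
```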
NASA Technical Reports Server (NTRS)
2002-01-01
Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.
Virtual environments simulation in research reactor
NASA Astrophysics Data System (ADS)
Muhamad, Shalina Bt. Sheik; Bahrin, Muhammad Hannan Bin
2017-01-01
Virtual reality based simulations are interactive and engaging and have useful potential for improving safety training. Virtual reality technology can be used to train workers who are unfamiliar with the physical layout of an area. In this study, a simulation program based on a virtual environment of a research reactor was developed. The platform used for the virtual simulation is the 3DVia software, whose rendering capabilities, physics for movement and collision, and interactive navigation features have been taken advantage of. A real research reactor was virtually modelled and simulated, with avatar models adopted to simulate walking. Collision detection algorithms were developed for various parts of the 3D building and the avatars to restrain the avatars to certain regions of the virtual environment. A user can control the avatar to move around inside the virtual environment. Thus, this work can assist in the training of personnel, as in evaluating the radiological safety of the research reactor facility.
Automatic 3D virtual scenes modeling for multisensors simulation
NASA Astrophysics Data System (ADS)
Latger, Jean; Le Goff, Alain; Cathala, Thierry; Larive, Mathieu
2006-05-01
SEDRIS, which stands for Synthetic Environment Data Representation and Interchange Specification, is a DoD/DMSO initiative to federate and make 3D mock-ups interoperable in the frame of virtual reality and simulation. This paper shows an original application of the SEDRIS concept to physical multi-sensor simulation research, whereas SEDRIS is more classically known for training simulation. CHORALE (simulated Optronic Acoustic Radar battlefield) is used by the French DGA/DCE (Directorate for Test and Evaluation of the French Ministry of Defense) to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multi-spectral 3D scenes and to generate the physical signal received by a sensor, typically an IR sensor. In the scope of this CHORALE workshop, the French DGA has decided to introduce a new SEDRIS-based 3D terrain modeling tool that creates 3D databases automatically, directly usable by the physical sensor simulation renderers of CHORALE. This AGETIM tool turns geographical source data (including GIS facilities) into meshed geometry enhanced with the sensor physical extensions, fitted to the ray-tracing rendering of CHORALE for the infrared, electromagnetic and acoustic spectra. The basic idea is to enhance the 2D source level directly with the physical data rather than enhancing the 3D meshed level, which is more efficient (rapid database generation) and more reliable (the database can be regenerated many times, changing only some parameters). The paper concludes with the latest evolution of AGETIM in the scope of mission rehearsal for urban warfare using sensors. This evolution includes indoor modeling for automatic generation of the inner parts of buildings.
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
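One plausible reading of the "local Gaussians" comparison described above is to smooth both the real camera frame and the virtual (rendered) frame, take their absolute difference, and threshold it into an error mask that highlights where the 3D world model disagrees with reality. The OpenCV sketch below implements that reading; parameters and file names are illustrative, not the authors' implementation.

```python
# Illustrative error-mask computation between a real frame and a rendering of
# the robot's virtual world; kernel size, threshold and file names are invented.
import cv2

real    = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
virtual = cv2.imread("virtual_render.png", cv2.IMREAD_GRAYSCALE)

real_s    = cv2.GaussianBlur(real, (11, 11), 3.0)       # local Gaussian smoothing
virtual_s = cv2.GaussianBlur(virtual, (11, 11), 3.0)

diff = cv2.absdiff(real_s, virtual_s)
_, error_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

# Regions set in error_mask are candidates for the next fixation point.
ys, xs = error_mask.nonzero()
print("pixels needing attention:", len(xs))
```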
An augmented reality tool for learning spatial anatomy on mobile devices.
Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti
2017-09-01
Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT) derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy, as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT-derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.
Virtual probing system for medical volume data
NASA Astrophysics Data System (ADS)
Xiao, Yongfei; Fu, Yili; Wang, Shuguo
2007-12-01
Because of the huge computational cost of 3D medical data visualization, looking into the inner data interactively has always been a problem to be solved. In this paper, we present a novel approach to explore 3D medical datasets in real time by utilizing a 3D widget to manipulate the scanning plane. With the help of the 3D texture capability of modern graphics cards, a virtual scanning probe is used to explore an oblique clipping plane of the medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. It will be a valuable tool in anatomy education and in the understanding of medical images in medical research.
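The core of the oblique clipping plane is simply resampling the volume along an arbitrarily oriented plane. The sketch below is a CPU-side analogue of what the paper does on the GPU with 3D textures, using trilinear interpolation via SciPy on a synthetic volume; the plane axes and extents are illustrative.

```python
# CPU-side analogue of an oblique clipping plane: sample a 3D volume along an
# arbitrarily oriented plane with trilinear interpolation. Volume is synthetic.
import numpy as np
from scipy.ndimage import map_coordinates

vol = np.random.rand(64, 64, 64).astype(np.float32)      # stand-in medical volume

center = np.array([32.0, 32.0, 32.0])
u = np.array([1.0, 0.0, 0.0])                              # first in-plane axis
v = np.array([0.0, 0.70710678, 0.70710678])                # second axis, tilts the plane 45 degrees

s, t = np.meshgrid(np.arange(-30, 31), np.arange(-30, 31))  # plane pixel grid
coords = center[:, None, None] + u[:, None, None] * s + v[:, None, None] * t

oblique_slice = map_coordinates(vol, coords, order=1)       # trilinear sampling
print(oblique_slice.shape)                                   # (61, 61) scanning-plane image
```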
Three-dimensional printing in cardiology: Current applications and future challenges.
Luo, Hongxing; Meyer-Szary, Jarosław; Wang, Zhongmin; Sabiniewicz, Robert; Liu, Yuhao
2017-01-01
Three-dimensional (3D) printing has attracted huge interest in recent years. Broadly speaking, it refers to technology which converts a predesigned virtual model into a touchable object. In clinical medicine, it usually converts a series of two-dimensional medical images acquired through computed tomography, magnetic resonance imaging or 3D echocardiography into a physical model. Medical 3D printing consists of three main steps: image acquisition, virtual reconstruction and 3D manufacturing. It is a promising tool for preoperative evaluation, medical device design, hemodynamic simulation and medical education, and it is also likely to reduce operative risk and increase operative success. However, most relevant studies are case reports or series, which are underpowered for testing its actual effect on patient outcomes. The decision to make a 3D cardiac model may seem arbitrary, since it is mostly based on a cardiologist's perceived difficulty in performing an interventional procedure. A uniform consensus is urgently needed to standardize the key steps of 3D printing, from image acquisition to final production. In the future, more rigorously designed clinical trials could further validate the effect of 3D printing on the treatment of cardiovascular diseases. (Cardiol J 2017; 24, 4: 436-444).
ERIC Educational Resources Information Center
Lau, Kung Wong
2015-01-01
Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…
Leipner, Anja; Dobler, Erika; Braun, Marcel; Sieberth, Till; Ebert, Lars
2017-10-01
3D reconstructions of motor vehicle collisions are used to identify the causes of these events and potential violations of traffic regulations. Thus far, the reconstruction of mirrors has been a problem, since it is often based on approximations or inaccurate data. Our aim with this paper was to confirm that structured-light scans of a mirror improve the accuracy of simulating the field of view of mirrors. We analyzed the performance of virtual mirror surfaces based on structured-light scans, using real mirror surfaces and their reflections as references. We used an ATOS GOM III scanner to scan the mirrors and processed the 3D data using Geomagic Wrap. For scene reconstruction and to generate virtual images, we used 3ds Max. We compared the simulated virtual images and photographs of real scenes using Adobe Photoshop. Our results showed that we achieved clear and even mirror results and that the mirrors behaved as expected. The greatest measured deviation between an original photo and the corresponding virtual image was 20 pixels in the transverse direction for an image width of 4256 pixels. We discussed the influences of data processing and alignment of the 3D models on the results. The study was limited to a distance of 1.6 m, and the method was not able to simulate an interior mirror. In conclusion, structured-light scans of mirror surfaces can be used to simulate virtual mirror surfaces for 3D motor vehicle collision reconstruction. Copyright © 2017 Elsevier B.V. All rights reserved.
Virtual environment display for a 3D audio room simulation
NASA Technical Reports Server (NTRS)
Chapin, William L.; Foster, Scott H.
1992-01-01
The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.
2011-01-01
Background The performance of 3D-based virtual screening similarity functions is affected by the applied conformations of compounds. Therefore, the results of 3D approaches are often less robust than 2D approaches. The application of 3D methods on multiple conformer data sets normally reduces this weakness, but entails a significant computational overhead. Therefore, we developed a special conformational space encoding by means of Gaussian mixture models and a similarity function that operates on these models. The application of a model-based encoding allows an efficient comparison of the conformational space of compounds. Results Comparisons of our 4D flexible atom-pair approach with over 15 state-of-the-art 2D- and 3D-based virtual screening similarity functions on the 40 data sets of the Directory of Useful Decoys show a robust performance of our approach. Even 3D-based approaches that operate on multiple conformers yield inferior results. The 4D flexible atom-pair method achieves an averaged AUC value of 0.78 on the filtered Directory of Useful Decoys data sets. The best 2D- and 3D-based approaches of this study yield an AUC value of 0.74 and 0.72, respectively. As a result, the 4D flexible atom-pair approach achieves an average rank of 1.25 with respect to 15 other state-of-the-art similarity functions and four different evaluation metrics. Conclusions Our 4D method yields a robust performance on 40 pharmaceutically relevant targets. The conformational space encoding enables an efficient comparison of the conformational space. Therefore, the weakness of the 3D-based approaches on single conformations is circumvented. With over 100,000 similarity calculations on a single desktop CPU, the utilization of the 4D flexible atom-pair in real-world applications is feasible. PMID:21733172
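The central idea of encoding a conformational ensemble with a mixture model can be sketched on a single atom pair: fit a small Gaussian mixture to the distribution of that pair's distance across conformers, then compare compounds via their fitted models instead of via explicit conformer sets. The example below uses scikit-learn on synthetic distances and is only an illustration, not the authors' 4D flexible atom-pair implementation.

```python
# Hedged sketch: describe one atom-pair distance over a conformational ensemble
# with a two-component Gaussian mixture. Data and component count are invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Distances (angstroms) of one atom pair sampled over 200 conformers:
# a bimodal ensemble, e.g. an extended and a folded family of conformations.
distances = np.concatenate([rng.normal(4.5, 0.3, 120),
                            rng.normal(7.8, 0.4, 80)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(distances)
print("means:", gmm.means_.ravel())          # roughly [4.5, 7.8]
print("weights:", gmm.weights_)              # roughly [0.6, 0.4]

# Two such models could then be compared with a closed-form similarity between
# mixtures, avoiding pairwise comparison of all conformer pairs.
```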
Virtual hydrology observatory: an immersive visualization of hydrology modeling
NASA Astrophysics Data System (ADS)
Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas
2009-02-01
The Virtual Hydrology Observatory will provide students with the ability to observe an integrated hydrology simulation with an instructional interface, using a desktop-based or immersive virtual reality setup. The goal of the virtual hydrology observatory application is to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model Weather Research and Forecasting (WRF) and the hydrology model Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The output from both the WRF and GSSHA models is then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data processing techniques provided by VTK include 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using VRFlowVis and the VR Juggler software toolkit. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE(TM)-like system is used to run the Virtual Hydrology Observatory and provide the students with a fully immersive experience.
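The 2D Delaunay triangulation step mentioned above turns scattered model output points into a surface mesh that a field (such as water depth) can be draped over. The minimal sketch below uses SciPy as a stand-in for the VTK filter and synthetic sample locations.

```python
# Minimal illustration of 2D Delaunay triangulation of scattered sample points;
# SciPy stands in here for the VTK filter, and the points are synthetic.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)
points = rng.uniform(0.0, 1000.0, size=(200, 2))    # scattered (x, y) sample locations
tri = Delaunay(points)

print(f"{len(tri.simplices)} triangles covering {len(points)} points")
# tri.simplices indexes triangle vertices; a per-point field (e.g. water depth)
# can be draped over this mesh for rendering.
```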
Generating Contextual Descriptions of Virtual Reality (VR) Spaces
NASA Astrophysics Data System (ADS)
Olson, D. M.; Zaman, C. H.; Sutherland, A.
2017-12-01
Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.
The Development of a Virtual Dinosaur Museum
ERIC Educational Resources Information Center
Tarng, Wernhuar; Liou, Hsin-Hun
2007-01-01
The objective of this article is to study the network and virtual reality technologies for developing a virtual dinosaur museum, which provides a Web-learning environment for students of all ages and the general public to know more about dinosaurs. We first investigate the method for building the 3D dynamic models of dinosaurs, and then describe…
Korocsec, D; Holobar, A; Divjak, M; Zazula, D
2005-12-01
Medicine is a difficult thing to learn. Experimenting with real patients should not be the only option; simulation deserves special attention here. The Virtual Reality Modelling Language (VRML), as a tool for building virtual objects and scenes, has a good record of educational applications in medicine, especially for static and animated visualisations of body parts and organs. However, to create computer simulations resembling situations in real environments, the required level of interactivity and dynamics is difficult to achieve. In the present paper we describe some approaches and techniques which we used to push the limits of current VRML technology further toward dynamic 3D representation of virtual environments (VEs). Our demonstration is based on the implementation of a virtual baby model whose vital signs can be controlled from an external Java application. The main contributions of this work are: (a) an outline and evaluation of the three-level VRML/Java implementation of the dynamic virtual environment, (b) a proposal for a modified VRML TimeSensor node, which greatly improves the overall control of system performance, and (c) the architecture of a prototype distributed virtual environment for training in neonatal resuscitation, comprising the interactive virtual newborn, an active bedside monitor for vital signs and a full 3D representation of the surgery room.
NASA Astrophysics Data System (ADS)
Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo
2011-09-01
Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualizations of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional data (4D), 3D in space plus time, is huge for rendering. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphic cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework, in which an octree-based multiresolution data structure is implemented to organize time series 3D geospatial data to be used in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data of a simulated dust storm event are rendered in World Wind, an open source virtual globe. The rendering performances with and without the octree-based LOD strategy are compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances the visualization performance when rendering dynamic geospatial phenomena in virtual globes.
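The abstract does not give implementation details, but a minimal sketch of the kind of octree node and view-dependent LOD selection it describes might look as follows in Python. The node fields, the size-over-distance refinement test, and the threshold value are assumptions for illustration, not the paper's actual data structure.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class OctreeNode:
    center: np.ndarray          # (3,) cell centre in world coordinates
    half_size: float            # half the edge length of the cubic cell
    data_key: str               # key used to fetch this node's volume brick
    children: List["OctreeNode"] = field(default_factory=list)

def select_lod(node: OctreeNode, eye: np.ndarray, threshold: float,
               out: List[OctreeNode]) -> None:
    """View-dependent LOD: refine a node only while it appears large on screen.

    The cell-size-to-viewer-distance ratio approximates projected size;
    'threshold' trades frame rate against detail (values are illustrative).
    """
    distance = max(np.linalg.norm(node.center - eye), 1e-6)
    if node.children and (node.half_size / distance) > threshold:
        for child in node.children:
            select_lod(child, eye, threshold, out)
    else:
        out.append(node)        # render this node's brick at its own resolution
```

In a real system the selected bricks would then be fetched and synchronised with the rendering loop, as the paper's framework does; the sketch only shows the traversal decision.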
Design and fabrication of complete dentures using CAD/CAM technology
Han, Weili; Li, Yanfeng; Zhang, Yue; lv, Yuan; Zhang, Ying; Hu, Ping; Liu, Huanyue; Ma, Zheng; Shen, Yi
2017-01-01
The aim of the study was to test the feasibility of using commercially available computer-aided design and computer-aided manufacturing (CAD/CAM) technology, including the 3Shape Dental System 2013 trial version, WIELAND V2.0.049 and a WIELAND ZENOTEC T1 milling machine, to design and fabricate complete dentures. The full-denture modeling process available in the trial version of 3Shape Dental System 2013 was used to design virtual complete dentures on the basis of 3-dimensional (3D) digital edentulous models generated from the physical models. The virtual complete dentures designed were exported to the WIELAND V2.0.049 CAM software. A WIELAND ZENOTEC T1 milling machine controlled by the CAM software was used to fabricate physical dentitions and baseplates by milling acrylic resin composite plates. The physical dentitions were bonded to the corresponding baseplates to form the maxillary and mandibular complete dentures. Virtual complete dentures were successfully designed using the software through several steps including generation of 3D digital edentulous models, model analysis, arrangement of artificial teeth, trimming of the relief area, and occlusal adjustment. Physical dentitions and baseplates were successfully fabricated according to the designed virtual complete dentures using a milling machine controlled by the CAM software. Bonding the physical dentitions to the corresponding baseplates generated the final physical complete dentures. Our study demonstrated that complete dentures can be successfully designed and fabricated using CAD/CAM technology. PMID:28072686
Web-based interactive 3D visualization as a tool for improved anatomy learning.
Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan
2009-01-01
Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain from its use in reaching their anatomical learning objectives. Several 3D vascular VR models were created using an interactive segmentation tool based on the "virtual contrast injection" method. This method allows users, with relative ease, to convert computed tomography or magnetic resonance images into vivid 3D VR movies using the OsiriX software equipped with the CMIV CTA plug-in. Once created using the segmentation tool, the image series were exported in QuickTime Virtual Reality (QTVR) format and integrated within a web framework of the Educational Virtual Anatomy (EVA) program. A total of nine QTVR movies were produced, encompassing most of the major arteries of the body. These movies were supplemented with associated information, color keys, and notes. The results indicate that, in general, students' attitudes towards the EVA program were positive when it was compared with anatomy textbooks, but not when compared with dissection. Additionally, knowledge tests suggest a potentially beneficial effect on learning.
Tran, Ngoc Hieu; Tantidhnazet, Syrina; Raocharernporn, Somchart; Kiattavornchareon, Sirichai; Pairuchvej, Verasak; Wongsirichat, Natthamet
2018-05-01
The benefit of computer-assisted planning in orthognathic surgery (OGS) has been extensively documented over the last decade. This study aimed to evaluate the accuracy of three-dimensional (3D) virtual planning in surgery-first OGS. Fifteen patients with skeletal class III malocclusion who underwent bimaxillary OGS with surgery-first approach were included. A composite skull model was reconstructed using data from cone-beam computed tomography and stereolithography from a scanned dental cast. Surgical procedures were simulated using Simplant O&O software, and the virtual plan was transferred to the operation room using 3D-printed splints. Differences of the 3D measurements between the virtual plan and postoperative results were evaluated, and the accuracy was reported using root mean square deviation (RMSD) and the Bland-Altman method. The virtual planning was successfully transferred to surgery. The overall mean linear difference was 0.88 mm (0.79 mm for the maxilla and 1 mm for the mandible), and the overall mean angular difference was 1.16°. The RMSD ranged from 0.86 to 1.46 mm and 1.27° to 1.45°, within the acceptable clinical criteria. In this study, virtual surgical planning and 3D-printed surgical splints facilitated the diagnosis and treatment planning, and offered an accurate outcome in surgery-first OGS.
Accuracy of contacts calculated from 3D images of occlusal surfaces.
DeLong, R; Knorr, S; Anderson, G C; Hodges, J; Pintado, M R
2007-06-01
The aim was to compare occlusal contacts calculated from 3D virtual models created from clinical records to contacts identified clinically using shimstock and transillumination. Upper and lower full arch alginate impressions and vinyl polysiloxane centric interocclusal records were made of 12 subjects. Stone casts made from the alginate impressions and the interocclusal records were optically scanned. Three-dimensional virtual models of the dental arches and interocclusal records were constructed using the Virtual Dental Patient Software. Contacts calculated from the virtual interocclusal records and from the aligned upper and lower virtual arch models were compared to those identified clinically using 0.01 mm shimstock and transillumination of the interocclusal record. Virtual contacts and transillumination contacts were compared by anatomical region and by contacting tooth pairs to shimstock contacts. Because there is no accepted standard for identifying occlusal contacts, methods were compared in pairs with one labeled "standard" and the second labeled "test". Accuracy was defined as the number of contacts and non-contacts of the "test" that were in agreement with the "standard" divided by the total number of contacts and non-contacts of the "standard". Accuracy of occlusal contacts calculated from virtual interocclusal records and aligned virtual casts compared to transillumination was 0.87+/-0.05 and 0.84+/-0.06 by region, and 0.95+/-0.07 and 0.95+/-0.05 by tooth, respectively. Comparisons with shimstock were 0.85+/-0.15 (record), 0.84+/-0.14 (casts), and 0.81+/-0.17 (transillumination). The virtual record, aligned virtual arches, and transillumination methods of identifying contacts are equivalent, and show better agreement with each other than with the shimstock method.
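Written out, the accuracy definition quoted in the abstract above is simply the agreement rate against whichever method is taken as the "standard":

$$\mathrm{Accuracy}=\frac{N^{\text{agree}}_{\text{contacts}}+N^{\text{agree}}_{\text{non-contacts}}}{N^{\text{standard}}_{\text{contacts}}+N^{\text{standard}}_{\text{non-contacts}}}$$

This is a restatement of the definition given in the abstract (computed per anatomical region or per contacting tooth pair), not an additional formula from the paper.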
Use of 3D techniques for virtual production
NASA Astrophysics Data System (ADS)
Grau, Oliver; Price, Marc C.; Thomas, Graham A.
2000-12-01
Virtual production for broadcast is currently mainly used in the form of virtual studios, where the resulting media is a sequence of 2D images. With the steady increase of 3D computing power in home PCs and the technical progress in 3D display technology, the content industry is looking for new kinds of program material which make use of 3D technology. The applications range from the analysis of sport scenes and 3DTV up to the creation of fully immersive content. In a virtual studio a camera films one or more actors in a controlled environment. The pictures of the actors can be segmented very accurately in real time using chroma keying techniques. The isolated silhouette can be integrated into a new synthetic virtual environment using a studio mixer. The resulting shape description of the actors is 2D so far. For the realization of more sophisticated optical interactions of the actors with the virtual environment, such as occlusions and shadows, an object-based 3D description of scenes is needed. However, the requirements of shape accuracy, and the kind of representation, differ in accordance with the application. This contribution gives an overview of requirements and approaches for the generation of an object-based 3D description in various applications studied by the BBC R and D department. An enhanced Virtual Studio for 3D programs is proposed that covers a range of applications for virtual production.
3D Modelling and Mapping for Virtual Exploration of Underwater Archaeology Assets
NASA Astrophysics Data System (ADS)
Liarokapis, F.; Kouřil, P.; Agrafiotis, P.; Demesticha, S.; Chmelík, J.; Skarlatos, D.
2017-02-01
This paper investigates immersive technologies to increase exploration time in an underwater archaeological site, both for the public and for researchers and scholars. The focus is on the Mazotos shipwreck site in Cyprus, which is located 44 meters underwater. The aim of this work is two-fold: (a) realistic modelling and mapping of the site and (b) an immersive virtual reality visit. For 3D modelling and mapping, optical data were used. The underwater exploration is composed of a variety of sea elements, including plants, fish, stones, and artefacts, which are randomly positioned. Users can experience an immersive virtual underwater visit to the Mazotos shipwreck site and obtain information about the shipwreck and its contents, raising their archaeological knowledge and cultural awareness.
Internet-based distributed collaborative environment for engineering education and design
NASA Astrophysics Data System (ADS)
Sun, Qiuli
2001-07-01
This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented. Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.
ERIC Educational Resources Information Center
Chen, Jian; Smith, Andrew D.; Khan, Majid A.; Sinning, Allan R.; Conway, Marianne L.; Cui, Dongmei
2017-01-01
Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal…
Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy.
Pessaux, Patrick; Diana, Michele; Soler, Luc; Piardi, Tullio; Mutter, Didier; Marescaux, Jacques
2015-04-01
Augmented reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative workup and real-time patient images in order to visualize unapparent anatomical details. The 3D model could be used for a preoperative planning of the procedure. The potential of AR navigation as a tool to improve safety of the surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained using a thoracoabdominal CT scan with a customary software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed to the operative field. A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Two totally robotic AR segmentectomy V and one segmentectomy VI were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6-10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.
NASA Astrophysics Data System (ADS)
Komosinski, Maciej; Ulatowski, Szymon
Life is one of the most complex phenomena known in our world. Researchers construct various models of life that serve diverse purposes and are applied in a wide range of areas — from medicine to entertainment. A part of artificial life research focuses on designing three-dimensional (3D) models of life-forms, which are obviously appealing to observers because the world we live in is three dimensional. Thus, we can easily understand behaviors demonstrated by virtual individuals, study behavioral changes during simulated evolution, analyze dependencies between groups of creatures, and so forth. However, 3D models of life-forms are not only attractive because of their resemblance to the real-world organisms. Simulating 3D agents has practical implications: If the simulation is accurate enough, then real robots can be built based on the simulation, as in [22]. Agents can be designed, tested, and optimized in a virtual environment, and the best ones can be constructed as real robots with embedded control systems. This way artificial intelligence algorithms can be “embodied” in the 3D mechanical constructs.
Papafaklis, Michail I; Muramatsu, Takashi; Ishibashi, Yuki; Bourantas, Christos V; Fotiadis, Dimitrios I; Brilakis, Emmanouil S; Garcia-Garcia, Héctor M; Escaned, Javier; Serruys, Patrick W; Michalis, Lampros K
2018-03-01
Fractional flow reserve (FFR) has been established as a useful diagnostic tool. The distal coronary pressure to aortic pressure (Pd/Pa) ratio at rest is a simpler physiologic index but also requires the use of the pressure wire, whereas recently proposed virtual functional indices derived from coronary imaging require complex blood flow modelling and/or are time-consuming. Our aim was to test the diagnostic performance of virtual resting Pd/Pa using routine angiographic images and a simple flow model. Three-dimensional quantitative coronary angiography (3D-QCA) was performed in 139 vessels (120 patients) with intermediate lesions assessed by FFR. The resting Pd/Pa for each lesion was assessed by computational fluid dynamics. The discriminatory power of virtual resting Pd/Pa against FFR (reference: ≤0.80) was high (area under the receiver operator characteristic curve [AUC]: 90.5% [95% CI: 85.4-95.6%]). Diagnostic accuracy, sensitivity and specificity for the optimal virtual resting Pd/Pa cut-off (≤0.94) were 84.9%, 90.4% and 81.6%, respectively. Virtual resting Pd/Pa demonstrated superior performance (p<0.001) versus 3D-QCA %area stenosis (AUC: 77.5% [95% CI: 69.8-85.3%]). There was a good correlation between virtual resting Pd/Pa and FFR (r=0.69, p<0.001). Virtual resting Pd/Pa using routine angiographic data and a simple flow model provides fast functional assessment of coronary lesions without requiring the pressure-wire and hyperaemia induction. The high diagnostic performance of virtual resting Pd/Pa for predicting FFR shows promise for using this simple/fast virtual index in clinical practice. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
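A minimal sketch of how the reported diagnostic metrics relate to the FFR reference (positive when FFR ≤ 0.80) and the optimal virtual resting Pd/Pa cut-off (≤ 0.94) is given below in Python. The thresholds are taken from the abstract; the function name and array-based interface are assumptions for illustration, not the authors' analysis code.

```python
import numpy as np

def diagnostic_performance(pd_pa, ffr, pd_pa_cutoff=0.94, ffr_cutoff=0.80):
    """Classify lesions with virtual resting Pd/Pa against the FFR reference.

    A lesion is 'positive' (functionally significant) when FFR <= 0.80;
    the virtual index calls it positive when Pd/Pa <= 0.94 (the optimal
    cut-off reported in the abstract). Returns accuracy, sensitivity and
    specificity as fractions.
    """
    pd_pa, ffr = np.asarray(pd_pa), np.asarray(ffr)
    truth = ffr <= ffr_cutoff
    pred = pd_pa <= pd_pa_cutoff
    tp = np.sum(pred & truth)       # true positives
    tn = np.sum(~pred & ~truth)     # true negatives
    fp = np.sum(pred & ~truth)      # false positives
    fn = np.sum(~pred & truth)      # false negatives
    accuracy = (tp + tn) / len(truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```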
Chen, Jian; Smith, Andrew D; Khan, Majid A; Sinning, Allan R; Conway, Marianne L; Cui, Dongmei
2017-11-01
Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal cavity, septum, turbinates, paranasal sinuses, optic nerve, pituitary gland, carotid artery, cervical vertebrae, atlanto-axial joint, cervical spinal cord, cervical nerve root, and vertebral artery that can be used to teach clinical trainees (students, residents, and fellows) approaches for trans-sphenoidal pituitary surgery and cervical spine injection procedures. Volume rendering, surface rendering and a new rendering technique, semi-auto-combined, were applied in the study. These models enable visualization, manipulation, and interaction on a computer and can be presented in a stereoscopic 3D virtual environment, which makes users feel as if they are inside the model. Anat Sci Educ 10: 598-606. © 2017 American Association of Anatomists.
Software for Building Models of 3D Objects via the Internet
NASA Technical Reports Server (NTRS)
Schramer, Tim; Jensen, Jeff
2003-01-01
The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.
A Proposal for Generalization of 3D Models
NASA Astrophysics Data System (ADS)
Uyar, A.; Ulugtekin, N. N.
2017-11-01
In recent years, 3D models have been created for many cities around the world. Most 3D city models have been introduced as purely graphic or geometric models, while their semantic and topographic aspects have been neglected. In order to use 3D city models for tasks beyond visualization, generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Detail (LoD), an important concept in 3D modelling, can be defined as the degree of abstraction with which real-world objects are represented. The paper first describes some requirements of 3D model generalization, then presents problems and approaches that have been developed in recent years. It concludes with a summary and an outlook on open problems and future work.
Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-01-01
In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning. PMID:23385416
ERIC Educational Resources Information Center
Keating, Thomas; Barnett, Michael; Barab, Sasha A.; Hay, Kenneth E.
2002-01-01
Describes the Virtual Solar System (VSS) course which is one of the first attempts to integrate three-dimensional (3-D) computer modeling as a central component of introductory undergraduate education. Assesses changes in student understanding of astronomy concepts as a result of participating in an experimental introductory astronomy course in…
Xie, Huiding; Chen, Lijun; Zhang, Jianqiang; Xie, Xiaoguang; Qiu, Kaixiong; Fu, Jijun
2015-01-01
B-Raf kinase is an important target in the treatment of cancers. In order to design and find potent B-Raf inhibitors (BRIs), 3D pharmacophore models were created using the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD). The best pharmacophore model obtained, which was used for effective alignment of the data set, contains two acceptor atoms, three donor atoms and three hydrophobes. Subsequently, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on 39 imidazopyridine BRIs to build three-dimensional quantitative structure-activity relationship (3D QSAR) models based on both pharmacophore and docking alignments. The CoMSIA model based on the pharmacophore alignment shows the best result (q2 = 0.621, r2pred = 0.885). This 3D QSAR approach provides significant insights that are useful for designing potent BRIs. In addition, the best pharmacophore model obtained was used for virtual screening against the NCI2000 database. The hit compounds were further filtered with molecular docking, their biological activities were predicted using the CoMSIA model, and three potential BRIs with new skeletons were obtained. PMID:26035757
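For reference, the cross-validated q² and predictive r² reported in 3D QSAR studies of this kind are commonly defined as follows; these are the usual textbook forms, since the abstract does not state the exact conventions used:

$$q^{2}=1-\frac{\sum_{i}\left(y_{i}-\hat{y}_{i}^{\,\mathrm{cv}}\right)^{2}}{\sum_{i}\left(y_{i}-\bar{y}\right)^{2}},\qquad r^{2}_{\mathrm{pred}}=1-\frac{\sum_{j\in\mathrm{test}}\left(y_{j}-\hat{y}_{j}\right)^{2}}{\sum_{j\in\mathrm{test}}\left(y_{j}-\bar{y}_{\mathrm{train}}\right)^{2}}$$

where $\hat{y}_{i}^{\,\mathrm{cv}}$ is the cross-validated (e.g., leave-one-out) prediction for training compound $i$ and $\bar{y}_{\mathrm{train}}$ is the mean activity of the training set.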
Virtual VMASC: A 3D Game Environment
NASA Technical Reports Server (NTRS)
Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen
2010-01-01
The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like those to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building to not only meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained in future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, and the technologies used, such as XNA Game Studio, the .NET framework and Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the results of our evaluation and the lessons learned from our effort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez Anez, Francisco
This paper presents two development projects (STARMATE and VIRMAN) focused on supporting training on maintenance. Both projects aim at specifying, designing, developing, and demonstrating prototypes allowing computer guided maintenance of complex mechanical elements using Augmented and Virtual Reality techniques. VIRMAN is a Spanish development project. The objective is to create a computer tool for maintenance training course elaborations and training delivery based on 3D virtual reality models of complex components. The training delivery includes 3D record displays on maintenance procedures with all complementary information for intervention understanding. Users are requested to perform the maintenance intervention trying to follow up the procedure. Users can be evaluated about the level of knowledge achieved. Instructors can check the evaluation records left during the training sessions. VIRMAN is simple software supported by a regular computer and can be used in an Internet framework. STARMATE is a forward step in the area of virtual reality. STARMATE is a European Commission project in the frame of 'Information Societies Technologies'. A consortium of five companies and one research institute shares their expertise in this new technology. STARMATE provides two main functionalities (1) user assistance for achieving assembly/de-assembly and following maintenance procedures, and (2) workforce training. The project relies on Augmented Reality techniques, which is a growing area in Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene, generated by a computer, augmenting the reality with additional information. The user interface is see-through goggles, headphones, microphone and an optical tracking system. All these devices are integrated in a helmet connected with two regular computers. The user has his hands free for performing the maintenance intervention and he can navigate in the virtual world thanks to a voice recognition system and a virtual pointing device. The maintenance work is guided with audio instructions, 2D and 3D information are directly displayed into the user's goggles: There is a position-tracking system that allows 3D virtual models to be displayed in the real counterpart positions independently of the user allocation. The user can create his own virtual environment, placing the information required wherever he wants. The STARMATE system is applicable to a large variety of real work situations. (author)
Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
NASA Astrophysics Data System (ADS)
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
2010-01-01
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigations. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated in real (physical) urban worlds. The augmented environment that is presented herein requires for visualization a see-through video head mounted display (HMD), whereas user's movement navigation is achieved in the real world with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper will deal with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some software and complex issues, which are discussed in the paper.
Course Design and Student Responses to an Online PBL Course in 3D Modelling for Mining Engineers
ERIC Educational Resources Information Center
McAlpine, Iain; Stothard, Phillip
2005-01-01
To enhance a course in 3D Virtual Reality (3D VR) modelling for mining engineers, and to create the potential for off campus students to fully engage with the course, a problem based learning (PBL) approach was applied to the course design and all materials and learning activities were provided online. This paper outlines some of the theoretical…
Handling Massive Models: Representation, Real-Time Display and Interaction
2008-09-16
[Record fragment: abstract unavailable; only a truncated list of associated publications survives, including K. Ward, N. Galoppo and M. Lin, "Interactive Virtual Hair Salon", Presence (2007); a paper on "…Detection for Deformable Models using Representative-Triangles", Symposium on Interactive 3D Graphics and Games (2008); and an entry by Brandon Lloyd, Naga K. Govindaraju, Cory Quammen, Steven E. Molnar et al., Symposium on Interactive 3D Graphics and Games (I3D) (2008).]
VRLane: a desktop virtual safety management program for underground coal mine
NASA Astrophysics Data System (ADS)
Li, Mei; Chen, Jingzhu; Xiong, Wei; Zhang, Pengpeng; Wu, Daozheng
2008-10-01
VR technologies, which generate immersive, interactive, and three-dimensional (3D) environments, are seldom applied to coal mine safety management. In this paper, a new method that combines VR technologies with an underground mine safety management system was explored. A desktop virtual safety management program for underground coal mines, called VRLane, was developed. The paper mainly concerns the current research advances in VR, the system design, key techniques, and system application. Two important techniques were introduced in the paper. Firstly, an algorithm was designed and implemented with which the 3D laneway models and equipment models can be built automatically on the basis of the latest 2D mine drawings, whereas common VR programs establish the 3D environment using 3DS Max or other 3D modeling software packages, with which laneway models are built manually and laboriously. Secondly, VRLane realized system integration with underground industrial automation. VRLane not only describes a realistic 3D laneway environment, but also describes the status of coal mining, with functions for displaying the running states and related parameters of equipment, pre-alarming abnormal mining events, and animating mine cars, mine workers, or long-wall shearers. The system, which is cheap, dynamic, and easy to maintain, provides a useful tool for safe production management in coal mines.
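The abstract does not publish the algorithm itself; as a hedged illustration of the kind of automatic 2D-drawing-to-3D conversion it describes, the Python sketch below extrudes a laneway centreline polyline into a simple box-section mesh. The function, its parameters, and the rectangular cross-section are assumptions for illustration, not VRLane's actual method.

```python
import numpy as np

def laneway_mesh_from_centreline(centreline_xy, floor_z, width, height):
    """Build a simple box-section 3D laneway mesh from a 2D centreline.

    Each centreline vertex is offset left/right by half the laneway width
    and extruded upward by the laneway height. Returns (vertices, quads),
    where quads index into the vertex list.
    """
    pts = np.asarray(centreline_xy, dtype=float)
    verts, quads = [], []
    for i, p in enumerate(pts):
        # local direction of the centreline (central difference at interior points)
        d = pts[min(i + 1, len(pts) - 1)] - pts[max(i - 1, 0)]
        d /= np.linalg.norm(d) + 1e-12
        n = np.array([-d[1], d[0]])                 # horizontal normal
        for x, y in (p + n * width / 2.0, p - n * width / 2.0):
            verts.append((x, y, floor_z))           # floor vertex
            verts.append((x, y, floor_z + height))  # roof vertex
    for i in range(len(pts) - 1):
        a, b = 4 * i, 4 * (i + 1)
        quads += [
            (a,     a + 1, b + 1, b),      # left wall
            (a + 2, a + 3, b + 3, b + 2),  # right wall
            (a,     a + 2, b + 2, b),      # floor
            (a + 1, a + 3, b + 3, b + 1),  # roof
        ]
    return np.array(verts), quads

vertices, faces = laneway_mesh_from_centreline([(0, 0), (10, 0), (20, 5)],
                                               floor_z=-120.0, width=4.0, height=3.0)
```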
Extensible 3D (X3D) Earth Technical Requirements Workshop Summary Report
2007-08-01
[Record fragment: only bullet-point excerpts from the workshop report survive, noting that existing virtual-world models rarely interconnect with one another, that the most interesting part of "virtual reality" (VR) is reality itself (i.e., physics), that two Web-Enabled Modeling and Simulation (WebSim) symposia have demonstrated that large partnerships can work, and a heading on server-side 3D graphics.]
NASA Astrophysics Data System (ADS)
Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos
2016-04-01
This paper proposes a new approach to improving 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture first establishes a software virtualization layer based on QEMU (Quick Emulator), open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, one of the most advanced virtual 3D Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.
3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices
NASA Astrophysics Data System (ADS)
Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.
2015-08-01
Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing dedicated software in Unity. The 3D models were enhanced by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that were carried out in the lab.
The cranial nerve skywalk: A 3D tutorial of cranial nerves in a virtual platform.
Richardson-Hatcher, April; Hazzard, Matthew; Ramirez-Yanez, German
2014-01-01
Visualization of the complex courses of the cranial nerves by students in the health-related professions is challenging through either diagrams in books or plastic models in the gross laboratory. Furthermore, dissection of the cranial nerves in the gross laboratory is an extremely meticulous task. Teaching and learning the cranial nerve pathways is difficult using two-dimensional (2D) illustrations alone. Three-dimensional (3D) models aid the teacher in describing intricate and complex anatomical structures and help students visualize them. The study of the cranial nerves can be supplemented with 3D, which permits the students to fully visualize their distribution within the craniofacial complex. This article describes the construction and usage of a virtual anatomy platform in Second Life™, which contains 3D models of the cranial nerves III, V, VII, and IX. The Cranial Nerve Skywalk features select cranial nerves and the associated autonomic pathways in an immersive online environment. This teaching supplement was introduced to groups of pre-healthcare professional students in gross anatomy courses at both institutions and student feedback is included. © 2014 American Association of Anatomists.
3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP
ERIC Educational Resources Information Center
Cetin, Aydin; Guler, Inan
2011-01-01
Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from web3d technologies to create courses with interactive 3d materials. There are many open source and commercial products offering 3d technologies over the web…
A Novel and Freely Available Interactive 3D Model of the Internal Carotid Artery.
Valera-Melé, Marc; Puigdellívol-Sánchez, Anna; Mavar-Haramija, Marija; Juanes-Méndez, Juan A; San-Román, Luis; de Notaris, Matteo; Prats-Galino, Alberto
2018-03-05
We describe a new and freely available 3D interactive model of the intracranial internal carotid artery (ICA) and the skull base that also allows users to display and compare its main segment classifications. High-resolution 3D human angiography (isometric voxel size 0.36 mm) and Computed Tomography angiography images were exported to Virtual Reality Modeling Language (VRML) format for processing in a 3D software platform and embedding in a 3D Portable Document Format (PDF) document that can be freely downloaded at http://diposit.ub.edu/dspace/handle/2445/112442 and runs under Acrobat Reader on Mac and Windows computers and Windows 10 tablets. The 3D-PDF allows for visualisation and interaction through JavaScript-based functions (including zoom, rotation, selective visualization and transparentation of structures, or a predefined sequence view of the main segment classifications if desired). The ICA and its main branches and loops, the Gasserian ganglion, the petrolingual ligament and the proximal and distal dural rings within the skull base environment (anterior and posterior clinoid processes, sella turcica, ethmoid and sphenoid bones, orbital fossae) may be visualized from different perspectives. This interactive 3D-PDF provides virtual views of the ICA and becomes an innovative tool to improve the understanding of the neuroanatomy of the ICA and surrounding structures.
Augmented virtuality for arthroscopic knee surgery.
Li, John M; Bardana, Davide D; Stewart, A James
2011-01-01
This paper describes a computer system to visualize the location and alignment of an arthroscope using augmented virtuality. A 3D computer model of the patient's joint (from CT) is shown, along with a model of the tracked arthroscopic probe and the projection of the camera image onto the virtual joint. A user study, using plastic bones instead of live patients, was made to determine the effectiveness of this navigated display; the study showed that the navigated display improves target localization in novice residents.
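A minimal sketch of the projection step such a navigated display relies on, mapping the tracked arthroscope camera onto the CT-derived joint model, is shown below in Python. The intrinsics/pose inputs and the per-vertex texture-coordinate output are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def project_camera_image_onto_mesh(vertices, K, R, t, image_size):
    """Assign camera-image texture coordinates to mesh vertices (hypothetical sketch).

    vertices: (N, 3) mesh vertex positions in CT/model coordinates.
    K: (3, 3) camera intrinsic matrix; R, t: camera pose from the tracker
    (model-to-camera rotation and translation). image_size: (width, height).
    Returns per-vertex (u, v) texture coordinates normalised to [0, 1],
    or NaN for vertices behind the camera or outside the image.
    """
    cam = (R @ vertices.T + t.reshape(3, 1)).T    # model frame -> camera frame
    uv = (K @ cam.T).T                            # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    w, h = image_size
    valid = (cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    tex = np.full((len(vertices), 2), np.nan)
    tex[valid] = uv[valid] / np.array([w, h])     # normalise to texture coordinates
    return tex
```

A full system would additionally handle occlusion (only front-facing, unoccluded vertices should receive the camera image), which is omitted here for brevity.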
Venkatesh, S K; Wang, G; Seet, J E; Teo, L L S; Chong, V F H
2013-03-01
To evaluate the feasibility of magnetic resonance imaging (MRI) for the transformation of preserved organs and their disease entities into digital formats for medical education and creation of a virtual museum. MRI of selected 114 pathology specimen jars representing different organs and their diseases was performed using a 3 T MRI machine with two or more MRI sequences including three-dimensional (3D) T1-weighted (T1W), 3D-T2W, 3D-FLAIR (fluid attenuated inversion recovery), fat-water separation (DIXON), and gradient-recalled echo (GRE) sequences. Qualitative assessment of MRI for depiction of disease and internal anatomy was performed. Volume rendering was performed on commercially available workstations. The digital images, 3D models, and photographs of specimens were archived into a workstation serving as a virtual pathology museum. MRI was successfully performed on all specimens. The 3D-T1W and 3D-T2W sequences demonstrated the best contrast between normal and pathological tissues. The digital material is a useful aid for understanding disease by giving insights into internal structural changes not apparent on visual inspection alone. Volume rendering produced vivid 3D models with better contrast between normal tissue and diseased tissue compared to real specimens or their photographs in some cases. The digital library provides good illustration material for radiological-pathological correlation by enhancing pathological anatomy and information on nature and signal characteristics of tissues. In some specimens, the MRI appearance may be different from corresponding organ and disease in vivo due to dead tissue and changes induced by prolonged contact with preservative fluid. MRI of pathology specimens is feasible and provides excellent images for education and creating a virtual pathology museum that can serve as permanent record of digital material for self-directed learning, improving teaching aids, and radiological-pathological correlation. Copyright © 2012 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Three-dimensional simulation, surgical navigation and thoracoscopic lung resection
Kanzaki, Masato; Kikkawa, Takuma; Sakamoto, Kei; Maeda, Hideyuki; Wachi, Naoko; Komine, Hiroshi; Oyama, Kunihiro; Murasugi, Masahide; Onuki, Takamasa
2013-01-01
This report describes a 3-dimensional (3D) video-assisted thoracoscopic lung resection guided by a 3D video navigation system with a patient-specific 3D reconstructed pulmonary model obtained by preoperative simulation. A 78-year-old man was found on chest computed tomography to have a small solitary pulmonary nodule in the left upper lobe. Using a virtual 3D pulmonary model, the tumor was found to involve two subsegments (S1 + 2c and S3a). Complete video-assisted thoracoscopic bi-subsegmentectomy was selected in the simulation and was performed with lymph node dissection. A 3D digital vision system was used for the 3D thoracoscopic procedure. Wearing 3D glasses, the surgeons observed the patient's reconstructed 3D model on 3D liquid-crystal displays and compared the 3D intraoperative field with the reconstructed pulmonary model. PMID:24964426
Kim, J; Lee, C; Chong, Y
2009-01-01
Influenza endonuclease has emerged as an attractive target for antiviral therapy of influenza infection. With the purpose of designing a novel antiviral agent with enhanced biological activity against the influenza endonuclease, a three-dimensional quantitative structure-activity relationship (3D-QSAR) model was generated based on 34 influenza endonuclease inhibitors. The comparative molecular similarity index analysis (CoMSIA) with a steric, electrostatic and hydrophobic (SEH) model showed the best correlative and predictive capability (q(2) = 0.763, r(2) = 0.969 and F = 174.785), which provided a pharmacophore composed of an electronegative moiety as well as a bulky hydrophobic group. The CoMSIA model was used as a pharmacophore query in a UNITY search of the ChemDiv compound library to give virtual active compounds. The 3D-QSAR model was then used to predict the activity of the selected compounds, which identified three compounds as the most likely inhibitor candidates.
Telearch - Integrated visual simulation environment for collaborative virtual archaeology.
NASA Astrophysics Data System (ADS)
Kurillo, Gregorij; Forte, Maurizio
Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for remote collaboration of geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D and 3D video streaming technology to facilitate the remote presence of users. In this paper, we present several experimental case studies to demonstrate the integration of, and interaction with, 3D models and geographical information system (GIS) data in this collaborative environment.
NASA Astrophysics Data System (ADS)
Rautenbach, V.; Çöltekin, A.; Coetzee, S.
2015-08-01
In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they `travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.
3D Seismic Imaging using Marchenko Methods
NASA Astrophysics Data System (ADS)
Lomas, A.; Curtis, A.
2017-12-01
Marchenko methods are novel, data driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000m/s). Along the surface of this model (z=0) in both the x and y directions are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200m, 500m and 400m) to the surface. For comparison the true solution is given in figure (c), and shows a good match when compared to figure (b). While these 2D redatuming and imaging methods are still in their infancy having first been developed in 2012, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods are still valid, and 3D images that are free of multiple-related artefacts, are a realistic possibility.
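For context, one common form of the coupled Marchenko representations that such iterative schemes solve (written schematically, up to sign and normalisation conventions; cf. Wapenaar et al., 2014) is:

$$G^{-}(\mathbf{x}_{i},\mathbf{x}_{0},t)+f_{1}^{-}(\mathbf{x}_{0},\mathbf{x}_{i},t)=\int_{\partial\mathbb{D}_{0}}\int_{-\infty}^{t}R(\mathbf{x}_{0},\mathbf{x}',t-t')\,f_{1}^{+}(\mathbf{x}',\mathbf{x}_{i},t')\,\mathrm{d}t'\,\mathrm{d}\mathbf{x}'$$

$$G^{+}(\mathbf{x}_{i},\mathbf{x}_{0},t)-f_{1}^{+}(\mathbf{x}_{0},\mathbf{x}_{i},-t)=-\int_{\partial\mathbb{D}_{0}}\int_{-\infty}^{t}R(\mathbf{x}_{0},\mathbf{x}',t-t')\,f_{1}^{-}(\mathbf{x}',\mathbf{x}_{i},t')\,\mathrm{d}t'\,\mathrm{d}\mathbf{x}'$$

Here R is the surface reflection response, f₁⁺ and f₁⁻ are the down- and upgoing focusing functions (initialised from the estimated direct arrival and updated iteratively), and G⁺ and G⁻ are the retrieved down- and upgoing Green's functions between the virtual source x_i and the surface point x_0, whose separation gives the up/down decomposition mentioned in the abstract.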
Design and application of BIM based digital sand table for construction management
NASA Astrophysics Data System (ADS)
Fuquan, JI; Jianqiang, LI; Weijia, LIU
2018-05-01
This paper explores the design and application of a BIM-based digital sand table for construction management. Considering the demands and features of construction management planning for bridge and tunnel engineering, the key functional features of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These involve BIM technologies for 3D visualization and 4D virtual simulation, breakdown structures for the BIM model and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual, virtual, integrated engineering information terminal operating under a unified data standard system. Applications include visual construction schemes, virtual construction scheduling, and construction monitoring. Finally, the applicability of several basic software packages to the digital sand table is analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, Birchard P; Michel, Kelly D; Few, Douglas A
From stereophonic, positional sound to high-definition imagery that is crisp and clean, high fidelity computer graphics enhance our view, insight, and intuition regarding our environments and conditions. Contemporary 3-D modeling tools offer an open architecture framework that enables integration with other technologically innovative arenas. One innovation of great interest is Augmented Reality, the merging of virtual, digital environments with physical, real-world environments creating a mixed reality where relevant data and information augments the real or actual experience in real-time by spatial or semantic context. Pairing 3-D virtual immersive models with a dynamic platform such as semi-autonomous robotics or personnel odometry systems to create a mixed reality offers a new and innovative design information verification inspection capability, evaluation accuracy, and information gathering capability for nuclear facilities. Our paper discusses the integration of two innovative technologies, 3-D visualizations with inertial positioning systems, and the resulting augmented reality offered to the human inspector. The discussion in the paper includes an exploration of human and non-human (surrogate) inspections of a nuclear facility, integrated safeguards knowledge within a synchronized virtual model operated, or worn, by a human inspector, and the anticipated benefits to safeguards evaluations of facility operations.
Kiraly, Laszlo
2018-04-01
Three-dimensional (3D) modelling and printing methods greatly support advances in individualized medicine and surgery. In pediatric and congenital cardiac surgery, personalized imaging and 3D modelling presents with a range of advantages, e.g., better understanding of complex anatomy, interactivity and hands-on approach, possibility for preoperative surgical planning and virtual surgery, ability to assess expected results, and improved communication within the multidisciplinary team and with patients. 3D virtual and printed models often add important new anatomical findings and prompt alternative operative scenarios. For the lack of critical mass of evidence, controlled randomized trials, however, most of these general benefits remain anecdotal. For an individual surgical case-scenario, prior knowledge, preparedness and possibility of emulation are indispensable in raising patient-safety. It is advocated that added value of 3D printing in healthcare could be raised by establishment of a multidisciplinary centre of excellence (COE). Policymakers, research scientists, clinicians, as well as health care financers and local entrepreneurs should cooperate and communicate along a legal framework and established scientific guidelines for the clinical benefit of patients, and towards financial sustainability. It is expected that besides the proven utility of 3D printed patient-specific anatomical models, 3D printing will have a major role in pediatric and congenital cardiac surgery by providing individually customized implants and prostheses, especially in combination with evolving techniques of bioprinting.
Determining of a robot workspace using the integration of a CAD system with a virtual control system
NASA Astrophysics Data System (ADS)
Herbuś, K.; Ociepka, P.
2016-08-01
The paper presents a method for determining the workspace of an industrial robot by integrating a 3D model of the robot with a virtual control system. The robot model, together with its work environment, was prepared for motion simulation in the "Motion Simulation" module of the Siemens PLM NX software. In this model, components of the "link" type were created to map the geometrical form of particular elements of the robot, and components of the "joint" type to map the way the "link" components cooperate. The paper proposes a solution in which the control process of the virtual robot resembles the control of a real robot via the manual control panel (teach pendant). For this purpose, the control application "JOINT" was created, which manipulates the virtual robot in accordance with its internal control system. A set of procedures stored in an .xlsx file is the element integrating the 3D robot model working in the CAD/CAE class system with the elaborated control application.
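The joint-sweep idea behind such a virtual workspace determination can be illustrated outside NX. The following minimal Python sketch assumes a planar two-link arm with invented link lengths and joint limits in place of the actual "link"/"joint" model; it sweeps the joint ranges through forward kinematics and collects the reachable end-effector positions:

```python
# Minimal sketch: approximate a robot workspace by sweeping joint ranges of a
# forward-kinematic model and collecting reachable end-effector positions.
# A planar 2-link arm stands in for the NX "link"/"joint" model described above;
# the link lengths and joint limits are illustrative assumptions.
import numpy as np

L1, L2 = 0.5, 0.35                      # link lengths [m] (assumed)
q1 = np.linspace(-np.pi, np.pi, 181)    # joint 1 range (assumed limits)
q2 = np.linspace(-2.5, 2.5, 181)        # joint 2 range (assumed limits)

Q1, Q2 = np.meshgrid(q1, q2)
# Forward kinematics of the two-link arm
x = L1 * np.cos(Q1) + L2 * np.cos(Q1 + Q2)
y = L1 * np.sin(Q1) + L2 * np.sin(Q1 + Q2)

points = np.column_stack([x.ravel(), y.ravel()])   # sampled workspace point cloud
reach = np.hypot(points[:, 0], points[:, 1])
print(f"sampled {len(points)} poses, radial reach {reach.min():.3f}-{reach.max():.3f} m")
```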
A Low-Cost and Lightweight 3D Interactive Real Estate-Purposed Indoor Virtual Reality Application
NASA Astrophysics Data System (ADS)
Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.
2017-11-01
Interactive 3D architectural indoor design has become more popular since it began to benefit from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments and modify them directly. This makes it possible for buyers to purchase a property off-the-plan more cheaply through virtual models. Instead of showing the property through 2D plans or renders, the visualized interior architecture of an unbuilt property on sale is demonstrated beforehand, so that investors have an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require specialists to create such environments. In this study, we created a real estate-purposed, low-cost, high-quality, fully interactive VR application that provides a realistic interior architecture of the property using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real estate-purposed VR application, and that it satisfied the expectations of property buyers.
The use of 3D-printed titanium mesh tray in treating complex comminuted mandibular fractures
Ma, Junli; Ma, Limin; Wang, Zhifa; Zhu, Xiongjie; Wang, Weijian
2017-01-01
Abstract Rationale: Precise bony reduction and reconstruction of an optimal contour in treating comminuted mandibular fractures is very difficult using traditional techniques and devices. The aim of this report is to introduce our experience in using virtual surgery and three-dimensional (3D) printing techniques in treating this clinical challenge. Patient concerns: A 26-year-old man presented with severe trauma in the maxillofacial area due to a fall from height. Diagnosis: Computed tomography images revealed midface fractures and a comminuted mandibular fracture including bilateral condyles. Interventions and outcomes: The computed tomography data were used to construct the 3D cranio-maxillofacial models; the displaced bone fragments were then virtually reduced. On the basis of the finalized model, a customized titanium mesh tray was designed and fabricated using selective laser melting technology. During the surgery, a submandibular approach was adopted to repair the mandibular fracture. The reduction and fixation were performed according to the preoperative plan, and the bone defects in the mental area were reconstructed with an iliac bone graft. The 3D-printed mesh tray served as an intraoperative template and carrier of the bone graft. The healing process was uneventful, and the patient was satisfied with the mandible contour. Lessons: Virtual surgical planning combined with 3D printing technology enables the surgeon to visualize the reduction process preoperatively and to guide intraoperative reduction, making the reduction less time-consuming and more precise. A 3D-printed titanium mesh tray can provide more satisfactory esthetic outcomes in treating complex comminuted mandibular fractures. PMID:28682875
3D printing from cardiovascular CT: a practical guide and review
Birbara, Nicolette S.; Hussain, Tarique; Greil, Gerald; Foley, Thomas A.; Pather, Nalini
2017-01-01
Current cardiovascular imaging techniques allow anatomical relationships and pathological conditions to be captured in three dimensions. Three-dimensional (3D) printing, or rapid prototyping, has also become readily available and made it possible to transform virtual reconstructions into physical 3D models. This technology has been utilised to demonstrate cardiovascular anatomy and disease in clinical, research and educational settings. In particular, 3D models have been generated from cardiovascular computed tomography (CT) imaging data for purposes such as surgical planning and teaching. This review summarises applications, limitations and practical steps required to create a 3D printed model from cardiovascular CT. PMID:29255693
Transforming Clinical Imaging Data for Virtual Reality Learning Objects
ERIC Educational Resources Information Center
Trelease, Robert B.; Rosset, Antoine
2008-01-01
Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…
PC-Based Virtual Reality for CAD Model Viewing
ERIC Educational Resources Information Center
Seth, Abhishek; Smith, Shana S.-F.
2004-01-01
Virtual reality (VR), as an emerging visualization technology, has introduced an unprecedented communication method for collaborative design. VR refers to an immersive, interactive, multisensory, viewer-centered, 3D computer-generated environment and the combination of technologies required to build such an environment. This article introduces the…
Tran, Ngoc Hieu; Tantidhnazet, Syrina; Raocharernporn, Somchart; Kiattavornchareon, Sirichai; Pairuchvej, Verasak; Wongsirichat, Natthamet
2018-01-01
Background The benefit of computer-assisted planning in orthognathic surgery (OGS) has been extensively documented over the last decade. This study aimed to evaluate the accuracy of three-dimensional (3D) virtual planning in surgery-first OGS. Methods Fifteen patients with skeletal class III malocclusion who underwent bimaxillary OGS with surgery-first approach were included. A composite skull model was reconstructed using data from cone-beam computed tomography and stereolithography from a scanned dental cast. Surgical procedures were simulated using Simplant O&O software, and the virtual plan was transferred to the operation room using 3D-printed splints. Differences of the 3D measurements between the virtual plan and postoperative results were evaluated, and the accuracy was reported using root mean square deviation (RMSD) and the Bland-Altman method. Results The virtual planning was successfully transferred to surgery. The overall mean linear difference was 0.88 mm (0.79 mm for the maxilla and 1 mm for the mandible), and the overall mean angular difference was 1.16°. The RMSD ranged from 0.86 to 1.46 mm and 1.27° to 1.45°, within the acceptable clinical criteria. Conclusion In this study, virtual surgical planning and 3D-printed surgical splints facilitated the diagnosis and treatment planning, and offered an accurate outcome in surgery-first OGS. PMID:29581806
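The two accuracy summaries named in this abstract, RMSD and the Bland-Altman method, are standard statistics that can be reproduced in a few lines. The sketch below uses made-up paired planned/postoperative values, not the study data:

```python
# Sketch: RMSD and Bland-Altman statistics for paired planned vs. postoperative
# measurements (the values below are hypothetical placeholders, not study data).
import numpy as np

planned  = np.array([2.1, 4.8, 3.2, 5.5, 1.9, 4.1])   # mm, hypothetical
achieved = np.array([2.4, 4.2, 3.9, 5.1, 2.5, 4.6])   # mm, hypothetical

diff = achieved - planned
rmsd = np.sqrt(np.mean(diff ** 2))          # root mean square deviation
bias = diff.mean()                          # Bland-Altman mean difference
loa = 1.96 * diff.std(ddof=1)               # half-width of the 95% limits of agreement
print(f"RMSD = {rmsd:.2f} mm, bias = {bias:.2f} mm, "
      f"LoA = {bias - loa:.2f} to {bias + loa:.2f} mm")
```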
Tondare, Vipin N; Villarrubia, John S; Vladár, András E
2017-10-01
Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.
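The way a virtual sample supports such a test can be sketched simply: because the true geometry is known exactly, a reconstructed height map can be scored directly against it. The arrays below are synthetic placeholders chosen so the relative error lands near the ≈46% figure reported for the low-contrast case:

```python
# Sketch: score a stereophotogrammetric reconstruction against the known geometry
# of a virtual sample. Both height maps are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
true_height = np.full((64, 64), 100.0)       # nm, known virtual-sample step height
reconstructed = 0.54 * true_height + rng.normal(0.0, 2.0, true_height.shape)

rel_error = abs(reconstructed.mean() - true_height.mean()) / true_height.mean() * 100
print(f"height reconstruction error ≈ {rel_error:.0f}%")
```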
Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues
2014-10-28
Keywords: Stereopsis, Binocular Vision, Optometry, Depth Perception, 3D vision, 3D human factors, Stereoscopic displays, S3D, Virtual environment.
NASA Astrophysics Data System (ADS)
Castagnetti, C.; Giannini, M.; Rivola, R.
2017-05-01
The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and disseminate their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project consist briefly of developing: 1. a virtual tour of the site in its current configuration, based on spherical images enhanced by texts, graphics and audio guides, in order to enable both an immersive and a remote tourist experience; 2. 3D reconstruction of the evidence and buildings in their current condition for documentation and conservation purposes, based on a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods, based on historical investigation and the analysis of the acquired data.
Papafaklis, Michail I; Muramatsu, Takashi; Ishibashi, Yuki; Lakkas, Lampros S; Nakatani, Shimpei; Bourantas, Christos V; Ligthart, Jurgen; Onuma, Yoshinobu; Echavarria-Pinto, Mauro; Tsirka, Georgia; Kotsia, Anna; Nikas, Dimitrios N; Mogabgab, Owen; van Geuns, Robert-Jan; Naka, Katerina K; Fotiadis, Dimitrios I; Brilakis, Emmanouil S; Garcia-Garcia, Héctor M; Escaned, Javier; Zijlstra, Felix; Michalis, Lampros K; Serruys, Patrick W
2014-09-01
To develop a simplified approach to virtual functional assessment of coronary stenosis from routine angiographic data and test it against fractional flow reserve measured with a pressure wire (wire-FFR). Three-dimensional quantitative coronary angiography (3D-QCA) was performed in 139 vessels (120 patients) with intermediate lesions assessed by wire-FFR (reference standard: ≤0.80). The 3D-QCA models were processed with computational fluid dynamics (CFD) to calculate the lesion-specific pressure gradient (ΔP) and construct the ΔP-flow curve, from which the virtual functional assessment index (vFAI) was derived. The discriminatory power of vFAI for ischaemia-producing lesions was high (area under the receiver operating characteristic curve [AUC]: 92% [95% CI: 86-96%]). Diagnostic accuracy, sensitivity and specificity for the optimal vFAI cut-point (≤0.82) were 88%, 90% and 86%, respectively. vFAI demonstrated superior discrimination against 3D-QCA-derived % area stenosis (AUC: 78% [95% CI: 70-84%]; p<0.0001 compared to vFAI). There was a close correlation (r=0.78, p<0.0001) and agreement of vFAI compared to wire-FFR (mean difference: -0.0039±0.085, p=0.59). We developed a fast and simple CFD-powered virtual haemodynamic assessment model using only routine angiography, without requiring any invasive physiology measurements or hyperaemia induction. vFAI showed a high diagnostic performance and incremental value to QCA for predicting wire-FFR; this "less invasive" approach could have important implications for patient management and cost.
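How a vFAI-like number can fall out of a ΔP-flow curve is sketched below. The quadratic ΔP = fQ + sQ² form, the aortic pressure, the flow range, and the reading of vFAI as the normalized area under the Pd/Pa-versus-flow curve are all assumptions for illustration, not the authors' exact formulation:

```python
# Hedged sketch: a virtual functional assessment index from a lesion's
# pressure-gradient/flow curve. All numbers and the exact vFAI definition used
# here are illustrative assumptions, not the published formulation.
import numpy as np

f, s = 4.0, 1.2                        # viscous and separation coefficients (hypothetical CFD fit)
Pa = 90.0                              # mean aortic pressure [mmHg] (assumed)
Q = np.linspace(0.0, 4.0, 401)         # flow range [mL/s] (assumed)

dP = f * Q + s * Q ** 2                # lesion-specific pressure gradient
pd_pa = (Pa - dP) / Pa                 # distal-to-proximal pressure ratio along the curve

# Normalized area under the Pd/Pa-flow curve; on a uniform grid a simple mean suffices.
vFAI = pd_pa.mean()
print(f"vFAI ≈ {vFAI:.2f} (the study's optimal ischaemia cut-point was <=0.82)")
```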
DigBody®: A new 3D modeling tool for nasal virtual surgery.
Burgos, M A; Sanmiguel-Rojas, E; Singh, Narinder; Esteban-Ortega, F
2018-07-01
Recent studies have demonstrated that a significant number of surgical procedures for nasal airway obstruction (NAO) have a high rate of surgical failure. In part, this problem is due to the lack of reliable objective clinical parameters to aid surgeons during preoperative planning. Modeling tools that allow virtual surgery to be performed do exist, but all require direct manipulation of computed tomography (CT) or magnetic resonance imaging (MRI) data. Specialists in Rhinology have criticized these tools for their complex user interface, and have requested more intuitive, user-friendly and powerful software to make virtual surgery more accessible and realistic. In this paper we present a new virtual surgery software tool, DigBody®. This new surgery module is integrated into the computational fluid dynamics (CFD) program MeComLand®, which was developed exclusively to analyze nasal airflow. DigBody® works directly with a 3D nasal model that mimics real surgery. Furthermore, this surgery module permits direct assessment of the operated cavity following virtual surgery by CFD simulation. The effectiveness of DigBody® has been demonstrated by real surgery on two patients based on prior virtual operation results. Both subjects experienced excellent surgical outcomes with no residual nasal obstruction. This tool has great potential to aid surgeons in modeling potential surgical maneuvers, minimizing complications, and being confident that patients will receive optimal postoperative outcomes, validated by personalized CFD testing. Copyright © 2018 Elsevier Ltd. All rights reserved.
A novel technique for reference point generation to aid in intraoral scan alignment.
Renne, Walter G; Evans, Zachary P; Mennito, Anthony; Ludlow, Mark
2017-11-12
When using a completely digital workflow on larger prosthetic cases, it is often difficult to communicate the provisional prosthetic information to the laboratory or chairside computer-aided design and computer-aided manufacturing system. The problem arises when common hard tissue data points are limited or non-existent, as in complete-arch cases in which the 3D model of the complete-arch provisional restorations must be aligned perfectly with the 3D model of the complete-arch preparations. In these instances, soft tissue is not enough to ensure an accurate automatic or manual alignment owing to a lack of well-defined reference points. A new technique is proposed for the proper digital alignment of the 3D virtual model of the provisional prosthesis to the 3D virtual model of the prepared teeth in cases where common and coincident hard tissue data points are limited. Clinical considerations: A technique is described in which fiducial composite resin dots are temporarily placed on the intraoral keratinized tissue in strategic locations prior to final impressions. These fiducial dots provide coincident and clear 3D data points that, when scanned into a digital impression, allow superimposition of the 3D models. Composite resin dots on keratinized tissue successfully allowed accurate merging of the provisional restoration and post-preparation 3D models for the purpose of using the provisional restorations as a guide for the final restoration. CLINICAL SIGNIFICANCE: Composite resin dots placed temporarily on attached tissue were successful at allowing accurate merging of the provisional restoration 3D models to the preparation 3D models for the purposes of using the provisional restorations as a guide for final restoration design and manufacturing. In this case, they allowed precise superimposition of the 3D models made in the absence of any other hard tissue reference points, resulting in the fabrication of ideal final restorations. © 2017 Wiley Periodicals, Inc.
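Once coincident fiducial points exist in both scans, the superimposition reduces to a rigid landmark fit. A minimal sketch of that general step, using the Kabsch/SVD method on invented dot coordinates (not data from this report), is:

```python
# Sketch: rigid (rotation + translation) superimposition of two 3D models from
# corresponding fiducial points via the Kabsch/SVD method. Coordinates are
# invented stand-ins for the scanned composite-resin dots.
import numpy as np

provisional = np.array([[1.0, 2.0, 0.5], [10.2, 2.1, 0.4], [5.5, 8.3, 0.9], [2.2, 9.0, 0.3]])
preparation = np.array([[1.4, 1.7, 0.6], [10.5, 1.4, 0.5], [6.1, 7.9, 1.0], [2.9, 8.7, 0.4]])

def kabsch(src, dst):
    """Best-fit rotation R and translation t such that R @ src_i + t ~ dst_i."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

R, t = kabsch(provisional, preparation)
aligned = provisional @ R.T + t
rms = np.sqrt(np.mean(np.sum((aligned - preparation) ** 2, axis=1)))
print(f"post-alignment RMS fiducial error: {rms:.2f} mm")
```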
Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds
NASA Astrophysics Data System (ADS)
Minocha, Shailey; Reeves, Ahmad John
Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and interact with via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and wayfinding in 3D virtual worlds may impact on student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection included semi-structured interviews with Second Life students, educators and designers. The findings have revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.
NASA Astrophysics Data System (ADS)
Stanga, C.; Spinelli, C.; Brumana, R.; Oreni, D.; Valente, R.; Banfi, F.
2017-08-01
This essay describes the combination of 3D solutions and software techniques with traditional studies and research in order to achieve an integrated digital documentation linking performed surveys, collected data, and historical research. The approach of this study is based on the comparison of survey data with historical research, and on interpretations deduced from cross-checking data between the two sources. The case study is the Basilica of S. Ambrogio in Milan, one of the greatest monuments in the city, a pillar of Christianity and of the history of architecture. It is characterized by a complex stratification of phases of restoration and transformation. Rediscovering the great richness of the traditional architectural notebook, which collected surveys and data, this research aims to realize a virtual notebook, based on a 3D model that supports the dissemination of the collected information. It can potentially be understood and accessed by anyone through the development of a mobile app. The 3D model was used to explore the different historical phases, starting from the recent layers and moving to the oldest ones, through a virtual subtraction process, following the methods of the Archaeology of Architecture. Its components can be imported into parametric software and recognized in both their morphological and typological aspects. The model is based on the concepts of LoD and ReverseLoD in order to fit the accuracy required by each step of the research.
Oh, Hyun Jun; Yang, Il-Hyung
2016-01-01
Objectives: To propose a novel method for determining the three-dimensional (3D) root apex position of maxillary teeth using a two-dimensional (2D) panoramic radiograph image and a 3D virtual maxillary cast model. Methods: The subjects were 10 adult orthodontic patients treated with non-extraction. The multiple camera matrices were used to define transformative relationships between tooth images of the 2D panoramic radiographs and the 3D virtual maxillary cast models. After construction of the root apex-specific projective (RASP) models, overdetermined equations were used to calculate the 3D root apex position with a direct linear transformation algorithm and the known 2D co-ordinates of the root apex in the panoramic radiograph. For verification of the estimated 3D root apex position, the RASP and 3D-CT models were superimposed using a best-fit method. Then, the values of estimation error (EE; mean, standard deviation, minimum error and maximum error) between the two models were calculated. Results: The intraclass correlation coefficient values exhibited good reliability for the landmark identification. The mean EE of all root apices of maxillary teeth was 1.88 mm. The EE values, in descending order, were as follows: canine, 2.30 mm; first premolar, 1.93 mm; second premolar, 1.91 mm; first molar, 1.83 mm; second molar, 1.82 mm; lateral incisor, 1.80 mm; and central incisor, 1.53 mm. Conclusions: Camera calibration technology allows reliable determination of the 3D root apex position of maxillary teeth without the need for 3D-CT scan or tooth templates. PMID:26317151
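The direct linear transformation step mentioned here, recovering a 3D point from its 2D projections given several projection matrices, has a compact least-squares form. The sketch below uses two synthetic 3x4 camera matrices rather than the calibrated panoramic/cast geometry of the study:

```python
# Hedged sketch of direct-linear-transformation (DLT) triangulation: recover a 3D
# point from 2D projections given 3x4 projection matrices. The matrices and image
# coordinates are synthetic placeholders, not the study's calibrated geometry.
import numpy as np

def triangulate_dlt(P_list, uv_list):
    """Least-squares 3D point from two or more projections (overdetermined system)."""
    A = []
    for P, (u, v) in zip(P_list, uv_list):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]                       # de-homogenize

# Two synthetic cameras separated by a baseline along x, observing (10, 20, 30)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])

X_true = np.array([10.0, 20.0, 30.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_dlt([P1, P2], [uv1, uv2]))  # ~ [10, 20, 30]
```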
Using videogrammetry and 3D image reconstruction to identify crime suspects
NASA Astrophysics Data System (ADS)
Klasen, Lena M.; Fahlander, Olov
1997-02-01
Anthropometry and movement are unique to every individual human being. We identify persons we know by recognizing the way they look and move. By quantifying these measures with image processing methods, they can serve as a tool in police work, complementing the ability of the human eye. The idea is to use virtual 3-D parameterized models of the human body to measure the anthropometry and movements of a crime suspect. The Swedish National Laboratory of Forensic Science, in cooperation with SAAB Military Aircraft, has developed methods for measuring the height of persons from video sequences. However, there is much unused information in a digital image sequence from a crime scene. The main aim of this paper is to give an overview of the current research project at Linkoping University, Image Coding Group, where methods to measure anthropometrical data and movements by using virtual 3-D parameterized models of the person in the crime scene are being developed. The height of an individual might vary by up to plus or minus 10 cm depending on whether the person is in an upright position or not. When measuring under the best available conditions, the height still varies within plus or minus 1 cm. Using a full 3-D model provides a rich set of anthropometric measures describing the person in the crime scene. Once such a model has been obtained, the movements can be quantified as well. The results depend strongly on the accuracy of the 3-D model, and the strategy for obtaining such an accurate 3-D model is to make one estimate per image frame by using 3-D scene reconstruction, with an averaged 3-D model as the final result from which the anthropometry and movements are calculated.
Kim, Jong Bae; Brienza, David M
2006-01-01
A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. The characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models, by analyzing the accuracy of dimensional measurements in a virtual environment and by comparing dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.
Mobile Virtual Reality : A Solution for Big Data Visualization
NASA Astrophysics Data System (ADS)
Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.
2015-12-01
Pursuits in the geological sciences and other branches of quantitative science often require data visualization frameworks, which are in continual need of improvement and new ideas. Virtual reality is a visualization medium with large audiences, originally designed for gaming. Virtual reality can be delivered in CAVE-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused on a larger market; the Oculus Rift is the first of this kind of mobile device. The Unity engine makes it possible to convert data files into a mesh of isosurfaces rendered in 3D. A user is immersed inside the virtual reality and is able to move within and around the data using arrow keys and other steering devices, similar to those employed with an Xbox. With the introduction of products like the Oculus Rift and HoloLens, combined with ever-increasing mobile computing power, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As new products like the Surface Pro 4 and other high-powered yet very mobile computers reach the market, the RAM and graphics capacity necessary to run these models is more widely available, opening doors to this new reality. The computing requirements needed to run these models are a mere 8 GB of RAM and a 2 GHz CPU, which many mobile computers now exceed. Using the Unity 3D software to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed while wearing the Oculus Rift device. This new method of analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones and jewelry. Using hologram technology to capture the 3D shape, color, and imperfections of minerals and stones in high resolution, detailed review and analysis of a stone can be done remotely without ever seeing the real thing. This strategy can be a game-changer for shoppers, who would no longer have to go to the store.
Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D
2001-07-01
The aim of this study was to demonstrate the possibilities of a hybrid rendering method, combining color-coded surface rendering and volume rendering, and the feasibility of performing surface-based virtual endoscopy with different representation models in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization exploits the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationship of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for a time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.
ERIC Educational Resources Information Center
Zhong, Ying
2013-01-01
Virtual worlds are well-suited for building virtual laboratories for educational purposes to complement hands-on physical laboratories. However, educators may face technical challenges because developing virtual worlds requires skills in programming and 3D design. Current virtual world building tools are developed for users who have programming…
Exploring the User Experience of Three-Dimensional Virtual Learning Environments
ERIC Educational Resources Information Center
Shin, Dong-Hee; Biocca, Frank; Choo, Hyunseung
2013-01-01
This study examines the users' experiences with three-dimensional (3D) virtual environments to investigate the areas of development as a learning application. For the investigation, the modified technology acceptance model (TAM) is used with constructs from expectation-confirmation theory (ECT). Users' responses to questions about cognitive…
Hybrid 3D printing: a game-changer in personalized cardiac medicine?
Kurup, Harikrishnan K N; Samuel, Bennett P; Vettukattil, Joseph J
2015-12-01
Three-dimensional (3D) printing in congenital heart disease has the potential to increase procedural efficiency and patient safety by improving interventional and surgical planning and reducing radiation exposure. Cardiac magnetic resonance imaging and computed tomography are usually the source datasets to derive 3D printing. More recently, 3D echocardiography has been demonstrated to derive 3D-printed models. The integration of multiple imaging modalities for hybrid 3D printing has also been shown to create accurate printed heart models, which may prove to be beneficial for interventional cardiologists, cardiothoracic surgeons, and as an educational tool. Further advancements in the integration of different imaging modalities into a single platform for hybrid 3D printing and virtual 3D models will drive the future of personalized cardiac medicine.
ERIC Educational Resources Information Center
Jensen, Jens F.
This paper addresses some of the central questions currently related to 3-Dimensional Inhabited Virtual Worlds (3D-IVWs), their virtual interactions, and communication, drawing from the theory and methodology of sociology, interaction analysis, interpersonal communication, semiotics, cultural studies, and media studies. First, 3D-IVWs--seen as a…
Rapid prototyping 3D virtual world interfaces within a virtual factory environment
NASA Technical Reports Server (NTRS)
Kosta, Charles Paul; Krolak, Patrick D.
1993-01-01
On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol, we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.
Femur Model Reconstruction Based on Reverse Engineering and Rapid Prototyping
NASA Astrophysics Data System (ADS)
Tang, Tongming; Zhang, Zheng; Ni, Hongjun; Deng, Jiawen; Huang, Mingyu
Precise reconstruction of 3D models is fundamental and crucial to research on the human femur. In this paper we present our approach to tackling this problem. The surface of a human femur was scanned using a hand-held 3D laser scanner. The data obtained, in the form of a point cloud, were then processed using the reverse engineering software Geomagic and the CAD/CAM software CimatronE to reconstruct a digital 3D model. The digital model was then used by the rapid prototyping machine to build a physical model of the human femur using 3D printing. The geometric characteristics of the obtained physical model matched those of the original femur. The process of "physical object - 3D data - digital 3D model - physical model" presented in this paper provides a foundation for precise modeling in the digital manufacturing, virtual assembly, stress analysis, and simulated surgery of artificial bionic femurs.
Principles of three-dimensional printing and clinical applications within the abdomen and pelvis.
Bastawrous, Sarah; Wake, Nicole; Levin, Dmitry; Ripley, Beth
2018-04-04
Improvements in technology and reduction in costs have led to widespread interest in three-dimensional (3D) printing. 3D-printed anatomical models contribute to personalized medicine, surgical planning, and education across medical specialties, and these models are rapidly changing the landscape of clinical practice. A physical object that can be held in one's hands allows for significant advantages over standard two-dimensional (2D) or even 3D computer-based virtual models. Radiologists have the potential to play a significant role as consultants and educators across all specialties by providing 3D-printed models that enhance clinical care. This article reviews the basics of 3D printing, including how models are created from imaging data, clinical applications of 3D printing within the abdomen and pelvis, implications for education and training, limitations, and future directions.
ERIC Educational Resources Information Center
Omale, Nicholas; Hung, Wei-Chen; Luetkehans, Lara; Cooke-Plagwitz, Jessamine
2009-01-01
The purpose of this article is to present the results of a study conducted to investigate how the attributes of 3-D technology such as avatars, 3-D space, and comic style bubble dialogue boxes affect participants' social, cognitive, and teaching presences in a blended problem-based learning environment. The community of inquiry model was adopted…
Improved Virtual Planning for Bimaxillary Orthognathic Surgery.
Hatamleh, Muhanad; Turner, Catherine; Bhamrah, Gurprit; Mack, Gavin; Osher, Jonas
2016-09-01
Conventional model surgery planning for bimaxillary orthognathic surgery can be laborious and time-consuming and may contain potential errors; hence, three-dimensional (3D) virtual orthognathic planning has been proven to be an efficient, reliable, and cost-effective alternative. In this report, 3D planning is described for a patient presenting with a Class III incisor relationship on a Skeletal III base with pan-facial asymmetry complicated by reverse overjet and anterior open bite. Combined scan data from direct cone beam computed tomography and an indirect dental scan were used in the planning. Additionally, a new method of establishing optimum intercuspation by scanning the dental casts in final occlusion and positioning them in the composite-scan model is shown. Furthermore, conventional model surgery planning was carried out following an in-house protocol. Intermediate and final intermaxillary splints were produced following the conventional method and by 3D printing. Three-dimensional planning showed great accuracy and treatment outcome and reduced laboratory time in comparison with the conventional method. Establishing the final dental occlusion on casts and integrating it in the final 3D planning enabled us to achieve the best possible intercuspation.
NASA Astrophysics Data System (ADS)
Hunt, Gordon W.; Hemler, Paul F.; Vining, David J.
1997-05-01
Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to errors of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.
Borrel, Alexandre; Fourches, Denis
2017-12-01
There is a growing interest in the broad use of Augmented Reality (AR) and Virtual Reality (VR) in the fields of bioinformatics and cheminformatics to visualize complex biological and chemical structures. AR and VR technologies allow for stunning and immersive experiences, offering untapped opportunities for both research and education purposes. However, preparing 3D models ready to use for AR and VR is time-consuming and requires a technical expertise that severely limits the development of new contents of potential interest for structural biologists, medicinal chemists, molecular modellers and teachers. Herein we present the RealityConvert software tool and associated website, which allow users to easily convert molecular objects to high quality 3D models directly compatible with AR and VR applications. For chemical structures, in addition to the 3D model generation, RealityConvert also generates image trackers, useful to universally call and anchor that particular 3D model when used in AR applications. The ultimate goal of RealityConvert is to facilitate and boost the development and accessibility of AR and VR contents for bioinformatics and cheminformatics applications. Availability: http://www.realityconvert.com. Contact: dfourch@ncsu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Usability Evaluation of an Adaptive 3D Virtual Learning Environment
ERIC Educational Resources Information Center
Ewais, Ahmed; De Troyer, Olga
2013-01-01
Using 3D virtual environments for educational purposes is becoming attractive because of their rich presentation and interaction capabilities. Furthermore, dynamically adapting the 3D virtual environment to the personal preferences, prior knowledge, skills and competence, learning goals, and the personal or (social) context in which the learning…
Magic cards: a new augmented-reality approach.
Demuynck, Olivier; Menendez, José Manuel
2013-01-01
Augmented reality (AR) commonly uses markers for detection and tracking. Such multimedia applications associate each marker with a virtual 3D model stored in the memory of the camera-equipped device running the application. Users of these applications are limited in their interactions because creating new content requires knowing how to design and program 3D objects; this generally prevents them from developing their own entertainment AR applications. The Magic Cards application solves this problem by offering an easy way to create and manage an unlimited number of virtual objects that are encoded on special markers.
Hwang, Minki; Song, Jun-Seop; Lee, Young-Seon; Li, Changyong; Shim, Eun Bo; Pak, Hui-Nam
2016-01-01
Background Although rotors have been considered among the drivers of atrial fibrillation (AF), the rotor definition is inconsistent. We evaluated the nature of rotors in 2D and 3D in silico models of persistent AF (PeAF) by analyzing phase singularity (PS), dominant frequency (DF), Shannon entropy (ShEn), and complex fractionated atrial electrogram cycle length (CFAE-CL) and their ablation. Methods Mother rotor was spatiotemporally defined as stationary reentries with a meandering tip remaining within half the wavelength and lasting longer than 5 s. We generated 2D and 3D maps of the PS, DF, ShEn, and CFAE-CL during AF. The spatial correlations and ablation outcomes targeting each parameter were analyzed. Results 1. In the 2D PeAF model, we observed a mother rotor that matched relatively well with DF (>9 Hz, 71.0%, p<0.001), ShEn (upper 2.5%, 33.2%, p<0.001), and CFAE-CL (lower 2.5%, 23.7%, p<0.001). 2. The 3D PeAF model also showed mother rotors that had spatial correlations with DF (>5.5 Hz, 39.7%, p<0.001), ShEn (upper 8.5%, 15.1%, p<0.001), and CFAE (lower 8.5%, 8.0%, p = 0.002). 3. In both the 2D and 3D models, virtual ablation targeting the upper 5% of the DF terminated AF within 20 s, but not the ablations based on long-lasting PS, high ShEn area, or lower CFAE-CL area. Conclusion Mother rotors were observed in both 2D and 3D human AF models. Rotor locations were well represented by DF, and their virtual ablation altered wave dynamics and terminated AF. PMID:26909492
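Two of the mapping parameters used above, dominant frequency and Shannon entropy, are standard signal measures. The sketch below computes both for a single synthetic electrogram; the activation rate, search band and histogram binning are illustrative assumptions, not the study's exact pipeline:

```python
# Sketch: dominant frequency (DF) and Shannon entropy (ShEn) of one synthetic
# electrogram. The 9 Hz activation rate, the 3-15 Hz search band and the 64-bin
# histogram are illustrative assumptions, not the study's processing settings.
import numpy as np

fs = 1000.0                                  # sampling rate [Hz]
t = np.arange(0.0, 5.0, 1.0 / fs)            # 5 s trace
rng = np.random.default_rng(1)
egm = np.sin(2 * np.pi * 9.0 * t) + 0.3 * rng.normal(size=t.size)

# Dominant frequency: spectral peak within an assumed AF band
freqs = np.fft.rfftfreq(egm.size, d=1.0 / fs)
spectrum = np.abs(np.fft.rfft(egm - egm.mean()))
band = (freqs >= 3.0) & (freqs <= 15.0)
dominant_frequency = freqs[band][np.argmax(spectrum[band])]

# Shannon entropy of the amplitude histogram
counts, _ = np.histogram(egm, bins=64)
p = counts / counts.sum()
p = p[p > 0]
shannon_entropy = -np.sum(p * np.log2(p))

print(f"DF ≈ {dominant_frequency:.1f} Hz, ShEn ≈ {shannon_entropy:.2f} bits")
```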
Zeng, Canjun; Xing, Weirong; Wu, Zhanglin; Huang, Huajun; Huang, Wenhua
2016-10-01
Treatment of acetabular fractures remains one of the most challenging tasks that orthopaedic surgeons face. An accurate assessment of the injuries and preoperative planning are essential for an excellent reduction. The purpose of this study was to evaluate the feasibility, accuracy and effectiveness of 3D printing technology and computer-assisted virtual surgical procedures for preoperative planning in acetabular fractures. We hypothesised that more accurate preoperative planning using 3D-printed models would reduce the operation time and significantly improve the outcome of acetabular fracture repair. Ten patients with acetabular fractures were recruited prospectively and examined by CT scanning. A 3-D model of each acetabular fracture was reconstructed with MIMICS 14.0 software from the DICOM file of the CT data. Bone fragments were moved and rotated to simulate fracture reduction and restore pelvic integrity with virtual fixation. The computer-assisted 3D image of the reduced acetabulum was printed for surgery simulation and plate pre-bending. A postoperative CT scan was performed to compare the consistency of the preoperative planning with the surgical implants by 3D superimposition in MIMICS 14.0, and evaluated by Matta's method. Computer-based pre-operations were precisely mimicked and consistent with the actual operations in all cases. The pre-bent fixation plates had an anatomical shape specifically fitted to the individual pelvis without further bending or adjustment at the time of surgery, and fracture reductions were significantly improved. Seven out of 10 patients had a displacement of the fracture reduction of less than 1 mm; 3 cases had a displacement of between 1 and 2 mm. The 3D printing technology combined with virtual surgery for acetabular fractures is feasible, accurate, and effective, leading to improved patient-specific preoperative planning and outcomes of real surgery. The results provide useful technical tips for planning pelvic surgeries. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Palestini, C.; Basso, A.
2017-11-01
In recent years, increased international investment in hardware and software supporting photomodeling algorithms and laser scanner data management has significantly reduced the cost of operations supporting Augmented Reality and Virtual Reality, which are designed to generate real-time explorable digital environments integrated with virtual stereoscopic headsets. The research analyzes transversal methodologies for adopting these technologies within a specific workflow, in light of issues related to the intensive use of such devices, and outlines a quick overview of a possible "virtual migration" phenomenon, assuming an eventual integration with new high-speed internet systems capable of triggering a massive colonization of cyberspace that, paradoxically, would also affect everyday life and, more generally, human spatial perception. The contribution analyzes the application systems used for low-cost 3D photogrammetry by means of a precise pipeline, clarifying how a 3D model is generated, automatically retopologized, textured by color painting or photo-cloning techniques, and optimized for parametric insertion into virtual exploration platforms. The workflow analysis follows case studies related to photomodeling, digital retopology, and the "virtual 3D transfer" of some small archaeological artifacts and of an architectural compartment corresponding to the pronaos of the Aurum, a building designed in the 1940s by Michelucci. All operations are conducted with cheap or freely licensed software that today offers almost the same performance as its paid counterparts, progressively improving in data processing speed and management.
A web-system of virtual morphometric globes
NASA Astrophysics Data System (ADS)
Florinsky, Igor; Garov, Andrei; Karachevtseva, Irina
2017-04-01
Virtual globes — programs implementing interactive three-dimensional (3D) models of planets — are increasingly used in geo- and planetary sciences. We develop a web-system of virtual morphometric globes. As the initial data, we used the following global digital elevation models (DEMs): (1) a DEM of the Earth extracted from SRTM30_PLUS database; (2) a DEM of Mars extracted from the Mars Orbiter Laser Altimeter (MOLA) gridded data record archive; and (3) A DEM of the Moon extracted from the Lunar Orbiter Laser Altimeter (LOLA) gridded data record archive. From these DEMs, we derived global digital models of the following 16 local, nonlocal, and combined morphometric variables: horizontal curvature, vertical curvature, mean curvature, Gaussian curvature, minimal curvature, maximal curvature, unsphericity curvature, difference curvature, vertical excess curvature, horizontal excess curvature, ring curvature, accumulation curvature, catchment area, dispersive area, topographic index, and stream power index (definitions, formulae, and interpretations can be found elsewhere [1]). To calculate local morphometric variables, we applied a finite-difference method intended for spheroidal equal angular grids [1]. Digital models of a nonlocal and combined morphometric variables were derived by a method of Martz and de Jong adapted to spheroidal equal angular grids [1]. DEM processing was performed in the software LandLord [1]. The calculated morphometric models were integrated into the testing version of the system. The following main functions are implemented in the system: (1) selection of a celestial body; (2) selection of a morphometric variable; (3) 2D visualization of a calculated global morphometric model (a map in equirectangular projection); (4) 3D visualization of a calculated global morphometric model on the sphere surface (a globe by itself); (5) change of a globe scale (zooming); and (6) globe rotation by an arbitrary angle. The testing version of the system represents morphometric models with the resolution of 15'. In the final version of the system, we plan to implement a multiscale 3D visualization for models of 17 morphometric variables with the resolution from 15' to 30". The web-system of virtual morphometric globes is designed as a separate unit of a 3D web GIS for storage, processing, and access to planetary data [2], which is currently developed as an extension of an existing 2D web GIS (http://cartsrv.mexlab.ru/geoportal). Free, real-time web access to the system of virtual globes will be provided. The testing version of the system is available at: http://cartsrv.mexlab.ru/virtualglobe. The study is supported by the Russian Foundation for Basic Research, grant 15-07-02484. References 1. Florinsky, I.V., 2016. Digital Terrain Analysis in Soil Science and Geology. 2nd ed. Academic Press, Amsterdam, 486 p. 2. Garov, A.S., Karachevtseva, I.P., Matveev, E.V., Zubarev, A.E., and Florinsky, I.V., 2016. Development of a heterogenic distributed environment for spatial data processing using cloud technologies. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 41(B4): 385-390.
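Two of the local variables listed above, mean and Gaussian curvature, follow from standard surface differential geometry once the partial derivatives of elevation are estimated. The sketch below treats the DEM as a planar regular grid with a synthetic surface, a simplification of the spheroidal equal angular grids and the LandLord implementation used in the project:

```python
# Simplified sketch: mean and Gaussian curvature of a DEM treated as a planar
# regular grid (the project itself works on spheroidal equal angular grids in
# LandLord; only the underlying differential geometry is shown here).
import numpy as np

rng = np.random.default_rng(2)
dx = 30.0                                                  # grid spacing [m] (assumed)
z = rng.normal(0.0, 5.0, (100, 100)).cumsum(0).cumsum(1)   # synthetic elevation surface

zy, zx = np.gradient(z, dx)                                # first derivatives
zyy, _ = np.gradient(zy, dx)
zxy, zxx = np.gradient(zx, dx)

w = 1.0 + zx**2 + zy**2
mean_curv = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / (2 * w**1.5)
gauss_curv = (zxx * zyy - zxy**2) / w**2

print("mean curvature range:", float(mean_curv.min()), float(mean_curv.max()))
print("Gaussian curvature range:", float(gauss_curv.min()), float(gauss_curv.max()))
```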
Creating photorealistic virtual model with polarization-based vision system
NASA Astrophysics Data System (ADS)
Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi
2005-08-01
Recently, 3D models have been used in many fields such as education, medical services, entertainment, art and digital archiving because of advances in computation, and the demand for photorealistic virtual models keeps increasing as higher realism is sought. In the computer vision field, a number of techniques have been developed for creating such virtual models by observing real objects. In this paper, we propose a method for creating a photorealistic virtual model using a laser range sensor and a polarization-based image capture system. We capture range and color images of an object rotated on a rotary table. Using the reconstructed object shape and the sequence of color images, the parameters of a reflection model are estimated in a robust manner, so that a photorealistic 3D model accounting for surface reflection can be produced. The key point of the proposed method is that the diffuse and specular reflection components are first separated from the color image sequence, and the reflectance parameters of each component are then estimated separately. For the separation of reflection components, we use a polarization filter. This approach enables the estimation of reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected light. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
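The separation step relies on the standard polarization model: through a rotating linear polarizer, the observed pixel intensity varies sinusoidally with twice the polarizer angle, the unpolarized part being mostly diffuse and the polarized part mostly specular. A minimal per-pixel sketch with synthetic samples (the authors' actual capture and fitting details are not reproduced here) is:

```python
# Hedged sketch: per-pixel diffuse/specular separation from intensities observed
# through a linear polarizer at several angles, assuming the standard model
# I(theta) = (Imax+Imin)/2 + (Imax-Imin)/2 * cos(2*theta - 2*phi).
# Unpolarized part (~diffuse) is 2*Imin; polarized part (~specular) is Imax - Imin.
import numpy as np

thetas = np.deg2rad([0.0, 45.0, 90.0, 135.0])     # polarizer angles (assumed)
I_obs = np.array([0.62, 0.41, 0.30, 0.51])        # observed intensities (synthetic)

# Linear least squares for I = a + b*cos(2*theta) + c*sin(2*theta)
A = np.column_stack([np.ones_like(thetas), np.cos(2 * thetas), np.sin(2 * thetas)])
a, b, c = np.linalg.lstsq(A, I_obs, rcond=None)[0]

amplitude = np.hypot(b, c)
I_min, I_max = a - amplitude, a + amplitude
diffuse = 2.0 * I_min                             # unpolarized component
specular = I_max - I_min                          # polarized component
print(f"diffuse ≈ {diffuse:.2f}, specular ≈ {specular:.2f}")
```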
ERIC Educational Resources Information Center
Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako
2009-01-01
This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…
Modeling human behaviors and reactions under dangerous environment.
Kang, J; Wright, D K; Qin, S F; Zhao, Y
2005-01-01
This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers in real time in virtual environments. The development of the system includes: classification of the conscious/subconscious behaviors and reactions of different people; capturing different motion postures with the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling characters' perceptions, modeling characters' decision making, modeling characters' movements, modeling characters' interaction with the environment, and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, and safety planning in chemical factories and in the design of buildings, airplanes, ships and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence; the accurate modeling of human vision, smell, touch and hearing; and the diversity and effects of emotion and personality in decision making. There are three types of software platforms that could be employed to realize motion and intelligence within one system, and their advantages and disadvantages are discussed.
Infusion of a Gaming Paradigm into Computer-Aided Engineering Design Tools
2012-05-03
Virtual Test Bed (VTB), and the gaming tool, Unity3D. This hybrid gaming environment coupled a three-dimensional (3D) multibody vehicle system model...from Google Earth to the 3D visual front-end fabricated around Unity3D. The hybrid environment was sufficiently developed to support analyses of the...The VTB simulation of the vehicle dynamics ran concurrently with and interacted with the gaming engine, Unity3D, which
ERIC Educational Resources Information Center
Wang, Shwu-huey
2012-01-01
In order to understand (1) what kinds of students can be helped by a three-dimensional virtual learning environment (3D VLE), and (2) the relationship between a conventional test (i.e., a paper-and-pencil test) and the 3D VLE used in this study, the study designs a 3D virtual supermarket (3DVS) to help students transform their role…
Options in virtual 3D, optical-impression-based planning of dental implants.
Reich, Sven; Kern, Thomas; Ritter, Lutz
2014-01-01
If a 3D radiograph, which in today's dentistry often consists of a CBCT dataset, is available for computerized implant planning, the 3D planning should also consider functional prosthetic aspects. In a conventional workflow, the CBCT is done with a specially produced radiopaque prosthetic setup that makes the desired prosthetic situation visible during virtual implant planning. If an exclusively digital workflow is chosen, intraoral digital impressions are taken. On these digital models, the desired prosthetic suprastructures are designed. The datasets are virtually superimposed, by a "registration" process, on the corresponding structures (teeth) in the CBCT. Thus, both the osseous and the prosthetic structures are visible in a single 3D application, making it possible to consider surgical and prosthetic aspects. Once the implant positions have been determined on the computer screen, a drilling template is designed digitally. According to this design (CAD), a template is printed or milled in a CAM process. This template is the first physically extant product in the entire workflow. The article discusses the options and limitations of this workflow.
Accuracy of open-source software segmentation and paper-based printed three-dimensional models.
Szymor, Piotr; Kozakiewicz, Marcin; Olszewski, Raphael
2016-02-01
In this study, we aimed to verify the accuracy of models created with the help of open-source Slicer 3.6.3 software (Surgical Planning Lab, Harvard Medical School, Harvard University, Boston, MA, USA) and the Mcor Matrix 300 paper-based 3D printer. Our study focused on the accuracy of recreating the walls of the right orbit of a cadaveric skull. Cone beam computed tomography (CBCT) of the skull was performed (0.25-mm pixel size, 0.5-mm slice thickness). Acquired DICOM data were imported into Slicer 3.6.3 software, where segmentation was performed. A virtual model was created and saved as an .STL file and imported into Netfabb Studio professional 4.9.5 software. Three different virtual models were created by cutting the original file along three different planes (coronal, sagittal, and axial). All models were printed with a Selective Deposition Lamination Technology Matrix 300 3D printer using 80 gsm A4 paper. The models were printed so that their cutting plane was parallel to the paper sheets creating the model. Each model (coronal, sagittal, and axial) consisted of three separate parts (∼200 sheets of paper each) that were glued together to form a final model. The skull and created models were scanned with a three-dimensional (3D) optical scanner (Breuckmann smart SCAN) and were saved as .STL files. Comparisons of the orbital walls of the skull, the virtual model, and each of the three paper models were carried out with GOM Inspect 7.5SR1 software. Deviations measured between the models analysed were presented in the form of a colour-labelled map and covered with an evenly distributed network of points automatically generated by the software. An average of 804.43 ± 19.39 points for each measurement was created. Differences measured in each point were exported as a .csv file. The results were statistically analysed using Statistica 10, with statistical significance set at p < 0.05. The average number of points created on models for each measurement was 804.43 ± 19.39; however, deviation in some of the generated points could not be calculated, and those points were excluded from further calculations. From 94% to 99% of the measured absolute deviations were <1 mm. The mean absolute deviation between the skull and virtual model was 0.15 ± 0.11 mm, between the virtual and printed models was 0.15 ± 0.12 mm, and between the skull and printed models was 0.24 ± 0.21 mm. Using the optical scanner and specialized inspection software for measurements of accuracy of the created parts is recommended, as it allows one not only to measure 2-dimensional distances between anatomical points but also to perform more clinically suitable comparisons of whole surfaces. However, it requires specialized software and a very accurate scanner in order to be useful. Threshold-based, manually corrected segmentation of orbital walls performed with 3D Slicer software is accurate enough to be used for creating a virtual model of the orbit. The accuracy of the paper-based Mcor Matrix 300 3D printer is comparable to those of other commonly used 3-dimensional printers and allows one to create precise anatomical models for clinical use. The method of dividing the model into smaller parts and sticking them together seems to be quite accurate, although we recommend it only for creating small, solid models with as few parts as possible to minimize shift associated with gluing. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
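To illustrate the kind of post-processing reported in the study above, the following Python sketch is a hypothetical example (the file name and column layout are assumed, not taken from the paper): it reads per-point deviations exported from the inspection software as a .csv file, drops points for which no deviation could be computed, and reports the mean absolute deviation and the share of points below 1 mm.

import numpy as np

# Assumed one-column CSV of signed deviations in millimetres; NaN marks
# points where the inspection software could not compute a deviation.
deviations = np.genfromtxt("skull_vs_printed_model.csv", delimiter=",")
valid = deviations[~np.isnan(deviations)]

abs_dev = np.abs(valid)
print(f"points used: {valid.size}")
print(f"mean absolute deviation: {abs_dev.mean():.2f} +/- {abs_dev.std():.2f} mm")
print(f"share below 1 mm: {100.0 * np.mean(abs_dev < 1.0):.1f} %")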
Projecting 2D gene expression data into 3D and 4D space.
Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D
2007-04-01
Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture mapping images of gene expression data onto B-spline-based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images to the three dimensions (X, Y, and Z) of a B-spline model. B-spline model frameworks were built either from confocal data or de novo extracted from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize OpenGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.
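The core of the UV-mapping step described above can be sketched in a few lines. The fragment below is a simplified, assumed implementation (not the authors' game-engine scripts): each model vertex carries (u, v) coordinates in [0, 1], and its colour is looked up in the 2D expression image by nearest-neighbour sampling; the v-axis flip is one common texture convention and is an assumption here.

import numpy as np

def sample_texture(image, uv):
    """Nearest-neighbour lookup of per-vertex colours from a 2D image.

    image : (H, W, 3) array of RGB values.
    uv    : (N, 2) array of texture coordinates in [0, 1].
    """
    h, w = image.shape[:2]
    cols = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip(((1.0 - uv[:, 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return image[rows, cols]

# Toy example: a 4x4 "expression image" and three vertices of a surface patch.
image = np.random.rand(4, 4, 3)
uv = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
vertex_colours = sample_texture(image, uv)
print(vertex_colours)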
NASA Astrophysics Data System (ADS)
Aditya, B. R.; Permadi, A.
2018-03-01
This paper describes the implementation of the Unified Theory of Acceptance and Use of Technology (UTAUT) model to assess the use of a virtual classroom in support of teaching and learning in higher education. The purpose of this research is to examine how a virtual classroom that fulfills basic design principles can be accepted and used positively by students. The research methodology uses a quantitative, descriptive approach, with a questionnaire as the instrument for measuring the degree of acceptance of the virtual classroom. The research uses a sample of 105 students in D3 Informatics Management at Telkom University. The result of this research is that students' acceptance and use of the virtual classroom in higher education are positive.
The Virtual Radiopharmacy Laboratory: A 3-D Simulation for Distance Learning
ERIC Educational Resources Information Center
Alexiou, Antonios; Bouras, Christos; Giannaka, Eri; Kapoulas, Vaggelis; Nani, Maria; Tsiatsos, Thrasivoulos
2004-01-01
This article presents Virtual Radiopharmacy Laboratory (VR LAB), a virtual laboratory accessible through the Internet. VR LAB is designed and implemented in the framework of the VirRAD European project. This laboratory represents a 3D simulation of a radio-pharmacy laboratory, where learners, represented by 3D avatars, can experiment on…
3D virtual human atria: A computational platform for studying clinical atrial fibrillation.
Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui
2011-10-01
Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria--the 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Primarily, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves; and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi-scale electrical phenomena during atrial conduction and AF arrhythmogenesis. Results of such simulations can be directly compared with electrophysiological and endocardial mapping data, as well as clinical ECG recordings. The virtual human atria can provide in-depth insights into 3D excitation propagation processes within atrial walls of a whole heart in vivo, which is beyond the current technical capabilities of experimental or clinical set-ups. Copyright © 2011 Elsevier Ltd. All rights reserved.
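The abstract above does not reproduce the governing equations, but tissue-level simulations of AP conduction of this kind are typically based on a monodomain reaction-diffusion formulation; the following generic statement of that model is an illustrative assumption, not necessarily the exact equations used in the study:

C_m \frac{\partial V}{\partial t} = \nabla \cdot \left( \mathbf{D}\, \nabla V \right) - I_{\mathrm{ion}}(V, \mathbf{u}), \qquad \frac{d\mathbf{u}}{dt} = \mathbf{f}(V, \mathbf{u}),

where V is the membrane potential, D an anisotropic diffusion tensor encoding the fibre orientation reconstructed from diffusion tensor MRI, I_ion the total ionic current given by the regionally heterogeneous cell AP model with gating and concentration variables u, and C_m the membrane capacitance. Tissue heterogeneity enters through regional variation of I_ion, and anisotropy through D, which is how the abstract's two mechanisms of wave break-down arise in such a model.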
Parametric Modeling as a Technology of Rapid Prototyping in Light Industry
NASA Astrophysics Data System (ADS)
Tomilov, I. N.; Grudinin, S. N.; Frolovsky, V. D.; Alexandrov, A. A.
2016-04-01
The paper deals with a parametric modeling method for virtual mannequins for the purposes of design automation in the clothing industry. The described approach includes the steps of generating a basic model from the initial one (obtained by 3D scanning), its parameterization, and its deformation. Complex surfaces are represented by a wireframe model. The modeling results are evaluated with a set of similarity factors: deformed models are compared with their virtual prototypes, and the results are assessed by a standard deviation factor.
Novel interactive virtual showcase based on 3D multitouch technology
NASA Astrophysics Data System (ADS)
Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian
2009-11-01
A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike a traditional multitouch system, this system can not only realize multi-touch on a plane to implement 2D translation, scaling, and rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing multi-touch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system for exhibiting historical relics and other precious goods.
Just Do It Yourself: Implementing 3D Printing in a Deployed Environment
2017-04-01
This 3D model data can be stored for future manufacturing or manipulated, using software, to improve the parts' design. 3D manufactured parts can be...developed and tested in a virtual environment, very quickly, and before manufacturing has commenced. Additionally, these 3D designs can be...capitalize on this innovative technology. Consequently, AM may offer the best hope for designing a reusable hypersonic weapon. Traditional manufacturing
The Use of 3D Virtual Learning Environments in Training Foreign Language Pre-Service Teachers
ERIC Educational Resources Information Center
Can, Tuncer; Simsek, Irfan
2015-01-01
The recent developments in computer and Internet technologies and in three dimensional modelling necessitates the new approaches and methods in the education field and brings new opportunities to the higher education. The Internet and virtual learning environments have changed the learning opportunities by diversifying the learning options not…
Sorensen, Mads Solvsten; Mosegaard, Jesper; Trier, Peter
2009-06-01
Existing virtual simulators for middle ear surgery are based on 3-dimensional (3D) models from computed tomographic or magnetic resonance imaging data in which image quality is limited by the lack of detail (maximum, approximately 50 voxels/mm3), natural color, and texture of the source material. Virtual training often requires the purchase of a program, a customized computer, and expensive peripherals dedicated exclusively to this purpose. The Visible Ear freeware library of digital images from a fresh-frozen human temporal bone was segmented, and real-time volume rendered as a 3D model of high-fidelity, true color, and great anatomic detail and realism of the surgically relevant structures. A haptic drilling model was developed for surgical interaction with the 3D model. Realistic visualization in high-fidelity (approximately 125 voxels/mm3) and true color, 2D, or optional anaglyph stereoscopic 3D was achieved on a standard Core 2 Duo personal computer with a GeForce 8800 GTX graphics card, and surgical interaction was provided through a relatively inexpensive (approximately $2,500) Phantom Omni haptic 3D pointing device. This prototype is published for download (approximately 120 MB) as freeware at http://www.alexandra.dk/ves/index.htm. With increasing personal computer performance, future versions may include enhanced resolution (up to 8,000 voxels/mm3) and realistic interaction with deformable soft tissue components such as skin, tympanic membrane, dura, and cholesteatomas, features some of which are not possible with computed tomographic-/magnetic resonance imaging-based systems.
Interactive Immersive Virtualmuseum: Digital Documentation for Virtual Interaction
NASA Astrophysics Data System (ADS)
Clini, P.; Ruggeri, L.; Angeloni, R.; Sasso, M.
2018-05-01
Thanks to their playful and educational approach, Virtual Museum systems are very effective for the communication of Cultural Heritage. Among the latest technologies, Immersive Virtual Reality is probably the most appealing and potentially effective for this purpose; nevertheless, because of poor user-system interaction, caused by the incomplete maturity of technology specific to museum applications, immersive installations are still quite uncommon in museums. This paper explores the possibilities offered by this technology and presents a workflow that, starting from digital documentation, makes possible interaction with archaeological finds or any other cultural heritage inside different kinds of immersive virtual reality spaces. Two case studies are presented: the National Archaeological Museum of Marche in Ancona and the 3D reconstruction of the Roman Forum of Fanum Fortunae. The two differ not only conceptually but also in content; while the Archaeological Museum is represented in the application simply using spherical panoramas to give the perception of the third dimension, the Roman Forum is a 3D model that allows visitors to move through the virtual space as in the real one. In both cases, the acquisition phase of the artefacts is central; artefacts are digitized with the photogrammetric Structure from Motion technique and then integrated into the immersive virtual space using a PC with an HTC Vive system that allows the user to interact with the 3D models, turning the manipulation of objects into a fun and exciting experience. The challenge, taking advantage of the latest opportunities made available by photogrammetry and ICT, is to enrich visitors' experience in the real museum, making possible interaction with perishable, damaged, or lost objects and public access to inaccessible or no longer existing places, in this way promoting the preservation of fragile sites.
Landes, Constantin A; Weichert, Frank; Geis, Philipp; Fritsch, Helga; Wagner, Mathias
2006-03-01
Cleft lip and palate reconstructive surgery requires thorough knowledge of normal and pathological labial, palatal, and velopharyngeal anatomy. This study compared two software algorithms and their 3D virtual anatomical reconstruction because exact 3D micromorphological reconstruction may improve learning, reveal spatial relationships, and provide data for mathematical modeling. Transverse and frontal serial sections of the midface of 18 fetal specimens (11th to 32nd gestational week) were used for two manual segmentation approaches. The first manual segmentation approach used bitmap images and either Windows-based or Mac-based SURFdriver commercial software that allowed manual contour matching, surface generation with average slice thickness, 3D triangulation, and real-time interactive virtual 3D reconstruction viewing. The second manual segmentation approach used tagged image format and platform-independent prototypical SeViSe software developed by one of the authors (F.W.). Distended or compressed structures were dynamically transformed. Registration was automatic but allowed manual correction, such as individual section thickness, surface generation, and interactive virtual 3D real-time viewing. SURFdriver permitted intuitive segmentation, easy manual offset correction, and the reconstruction showed complex spatial relationships in real time. However, frequent software crashes and erroneous landmarks appearing "out of the blue," requiring manual correction, were tedious. Individual section thickness, defined smoothing, and unlimited structure number could not be integrated. The reconstruction remained underdimensioned and not sufficiently accurate for this study's reconstruction problem. SeViSe permitted unlimited structure number, late addition of extra sections, and quantified smoothing and individual slice thickness; however, SeViSe required more elaborate work-up compared to SURFdriver, yet detailed and exact 3D reconstructions were created.
Application of computer virtual simulation technology in 3D animation production
NASA Astrophysics Data System (ADS)
Mo, Can
2017-11-01
With the continuous development of computer technology, virtual simulation systems have been further optimized and improved, and the technology has been widely used in various fields such as city construction, interior design, industrial simulation, and tourism teaching. This paper mainly introduces virtual simulation technology as used in 3D animation. Based on an analysis of the characteristics of virtual simulation technology, the ways and means of applying it in 3D animation are investigated. The purpose is to provide a reference for improving 3D effects in future productions.
Codd, Anthony M; Choudhury, Bipasha
2011-01-01
The use of cadavers to teach anatomy is well established, but limitations with this approach have led to the introduction of alternative teaching methods. One such method is the use of three-dimensional virtual reality computer models. An interactive, three-dimensional computer model of human forearm anterior compartment musculoskeletal anatomy was produced using the open-source 3D imaging program "Blender." The aim was to evaluate the use of 3D virtual reality when compared with traditional anatomy teaching methods. Three groups were identified from the University of Manchester second year Human Anatomy Research Skills Module class: a "control" group (no prior knowledge of forearm anatomy), a "traditional methods" group (taught using dissection and textbooks), and a "model" group (taught solely using the e-resource). The groups were assessed on anatomy of the forearm by a ten-question practical examination. ANOVA showed the model group's mean test score to be significantly higher than the control group's (mean 7.25 vs. 1.46, P < 0.001) and not significantly different from the traditional methods group's (mean 6.87, P > 0.5). Feedback from all users of the e-resource was positive. Virtual reality anatomy learning can be used to complement traditional teaching methods effectively. Copyright © 2011 American Association of Anatomists.
Use of camera drive in stereoscopic display of learning contents of introductory physics
NASA Astrophysics Data System (ADS)
Matsuura, Shu
2011-03-01
Simple 3D physics simulations with stereoscopic display were created as part of introductory physics e-Learning. First, the cameras viewing the 3D world were made controllable by the user, which allowed the system and the motions of objects to be observed from any position in the 3D world. Second, cameras could be attached to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more readily on a stereoscopic display than on a non-stereoscopic 3D display. The simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed side by side in the same web page. For viewing, the images of the two viewports were superimposed using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to perceive the characteristics of motion better.
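A minimal sketch of the two-viewport stereo setup described above follows. It is generic Python pseudocode under stated assumptions, not the original ActionScript: two virtual cameras are placed symmetrically about the viewer position, offset along the camera's right vector by half of an assumed interocular distance, and each renders one of the parallel viewports.

import numpy as np

def stereo_camera_positions(eye, look_at, up, interocular=0.065):
    """Return left/right camera positions for a simple parallel-axis stereo rig."""
    forward = look_at - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    half = 0.5 * interocular
    return eye - half * right, eye + half * right

eye = np.array([0.0, 1.6, 5.0])      # viewer position in the 3D world
look_at = np.array([0.0, 1.0, 0.0])  # object being observed
up = np.array([0.0, 1.0, 0.0])
left_cam, right_cam = stereo_camera_positions(eye, look_at, up)
print(left_cam, right_cam)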
Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target
NASA Astrophysics Data System (ADS)
Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.
2016-06-01
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total, the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach, we evaluate the accuracy of virtual image points projected back into 3D space.
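One way to read the "scaled inverse virtual depth difference" mentioned in the abstract above is the following residual form; this is an illustrative interpretation under assumed notation, not the paper's exact formulation:

r_z = s \left( \frac{1}{v_{\mathrm{obs}}} - \frac{1}{\hat{v}(\mathbf{X}, \boldsymbol{\theta})} \right),

where v_obs is the measured virtual depth of a calibration point, \hat{v}(X, θ) the virtual depth predicted by projecting the 3D target point X through the current intrinsic and distortion parameters θ, and s a scale factor chosen so that the depth residual is commensurate with the two lateral image-plane residuals in the bundle adjustment.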
Harris, Bryan T; Montero, Daniel; Grant, Gerald T; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao
2017-02-01
This clinical report proposes a digital workflow using 2-dimensional (2D) digital photographs, a 3D extraoral facial scan, and cone beam computed tomography (CBCT) volumetric data to create a 3D virtual patient with craniofacial hard tissue, remaining dentition (including surrounding intraoral soft tissue), and the realistic appearance of facial soft tissue at an exaggerated smile under static conditions. The 3D virtual patient was used to assist the virtual diagnostic tooth arrangement process, providing the patient with a pleasing preoperative virtual smile design that harmonized with facial features. The 3D virtual patient was also used to gain the patient's pretreatment approval (as a communication tool), design a prosthetically driven surgical plan for computer-guided implant surgery, and fabricate the computer-aided design and computer-aided manufacturing (CAD-CAM) interim prostheses. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition
NASA Astrophysics Data System (ADS)
Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro
This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized glasses based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show how dynamic gestures are effectively recognized so that a more natural interaction and immersive navigation in the virtual world is achieved.
[Preliminary use of HoloLens glasses in surgery of liver cancer].
Shi, Lei; Luo, Tao; Zhang, Li; Kang, Zhongcheng; Chen, Jie; Wu, Feiyue; Luo, Jia
2018-05-28
Objective: To establish a preoperative three-dimensional (3D) model of liver cancer and to precisely match the preoperative plan with the target organs during the operation. Methods: 3D model reconstruction based on magnetic resonance data, combined with virtual reality technology via HoloLens glasses, was applied in liver cancer surgery to achieve preoperative 3D modeling and surgical planning and to match them directly with the target organs during the operation. Results: The 3D model reconstruction of liver cancer based on magnetic resonance data was completed. An exact match with the target organ was performed during the operation via HoloLens glasses guided by the 3D model. Conclusion: Magnetic resonance data can be used for 3D model reconstruction to improve preoperative assessment and accurate matching during the operation.
Designing Virtual Museum Using Web3D Technology
NASA Astrophysics Data System (ADS)
Zhao, Jianghai
Virtual reality technology (VRT) inherently has the potential to construct an effective learning environment thanks to its 3I characteristics: Interaction, Immersion and Imagination. As VRT develops, it is being applied in education in increasingly profound ways, and the Virtual Museum is one such application. The Virtual Museum is based on Web3D technology, and extensibility is the most important factor. After weighing the advantages and disadvantages of each Web3D technology, VRML, Cult3D and Viewpoint were chosen. A web chatroom based on Flash and ASP technology has also been created to make the Virtual Museum an interactive learning environment.
Virtual reality hardware for use in interactive 3D data fusion and visualization
NASA Astrophysics Data System (ADS)
Gourley, Christopher S.; Abidi, Mongi A.
1997-09-01
Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide giving a variable field-of-view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for use in displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package which has built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.
NASA Astrophysics Data System (ADS)
Canevese, E. P.; De Gottardo, T.
2017-05-01
Morphometric and photogrammetric knowledge, combined with historical research, is an indispensable prerequisite for the protection and enhancement of historical, architectural and cultural heritage. Nowadays the use of BIM (Building Information Modeling) as a supporting tool for restoration and conservation purposes is becoming more and more popular. However, this tool is not fully adequate in this context because of its simplified representation of three-dimensional models, resulting from the solid modelling techniques (mostly used in virtual reality) that cause the loss of important morphometric information. One solution to this problem is to devise new advanced tools and methods that enable the building of effective and efficient three-dimensional representations supporting the correct geometric analysis of the built model. Twenty years of interdisciplinary research activity by Virtualgeo have focused on developing new methods and tools for 3D modeling that go beyond the simplified digital-virtual reconstruction used in standard solid modeling: methods and tools allowing the creation of informative and true-to-life three-dimensional representations that can be used by academics or industry professionals to carry out diverse analysis, research and design activities. Virtualgeo's applied research, in line with the European Commission 2013's directives of Reflective 7 - Horizon 2020 Project, gave birth to the GeomaticsCube Ecosystem, an ecosystem resulting from different technologies based on experience garnered from various fields, metrology in particular, a discipline used in the automotive and aviation industries and in general mechanical engineering. The implementation of the metrological functionality is only possible if the 3D model is created with special modeling techniques based on surface modeling that allow, as opposed to solid modeling, a true-to-life 3D representation of the artefact. The advantages offered by metrological analysis are varied and important because they permit a precise and detailed overview of the 3D model's characteristics, and especially the monitoring of the model itself over time; this information is impossible to obtain from a three-dimensional representation produced with solid modelling techniques. The applied research activities also focus on obtaining a photogrammetric and informative 3D model. Two distinct applications have been developed for this purpose: the first allows the classification of each individual element and the association of its material characteristics during the 3D modelling phase, whilst the second allows segmentation of the photogrammetric 3D model in its diverse aspects (material, decay-related, chronological), with the possibility of using and populating the database associated with the 3D model with all types of multimedia content.
3D Virtual Learning Environments in Education: A Meta-Review
ERIC Educational Resources Information Center
Reisoglu, I.; Topu, B.; Yilmaz, R.; Karakus Yilmaz, T.; Göktas, Y.
2017-01-01
The aim of this study is to investigate recent empirical research studies about 3D virtual learning environments. A total of 167 empirical studies that involve the use of 3D virtual worlds in education were examined by meta-review. Our findings show that the "Second Life" platform has been frequently used in studies. Among the reviewed…
Learning in 3D Virtual Environments: Collaboration and Knowledge Spirals
ERIC Educational Resources Information Center
Burton, Brian G.; Martin, Barbara N.
2010-01-01
The purpose of this case study was to determine if learning occurred within a 3D virtual learning environment by determining if elements of collaboration and Nonaka and Takeuchi's (1995) knowledge spiral were present. A key portion of this research was the creation of a Virtual Learning Environment. This 3D VLE utilized the Torque Game Engine…
Swennen, Gwen R J
2014-11-01
The purpose of this article is to evaluate the timing of three-dimensional (3D) virtual treatment planning of orthognathic surgery in the daily clinical routine. A total of 350 consecutive patients were included in this study. All patients were scanned following the standardized "Triple CBCT Scan Protocol" in centric relation. Integrated 3D virtual planning and actual surgery were performed by the same surgeon in all patients. Although clinically acceptable, software improvements, especially regarding 3D virtual occlusal definition, are still needed to make 3D virtual planning of orthognathic surgery less time-consuming and more user-friendly for the clinician. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Jiman, Juhanita
This paper discusses the use of Virtual Reality (VR) in e-learning environments where an intelligent three-dimensional (3D) virtual person plays the role of an instructor. With the existence of this virtual instructor, it is hoped that the teaching and learning in the e-environment will be more effective and productive. This virtual 3D animated…
ILP-2 modeling and virtual screening of an FDA-approved library:a possible anticancer therapy.
Khalili, Saeed; Mohammadpour, Hemn; Shokrollahi Barough, Mahideh; Kokhaei, Parviz
2016-06-23
The members of the inhibitors of apoptosis protein (IAP) family inhibit diverse components of the caspase signaling pathway, notably caspases 3, 7, and 9. ILP-2 (BIRC-8) is the most recently identified member of the IAPs, mainly interacting with caspase 9. This interaction would eventually lead to death resistance in the case of cancerous cells. Therefore, structural modeling of ILP-2 and finding applicable inhibitors of its interaction with caspase 9 is a compelling challenge. Three main protein modeling approaches along with various model refinement measures were harnessed to achieve a reliable 3D model, using state-of-the-art software. Thereafter, the selected model was employed to perform virtual screening of an FDA-approved library. A model built by a combinatorial approach (homology and ab initio approaches) was chosen as the best model. Model refinement processes successfully bolstered the model quality. Virtual screening of the compound library introduced several high-affinity inhibitor candidates that interact with functional residues of ILP2. Given the 3D structure of the ILP2 molecule, we found promising inhibitory molecules. In addition to high affinity towards the ILP2 molecule, these molecules interact with residues that play pivotal roles in the ILP2-caspase interaction. These molecules would inhibit the ILP2-caspase interaction and consequently lead to reactivated cell apoptosis through the caspase pathway.
Development of visual 3D virtual environment for control software
NASA Technical Reports Server (NTRS)
Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence
1991-01-01
Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass broader computer networks than ever, constructing such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are clearly insufficient for programming a large and complicated system that includes many computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment in which one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (the capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse the block diagram (useful for checking relationships among a large number of processes or processors) and the time chart (useful for checking precise timing for synchronization) into a single 3D space. The 3D representation provides a capability for direct and intuitive planning and understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology that enables easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction in programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D environment has considerable potential in the field of software engineering.
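The fusion of block diagram and time chart described above can be made concrete with a tiny data-structure sketch. The following Python fragment is a hypothetical illustration of one possible layout, not the authors' system: each process keeps its (x, y) position from the block-diagram layout, while its events extend along z, which plays the role of the timing chart's time axis.

from dataclasses import dataclass, field

@dataclass
class Process3D:
    """A concurrent process placed in a fused 3D view (assumed layout):
    (x, y) come from the block-diagram layout, while each event extends
    along z, which represents time as in a timing chart."""
    name: str
    x: float
    y: float
    events: list = field(default_factory=list)  # (t_start, t_end, label)

    def add_event(self, t_start, t_end, label):
        self.events.append((t_start, t_end, label))

controller = Process3D("plant_controller", x=0.0, y=0.0)
controller.add_event(0.0, 5.0, "read sensors")
controller.add_event(5.0, 6.0, "sync with supervisor")  # timing visible along z
print(controller)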
Hierarchical Task Network Prototyping In Unity3d
2016-06-01
visually debug. Here we present a solution for prototyping HTNs by extending an existing commercial implementation of Behavior Trees within the Unity3D game ...HTN, dynamic behaviors, behavior prototyping, agent-based simulation, entity-level combat model, game engine, discrete event simulation, virtual...commercial implementation of Behavior Trees within the Unity3D game engine prior to building the HTN in COMBATXXI. Existing HTNs were emulated within
NASA Astrophysics Data System (ADS)
Wang, Hujun; Liu, Jinghua; Zheng, Xu; Rong, Xiaohui; Zheng, Xuwei; Peng, Hongyu; Silber-Li, Zhanghua; Li, Mujun; Liu, Liyu
2015-06-01
Percutaneous coronary intervention (PCI), especially coronary stent implantation, has been shown to be an effective treatment for coronary artery disease. However, in-stent restenosis is one of the longstanding unsolved problems following PCI. Although stents implanted inside narrowed vessels restore normal blood flow, they instantaneously change the wall shear stress (WSS) distribution on the vessel surface. Improper stent implantation positions carry a high risk of restenosis, as they enlarge the low-WSS regions and subsequently stimulate more epithelial cell outgrowth on vessel walls. To optimize the stent position and lower the risk of restenosis, we established a digital three-dimensional (3-D) model based on a real clinical coronary artery and analysed optimal stenting strategies by computational simulation. Via microfabrication and 3-D printing technology, the digital model was also converted into in vitro microfluidic models with 3-D micro channels. Simultaneously, physicians placed real stents inside them; i.e., they performed "virtual surgeries". The hydrodynamic experimental results showed that the microfluidic models closely matched the simulations. Therefore, our study not only demonstrated that the half-cross stenting strategy could maximally reduce restenosis risk but also indicated that 3-D printing combined with clinical image reconstruction is a promising method for future angiocardiopathy research.
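For reference, the wall shear stress referred to above is, for a Newtonian fluid, the product of the dynamic viscosity and the wall-normal gradient of the tangential velocity at the vessel surface:

\tau_w = \mu \left. \frac{\partial u_t}{\partial n} \right|_{\mathrm{wall}},

where μ is the dynamic viscosity of blood (treated as Newtonian in the simplest case), u_t the velocity component tangential to the wall, and n the wall-normal direction. Regions where τ_w falls below a chosen threshold are the low-WSS regions whose extent the stenting strategy is meant to minimize.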
From experimental imaging techniques to virtual embryology.
Weninger, Wolfgang J; Tassy, Olivier; Darras, Sébastien; Geyer, Stefan H; Thieffry, Denis
2004-01-01
Modern embryology increasingly relies on descriptive and functional three-dimensional (3D) and four-dimensional (4D) analysis of physically, optically, or virtually sectioned specimens. To cope with the technical requirements, new methods for highly detailed in vivo imaging, as well as for the generation of high-resolution digital volume data sets for the accurate visualisation of transgene activity and gene product presence in the context of embryo morphology, have recently been developed and are being further refined. These methods profoundly change the scientific applicability, appearance, and style of modern embryo representations. In this paper, we present an overview of the emerging techniques to create, visualise, and administrate embryo representations (databases, digital data sets, 3-4D embryo reconstructions, models, etc.), and discuss the implications of these new methods for the work of modern embryologists, including research, teaching, the selection of specific model organisms, and potential collaborators.
NASA Astrophysics Data System (ADS)
Tavani, Stefano; Corradetti, Amerigo; Billi, Andrea
2016-05-01
Image-based 3D modeling has recently opened the way to the use of virtual outcrop models in geology. An intriguing application of this method involves the production of orthorectified images of outcrops using almost any user-defined point of view, so that photorealistic cross-sections suitable for numerous geological purposes and measurements can be easily generated. These purposes include the accurate quantitative analysis of fault-fold relationships starting from imperfectly oriented and partly inaccessible real outcrops. We applied the method of image-based 3D modeling and orthorectification to a case study from the northern Apennines, Italy, where an incipient extensional fault affecting well-layered limestones is exposed on a 10-m-high, barely accessible cliff. Through a few simple steps, we constructed a high-quality image-based 3D model of the outcrop. In the model, we made a series of measurements including fault and bedding attitudes, which allowed us to derive the bedding-fault intersection direction. We then used this direction as the viewpoint to obtain a distortion-free photorealistic cross-section, on which we measured bed dips and thicknesses as well as fault stratigraphic separations. These measurements allowed us to identify a slight difference (i.e. only 0.5°) between the hangingwall and footwall cutoff angles. We show that the hangingwall strain required to compensate for the upward-decreasing displacement of the fault was accommodated by this 0.5° rotation (i.e. folding) and coeval 0.8% thickening of strata in the hangingwall relative to footwall strata. This evidence is consistent with trishear fault-propagation folding. Our results emphasize the importance of viewpoint in structural geology and therefore the potential of using orthorectified virtual outcrops.
Grasping trajectories in a virtual environment adhere to Weber's law.
Ozana, Aviad; Berman, Sigal; Ganel, Tzvi
2018-06-01
Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to the object's size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements into a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment can differ from those performed in real space and are subject to irrelevant effects of perceptual information. Such an atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the visual and haptic feedback provided. Possible implications of the findings for movement control within robotic and virtual environments are further discussed.
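For reference, Weber's law states that the just-noticeable difference ΔI scales with the magnitude of the stimulus I, so that the Weber fraction is constant:

\frac{\Delta I}{I} = k.

In the grasping literature this is typically tested by asking whether the variability of the maximum grip aperture grows linearly with object size; adherence to Weber's law, as found here for the virtual environment, indicates relative rather than absolute processing of size, whereas real 3D grasping is usually reported to violate it.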
A standardized set of 3-D objects for virtual reality research and applications.
Peeters, David
2018-06-01
The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
The Modeling of Virtual Environment Distance Education
NASA Astrophysics Data System (ADS)
Xueqin, Chang
This research presents a virtual environment that integrates, in a virtual mock-up, the services available on a university campus so that students and teachers in different physical locations can communicate. Advantages of this system include remote access to a variety of services and educational tools, and the representation of real structures and landscapes in an interactive 3D model that helps users locate services and preserves the administrative organization of the university. To that end, the system implements access control for users and an interface that allows the use of existing educational equipment and resources not originally designed for distance education.
Real engineering in a virtual world
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deitz, D.
1995-07-01
VR technology can be thought of as the next point on a continuum that leads from 1-D data (such as the text and numbers on a finite element analysis printout), through 2-D drawings and 3-D solid models to 4-D digital prototypes that eventually will have texture and weight and can be held in one's hand. If it lives up to its potential, VR could become just another tool--like 3-D CAD/CAM systems and FEA software--that can be used to pursue continuous improvements in design and manufacturing processes. For example, VR could help manufacturers reduce the number of prototypes and engineering change orders (ECOs) generated during the product life cycle. Virtual reality could also be used to promote concurrent engineering. Because realistic virtual models are easier to interpret and interrogate than 2-D drawings or even 3-D solid models, they have the potential to simplify design reviews. They could also make it easier for non-engineers (such as salespeople and potential customers) to contribute to the design process. VR technology still has a way to go before it becomes a standard engineering tool, however. Peripheral devices are still being perfected, and engineers seem to agree that the jury's still out on which peripherals are most appropriate for which applications. Further, advanced VR applications are largely confined to research and development departments of large corporations or to public and private research centers. Finally, potential users will have to wait a few years before desktop computers are powerful enough to run such applications--and inexpensive enough to survive a cost-benefit analysis.
Hung, Chun-Chi; Li, Yuan-Ta; Chou, Yu-Ching; Chen, Jia-En; Wu, Chia-Chun; Shen, Hsain-Chung; Yeh, Tsu-Te
2018-05-03
Treating pelvic fractures remains a challenging task for orthopaedic surgeons. We aimed to evaluate the feasibility, accuracy, and effectiveness of three-dimensional (3D) printing technology and computer-assisted virtual surgery for pre-operative planning in anterior ring fractures of the pelvis. We hypothesized that using 3D printing models would reduce operation time and significantly improve the surgical outcomes of pelvic fracture repair. We retrospectively reviewed the records of 30 patients with pelvic fractures treated by anterior pelvic fixation with locking plates (14 patients, conventional locking plate fixation; 16 patients, pre-operative virtual simulation with 3D printing-assisted, pre-contoured locking plate fixation). We compared operative time, instrumentation time, blood loss, and post-surgical residual displacements, as evaluated on X-ray films, among groups. Statistical analyses evaluated significant differences between the groups for each of these variables. The patients treated with the virtual simulation and 3D printing-assisted technique had significantly shorter internal fixation times, shorter surgery duration, and less blood loss (- 57 minutes, - 70 minutes, and - 274 ml, respectively; P < 0.05) than patients in the conventional surgery group. However, the post-operative radiological result was similar between groups (P > 0.05). The complication rate was lower in the 3D printing group (1/16 patients) than in the conventional surgery group (3/14 patients). The 3D simulation and printing technique is an effective and reliable method for treating anterior pelvic ring fractures. With precise pre-operative planning and accurate execution of the procedures, this time-saving approach can provide a more personalized treatment plan, allowing for a safer orthopaedic surgery.
Determinants of Presence in 3D Virtual Worlds: A Structural Equation Modelling Analysis
ERIC Educational Resources Information Center
Chow, Meyrick
2016-01-01
There is a growing body of evidence that feeling present in virtual environments contributes to effective learning. Presence is a psychological state of the user; hence, it is generally agreed that individual differences in user characteristics can lead to different experiences of presence. Despite the fact that user characteristics can play a…
A rapid algorithm for realistic human reaching and its use in a virtual reality system
NASA Technical Reports Server (NTRS)
Aldridge, Ann; Pandya, Abhilash; Goldsby, Michael; Maida, James
1994-01-01
The Graphics Analysis Facility (GRAF) at JSC has developed a rapid algorithm for computing realistic human reaching. The algorithm was applied to GRAF's anthropometrically correct human model and used in a 3D computer graphics system and a virtual reality system. The nature of the algorithm and its uses are discussed.
Innovation in prediction planning for anterior open bite correction.
Almuzian, Mohammed; Almukhtar, Anas; O'Neil, Michael; Benington, Philip; Al Anezi, Thamer; Ayoub, Ashraf
2015-05-01
This study applies recent advances in 3D virtual imaging to the prediction planning of dentofacial deformity correction. Stereo-photogrammetry has been used to create virtual and physical models, which are creatively combined in planning the surgical correction of anterior open bite. The application of these novel methods is demonstrated through the surgical correction of a case.
NASA Astrophysics Data System (ADS)
Chakaveh, Sepideh; Skaley, Detlef; Laine, Patricia; Haeger, Ralf; Maad, Soha
2003-01-01
Today, interactive multimedia educational systems are well established, as they have proved to be useful instruments for enhancing learning. Hitherto, the main difficulty with almost all E-Learning systems lay in the rich-media implementation techniques: each system had to be created individually, since reusing the media, whether only a part or the whole content, was not directly possible and everything had to be assembled by hand. This made E-Learning systems exceedingly expensive to produce, in terms of both time and money. Media-3D, or M3D, is a new platform-independent programming language developed at the Fraunhofer Institute for Media Communication to enable visualisation and simulation of E-Learning multimedia content. M3D is an XML-based language capable of distinguishing 3D models from 3D scenes, as well as handling provisions for animations within the programme. Here we give a technical account of the M3D programming language and briefly describe two specific application scenarios in which M3D is applied to create virtual reality E-Learning content for the training of technical personnel.
Visualization and dissemination of global crustal models on virtual globes
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Pan, Xin; Sun, Jian-zhong
2016-05-01
Global crustal models, such as CRUST 5.1 and its descendants, are very useful in a broad range of geoscience applications. The current method for representing the existing global crustal models relies heavily on dedicated computer programs to read and work with those models. Therefore, it is not well suited to visualizing and disseminating global crustal information to non-geological users. This shortcoming is becoming obvious as more and more people from both academic and non-academic institutions are interested in understanding the structure and composition of the crust. There is a pressing need to provide a modern, universal and user-friendly method to represent and visualize the existing global crustal models. In this paper, we present a systematic framework to easily visualize and disseminate the global crustal structure on virtual globes. Based on crustal information exported from the existing global crustal models, we first create a variety of KML-formatted crustal models with different levels of detail (LODs). These KML-formatted models can then be loaded into a virtual globe for 3D visualization and model dissemination. A Keyhole Markup Language (KML) generator (Crust2KML) is developed to automatically convert crustal information obtained from the CRUST 1.0 model into KML-formatted global crustal models, and a web application (VisualCrust) is designed to disseminate and visualize those models over the Internet. The presented framework and associated implementations can be conveniently exported to other applications to support visualizing and analyzing the Earth's internal structure on both regional and global scales in a 3D virtual-globe environment.
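The conversion step performed by a tool like Crust2KML can be illustrated with a minimal, hypothetical sketch (the real tool's interface is not described in the abstract above, and the helper names here are assumptions): for each 1° x 1° cell of the crustal grid, a KML Placemark is written whose polygon is extruded to a height proportional to the crustal thickness, so that the model can be draped over a virtual globe.

# Minimal, assumed illustration of writing one crustal-grid cell to KML.
# Thickness is exaggerated so the extruded polygon is visible on a virtual globe.

def cell_to_kml(lon, lat, thickness_km, exaggeration=1000.0):
    height_m = thickness_km * exaggeration
    ring = [(lon, lat), (lon + 1, lat), (lon + 1, lat + 1), (lon, lat + 1), (lon, lat)]
    coords = " ".join(f"{x},{y},{height_m}" for x, y in ring)
    return (
        "<Placemark>"
        f"<name>crust cell {lon},{lat}</name>"
        "<Polygon><extrude>1</extrude><altitudeMode>absolute</altitudeMode>"
        f"<outerBoundaryIs><LinearRing><coordinates>{coords}</coordinates>"
        "</LinearRing></outerBoundaryIs></Polygon></Placemark>"
    )

kml_body = cell_to_kml(lon=10.0, lat=45.0, thickness_km=32.5)
document = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
    f"{kml_body}</Document></kml>"
)
with open("crust_cell.kml", "w") as f:
    f.write(document)

Writing one Placemark per grid cell and bundling several such files at different grid resolutions is one straightforward way to obtain the multiple levels of detail mentioned above.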
Samothrakis, S; Arvanitis, T N; Plataniotis, A; McNeill, M D; Lister, P F
1997-11-01
Virtual Reality Modelling Language (VRML) is the start of a new era for medicine and the World Wide Web (WWW). Scientists can use VRML across the Internet to explore new three-dimensional (3D) worlds, share concepts and collaborate in a virtual environment. VRML enables the generation of virtual environments through the use of geometric, spatial and colour data structures to represent 3D objects and scenes. In medicine, researchers often want to interact with scientific data, which in several instances may also be dynamic (e.g. MRI data). This data is often very large and is difficult to visualise. A 3D graphical representation can make the information contained in such large data sets more understandable and easier to interpret. Fast networks and satellites can reliably transfer large data sets from computer to computer. This has led to the adoption of remote tele-working in many applications, including medical applications. Radiology experts, for example, can view and inspect in near real-time a 3D data set acquired from a patient who is in another part of the world. Such technology is destined to improve the quality of life for many people. This paper introduces VRML (including some technical details) and discusses the advantages of VRML in application development.
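To illustrate how VRML encodes 3D objects as geometric and colour data structures, the following Python sketch writes a minimal VRML 2.0 file containing a single coloured IndexedFaceSet; the tetrahedron is only a stand-in for a surface that might be extracted from medical data.

# Minimal sketch: emit a VRML 2.0 file with one coloured IndexedFaceSet.
points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]          # vertex coordinates
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]           # triangle indices

point_str = ", ".join(f"{x} {y} {z}" for x, y, z in points)
index_str = ", ".join(f"{a} {b} {c} -1" for a, b, c in faces)   # -1 terminates each face

vrml = f"""#VRML V2.0 utf8
Shape {{
  appearance Appearance {{
    material Material {{ diffuseColor 0.8 0.6 0.4 }}
  }}
  geometry IndexedFaceSet {{
    coord Coordinate {{ point [ {point_str} ] }}
    coordIndex [ {index_str} ]
  }}
}}
"""

with open("specimen.wrl", "w") as f:
    f.write(vrml)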
Web 3D for Public, Environmental and Occupational Health: Early Examples from Second Life®
Kamel Boulos, Maged N.; Ramloll, Rameshsharma; Jones, Ray; Toth-Cohen, Susan
2008-01-01
Over the past three years (2006–2008), the medical/health and public health communities have shown a growing interest in using online 3D virtual worlds like Second Life® (http://secondlife.com/) for health education, community outreach, training and simulation purposes. 3D virtual worlds are seen as the precursors of ‘Web 3D’, the next major iteration of the Internet that will follow in the coming years. This paper provides a tour of several flagship Web 3D experiences in Second Life®, including Play2Train Islands (emergency preparedness training), the US Centers for Disease Control and Prevention—CDC Island (public health), Karuna Island (AIDS support and information), Tox Town at Virtual NLM Island (US National Library of Medicine - environmental health), and Jefferson’s Occupational Therapy Center. We also discuss the potential and future of Web 3D. These are still early days for 3D virtual worlds, and many of their untapped potentials and affordances remain to be explored as the technology matures and improves over the coming months and years. PMID:19190358
Reverse engineering--rapid prototyping of the skull in forensic trauma analysis.
Kettner, Mattias; Schmidt, Peter; Potente, Stefan; Ramsthaler, Frank; Schrodt, Michael
2011-07-01
Rapid prototyping (RP) comprises a variety of automated manufacturing techniques such as selective laser sintering (SLS), stereolithography, and three-dimensional printing (3DP), which use virtual 3D data sets to fabricate solid forms in a layer-by-layer technique. Despite a growing demand for (virtual) reconstruction models in daily forensic casework, maceration of the skull is frequently assigned to ensure haptic evidence presentation in the courtroom. Owing to the progress in the field of forensic radiology, 3D data sets of relevant cases are usually available to the forensic expert. Here, we present a first application of RP in forensic medicine using computed tomography scans for the fabrication of an SLS skull model in a case of fatal hammer impacts to the head. The report is intended to show that this method fully respects the dignity of the deceased and is consistent with medical ethics but nevertheless provides an excellent 3D impression of anatomical structures and injuries. © 2011 American Academy of Forensic Sciences.
Web3D Technologies in Learning, Education and Training: Motivations, Issues, Opportunities
ERIC Educational Resources Information Center
Chittaro, Luca; Ranon, Roberto
2007-01-01
Web3D open standards allow the delivery of interactive 3D virtual learning environments through the Internet, reaching potentially large numbers of learners worldwide, at any time. This paper introduces the educational use of virtual reality based on Web3D technologies. After briefly presenting the main Web3D technologies, we summarize the…
NASA Astrophysics Data System (ADS)
Yu, Miao; Gu, Qiong; Xu, Jun
2018-02-01
PI3Kα is a promising drug target for cancer chemotherapy. In this paper, we report a strategy combining ligand-based and structure-based virtual screening to identify new PI3Kα inhibitors. First, naïve Bayesian (NB) learning models and a 3D-QSAR pharmacophore model were built based upon known PI3Kα inhibitors. Then, the SPECS library was screened with the best NB model. This resulted in virtual hits, which were validated by matching the structures against the pharmacophore models. The pharmacophore-matched hits were then docked into PI3Kα crystal structures to form ligand-receptor complexes, which were further validated with the Glide-XP program to yield structurally validated hits. These hits were examined in a PI3Kα inhibitory assay. With this screening protocol, ten PI3Kα inhibitors with new scaffolds were discovered, with IC50 values ranging from 0.44 to 31.25 μM. The binding affinities of the most active compounds, 33 and 74, were estimated through molecular dynamics simulations and MM-PBSA analyses.
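The ligand-based step can be pictured with a small Python sketch: a naive Bayesian classifier trained on circular fingerprints of known actives and inactives, then used to rank a screening library. The SMILES strings and labels below are toy placeholders (not the PI3Kα dataset), RDKit and scikit-learn are assumed, and this is not the authors' implementation.

# Sketch of a naive Bayesian ligand-based screening step on Morgan fingerprints.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.naive_bayes import BernoulliNB

def fingerprint(smiles, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

train_smiles = ["CCOc1ccccc1", "c1ccncc1", "CC(=O)Nc1ccc(O)cc1", "CCCCCC"]
train_labels = [1, 1, 0, 0]                    # 1 = active, 0 = inactive (toy labels)

X = np.vstack([fingerprint(s) for s in train_smiles])
model = BernoulliNB().fit(X, train_labels)

library = ["c1ccc2ncccc2c1", "CCO"]            # toy "screening library"
scores = model.predict_proba(np.vstack([fingerprint(s) for s in library]))[:, 1]
for smi, p in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{smi}: predicted P(active) = {p:.2f}")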
NASA Astrophysics Data System (ADS)
Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella
2015-09-01
Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.
Intra-operative 3D imaging system for robot-assisted fracture manipulation.
Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S
2015-01-01
Reduction is a crucial step in the treatment of broken bones. Achieving precise anatomical alignment of the bone fragments is essential for a good, fast healing process. Percutaneous techniques are associated with faster recovery times and lower infection risk. However, deducing the desired reduction position intra-operatively is quite challenging with the currently available technology. The 2D nature of this technology (i.e. the image intensifier) does not provide enough information to the surgeon regarding fracture alignment and rotation, which is actually a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system is able to receive and segment CT scan data of the fracture, generate the 3D models of the bone fragments, and display them on a GUI. A commercial optical tracker was included in the system to track the actual pose of the bone fragments in physical space and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reduction accordingly. The system is evaluated here through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71° (rotational RMSE).
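A hedged sketch of how translational and rotational RMSE between target and achieved fragment poses might be computed (poses given as rotation matrices and translation vectors; the numbers below are toy values, not the study's data):

# Sketch: translational and rotational RMSE between target and achieved fragment poses.
import numpy as np

def translational_rmse(t_target, t_achieved):
    err = np.linalg.norm(np.asarray(t_target) - np.asarray(t_achieved), axis=1)
    return np.sqrt(np.mean(err ** 2))

def rotation_angle_deg(R_target, R_achieved):
    """Angle of the residual rotation R_target^T @ R_achieved, in degrees."""
    R_err = R_target.T @ R_achieved
    cos_theta = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def rotational_rmse(Rs_target, Rs_achieved):
    angles = [rotation_angle_deg(a, b) for a, b in zip(Rs_target, Rs_achieved)]
    return np.sqrt(np.mean(np.square(angles)))

# toy example: two reduction trials
t_target = [[0, 0, 0], [10, 5, 2]]
t_achieved = [[0.8, -0.3, 0.4], [10.5, 5.2, 1.1]]
theta = np.radians(1.5)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
print("translational RMSE [mm]:", translational_rmse(t_target, t_achieved))
print("rotational RMSE [deg]:", rotational_rmse([np.eye(3), np.eye(3)], [Rz, np.eye(3)]))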
Validation of a Parametric Approach for 3d Fortification Modelling: Application to Scale Models
NASA Astrophysics Data System (ADS)
Jacquot, K.; Chevrier, C.; Halin, G.
2013-02-01
The parametric modelling approach applied to the virtual representation of cultural heritage has been explored for years, since it can address many limitations of digitising tools. For example, essential historical sources for fortification virtual reconstructions, such as plans-reliefs, have several shortcomings when they are scanned. To overcome those problems, knowledge-based modelling can be used: knowledge models based on the analysis of the theoretical literature of a specific domain, such as bastioned fortification treatises, can be the cornerstone of a parametric library of fortification components. Implemented in Grasshopper, these components are manually adjusted to the data available (i.e. 3D surveys of plans-reliefs or scanned maps). Most of the fortification area is now modelled and the question of accuracy assessment is raised. A specific method is used to evaluate the accuracy of the parametric components, and the results of the assessment process will allow us to validate the parametric approach. The automation of the adjustment process can then be planned. The virtual model of the fortification is part of a larger project aimed at valorising and disseminating a unique cultural heritage item: the collection of plans-reliefs. As such, knowledge models are precious assets when automation and semantic enhancements are considered.
G2H--graphics-to-haptic virtual environment development tool for PC's.
Acosta, E; Temkin, B; Krummel, T M; Heinrichs, W L
2000-01-01
Existing surgical virtual environments for training and preparation have improved greatly, although mostly in their visual aspects. The incorporation of haptics into virtual reality-based surgical simulations would greatly enhance the sense of realism. To aid the development of haptic surgical virtual environments, we have created a graphics-to-haptics virtual environment development tool, G2H. G2H transforms graphical virtual environments (created or imported) into haptic virtual environments without programming. The G2H capability has been demonstrated using the complex 3D pelvic model of Lucy 2.0, the Stanford Visible Female. The pelvis was made haptic using G2H without any further programming effort.
NASA Astrophysics Data System (ADS)
de Paor, D. G.
2009-12-01
Virtual Field Trips have been around almost as long as the World Wide Web itself, yet virtual explorers do not generally return to their desktops with folders full of virtual hand specimens. Collection of real specimens on field trips for later analysis in the lab (or at least in the pub) has been an important part of classical field geoscience education and research for generations, but concern for the landscape and for preservation of key outcrops from wanton destruction has led to many restrictions. One of the author’s favorite outcrops was recently vandalized, presumably by a geologist who felt the need to bash some of the world’s most spectacular buckle folds with a rock sledge. It is not surprising, therefore, that geologists sometimes leave fragile localities out of field trip itineraries. Once analyzed, most specimens repose in drawers or bins, never to be seen again. Some end up in teaching collections, but recent pedagogical research shows that undergraduate students have difficulty relating specimens both to their collection location and to their ultimate provenance in the lithosphere. Virtual specimens can be created using 3D modeling software and imported into virtual globes such as Google Earth (GE), where they may be linked to virtual field trip stops or restored to their source localities on the paleo-globe. Sensitive localities may be protected by placemark approximation. The GE application program interface (API) has a distinct advantage over the stand-alone GE application when it comes to viewing and manipulating virtual specimens. When instances of the virtual globe are embedded in web pages using the GE plug-in, Collada models of specimens can be manipulated with javascript controls residing in the enclosing HTML, permitting specimens to be magnified, rotated in 3D, and sliced. Associated analytical data may be linked in via javascript, and localities for comparison at various points on the globe referenced by ‘fetching’ KML. Virtual specimens open up new possibilities for distance learning, where the design of effective lab exercises has long been an issue, and they permit independent evaluation of published field research by reviewers who do not have access to the physical field area. Although their creation can be labor intensive, the benefits of virtual specimens for education and research are potentially great. Interactive 3D Specimen of Sierra Granodiorite at Outcrop Location
Dental impressions using 3D digital scanners: virtual becomes reality.
Birnbaum, Nathan S; Aaronson, Heidi B
2008-10-01
The technologies that have made the use of three-dimensional (3D) digital scanners an integral part of many industries for decades have been improved and refined for application to dentistry. Since the introduction of the first dental impressioning digital scanner in the 1980s, development engineers at a number of companies have enhanced the technologies and created in-office scanners that are increasingly user-friendly and able to produce precisely fitting dental restorations. These systems are capable of capturing 3D virtual images of tooth preparations, from which restorations may be fabricated directly (ie, CAD/CAM systems) or fabricated indirectly (ie, dedicated impression scanning systems for the creation of accurate master models). The use of these products is increasing rapidly around the world and presents a paradigm shift in the way in which dental impressions are made. Several of the leading 3D dental digital scanning systems are presented and discussed in this article.
Dark Energy and Dark Matter as w = -1 Virtual Particles and the World Hologram Model
NASA Astrophysics Data System (ADS)
Sarfatti, Jack
2011-04-01
The battle-tested elementary-physics principles of Lorentz invariance, the Einstein equivalence principle, and the boson commutation and fermion anti-commutation rules of quantum field theory explain gravitationally repulsive dark energy as virtual bosons and gravitationally attractive dark matter as virtual fermion-antifermion pairs. The small dark energy density in our past light cone is the reciprocal entropy-area of our future light cone's 2D future event horizon in a Novikov consistent loop in time in our accelerating universe. Yakir Aharonov's "back-from-the-future" post-selected final boundary condition is set at our observer-dependent future horizon; this also explains why the irreversible thermodynamic arrow of time is aligned with the accelerating dark energy expansion of the bulk 3D space interior to our future 2D horizon, which surrounds it as the hologram screen. Seth Lloyd has argued that all surrounding 2D horizon surfaces are pixelated quantum computers projecting interior bulk 3D quanta of volume (Planck area) × √(area of future horizon) as their hologram images in one-to-one correspondence.
NASA Astrophysics Data System (ADS)
Yoon, Jayoung; Kim, Gerard J.
2003-04-01
Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for the various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switching between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Before rendering, objects are conservatively culled from the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
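A minimal Python sketch of the kind of distance-based representation switching described above (3D model at close range, billboard at intermediate range, environment map at far range); the thresholds and the interaction rule are illustrative placeholders, not the WorldToolKit implementation:

# Sketch: choose an object's representation from viewing distance and interaction state.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectLOD:
    model_3d: Optional[str]      # mesh resource name
    billboard: Optional[str]     # pre-rendered sprite
    env_map: Optional[str]       # environment-map patch

def select_representation(obj: ObjectLOD, distance: float, interacting: bool,
                          depth_switch: float = 5.0, billboard_range: float = 50.0) -> str:
    # During interaction a 3D representation is preferred, if it exists.
    if interacting and obj.model_3d:
        return obj.model_3d
    if distance < depth_switch and obj.model_3d:
        return obj.model_3d          # close enough that internal depth is perceived
    if distance < billboard_range and obj.billboard:
        return obj.billboard
    return obj.env_map or obj.billboard or obj.model_3d

statue = ObjectLOD("statue.mesh", "statue.png", "plaza_envmap")
print(select_representation(statue, distance=3.0, interacting=False))    # statue.mesh
print(select_representation(statue, distance=120.0, interacting=False))  # plaza_envmap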
Virtual Australia and New Zealand (VANZ): Creating a piece of Digital Earth
NASA Astrophysics Data System (ADS)
Haines, M.
2014-02-01
VANZ is an Initiative of a wide group of research, government, industry, technology and legal stakeholders in Australia and New Zealand. Its purpose is to broker development of the 'Authorised Virtual World' that brings together 3D Spatial and Building Information Modelling within a proposed new Legal Framework. The aim is to create an 'authoritative' and 'enduring' 3D model of both the 'physical attributes', and 'legal entitlements' relating to every property. This 'authorised virtual world' would be used in all 'property-related' activities - to deliver better, quicker and cheaper outcomes. It would also be used as the context for serious games and to model dynamic processes within the built environment, as well as for emergency response and disaster recovery. Productivity savings across Australia have been estimated at 5 billion pa for the design and construct phases alone. The problem for owners, bankers, insurers, architects, engineers and construction companies, and others, is that they require access to 'authoritative' and detailed 3D data for their own purposes, that must also be securely shared with others up and down the 'property' chain, and over time. All parties also need to know what are the rights, responsibilities and restrictions applying to the data, as well as to the land and buildings that it models. VANZ proposes the creation of a network of Data Banks, to hold the 'authoritative data set', 'in perpetuity', along with the associated software and virtual hardware used to model it. Under the proposal, rights of access to the 'authoritative data' will mirror each person's rights in the property that the data models. As more and more buildings are modelled (inside and out), privacy, security and liability become issues of paramount importance. This paper offers a way for the global community to address these issues. It is targeted at all who have an interest in the practical implementation of Digital Earth for the built environment - including new business opportunities worth billions.
Issues and Challenges of Teaching and Learning in 3D Virtual Worlds: Real Life Case Studies
ERIC Educational Resources Information Center
Pfeil, Ulrike; Ang, Chee Siang; Zaphiris, Panayiotis
2009-01-01
We aimed to study the characteristics and usage patterns of 3D virtual worlds in the context of teaching and learning. To achieve this, we organised a full-day workshop to explore, discuss and investigate the educational use of 3D virtual worlds. Thirty participants took part in the workshop. All conversations were recorded and transcribed for…
Interactive voxel graphics in virtual reality
NASA Astrophysics Data System (ADS)
Brody, Bill; Chappell, Glenn G.; Hartman, Chris
2002-06-01
Interactive voxel graphics in virtual reality poses significant research challenges in terms of interface, file I/O, and real-time algorithms. Voxel graphics is not so new, as it is the focus of a good deal of scientific visualization. Interactive voxel creation and manipulation is a more innovative concept. Scientists are understandably reluctant to manipulate data. They collect or model data. A scientific analogy to interactive graphics is the generation of initial conditions for some model. It is used as a method to test those models. We, however, are in the business of creating new data in the form of graphical imagery. In our endeavor, science is a tool and not an end. Nevertheless, there is a whole class of interactions and associated data generation scenarios that are natural to our way of working and that are also appropriate to scientific inquiry. Annotation by sketching or painting to point to and distinguish interesting and important information is very significant for science as well as art. Annotation in 3D is difficult without a good 3D interface. Interactive graphics in virtual reality is an appropriate approach to this problem.
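As a rough illustration of interactive voxel creation and annotation (not the authors' system), the following Python sketch maintains a dense occupancy grid and paints it with a spherical brush, the kind of primitive operation a VR sketching or annotation interface might expose:

# Sketch: a dense voxel grid with a spherical "brush" for creation/annotation.
import numpy as np

N = 64
voxels = np.zeros((N, N, N), dtype=np.uint8)          # 0 = empty, >0 = label/colour index

def paint_sphere(grid, center, radius, label=1):
    """Set all voxels within `radius` of `center` (in voxel units) to `label`."""
    zz, yy, xx = np.ogrid[:grid.shape[0], :grid.shape[1], :grid.shape[2]]
    cz, cy, cx = center
    mask = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    grid[mask] = label

paint_sphere(voxels, center=(32, 32, 32), radius=10, label=3)   # annotate a region
print("painted voxels:", int(np.count_nonzero(voxels)))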
3D Flow visualization in virtual reality
NASA Astrophysics Data System (ADS)
Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa
2017-11-01
By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
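For readers unfamiliar with the quantity being rendered, a small Python sketch of the Q-criterion computed from a gridded velocity field is given below; the field is synthetic and the code only illustrates the preprocessing that would precede isosurface extraction, not the study's pipeline.

# Sketch: Q-criterion on a uniform grid from velocity components u, v, w.
# Q = 0.5*(||Omega||^2 - ||S||^2), with S and Omega the symmetric and antisymmetric
# parts of the velocity-gradient tensor. The velocity field below is synthetic.
import numpy as np

n, h = 32, 0.1
z, y, x = np.meshgrid(np.arange(n) * h, np.arange(n) * h, np.arange(n) * h, indexing="ij")
u, v, w = -y, x, 0.2 * np.ones_like(x)           # simple rotational field about the z-axis

du = np.gradient(u, h); dv = np.gradient(v, h); dw = np.gradient(w, h)
grad = np.array([du, dv, dw])                     # grad[i][j] = d(u_i)/d(axis_j)

Q = np.zeros_like(u)
for i in range(3):
    for j in range(3):
        S = 0.5 * (grad[i][j] + grad[j][i])
        O = 0.5 * (grad[i][j] - grad[j][i])
        Q += 0.5 * (O ** 2 - S ** 2)

print("max Q:", float(Q.max()))                   # isosurfaces of Q > 0 mark vortical regions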
3D Boolean operations in virtual surgical planning.
Charton, Jerome; Laurentjoye, Mathieu; Kim, Youngjun
2017-10-01
Boolean operations in computer-aided design or computer graphics are a set of operations (e.g. intersection, union, subtraction) between two objects (e.g. a patient model and an implant model) that are important in performing accurate and reproducible virtual surgical planning. This requires accurate and robust techniques that can handle various types of data, such as a surface extracted from volumetric data, synthetic models, and 3D scan data. This article compares the performance of the proposed method (Boolean operations by a robust, exact, and simple method between two colliding shells (BORES)) and an existing method based on the Visualization Toolkit (VTK). In all tests presented in this article, BORES could handle complex configurations as well as report impossible configurations of the input. In contrast, the VTK implementations were unstable, did not deal with singular edges or coplanar collisions, and produced several defects. The proposed method of Boolean operations, BORES, is efficient and appropriate for virtual surgical planning. Moreover, it is simple and easy to implement. In future work, we will extend the proposed method to handle non-colliding components.
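The BORES implementation itself is not reproduced here, but the VTK-based approach used as the comparison baseline can be sketched in a few lines of Python with vtkBooleanOperationPolyDataFilter; the two overlapping spheres below stand in for a patient model and an implant model.

# Sketch of the VTK-based Boolean operation used as the comparison baseline (not BORES):
# intersect two triangulated spheres.
import vtk

def triangulated_sphere(center, radius=1.0):
    src = vtk.vtkSphereSource()
    src.SetCenter(*center)
    src.SetRadius(radius)
    src.SetThetaResolution(32)
    src.SetPhiResolution(32)
    tri = vtk.vtkTriangleFilter()                 # the Boolean filter expects triangle meshes
    tri.SetInputConnection(src.GetOutputPort())
    tri.Update()
    return tri

a = triangulated_sphere((0.0, 0.0, 0.0))
b = triangulated_sphere((0.6, 0.0, 0.0))          # overlapping "implant" stand-in

boolean = vtk.vtkBooleanOperationPolyDataFilter()
boolean.SetOperationToIntersection()              # or SetOperationToUnion / ...Difference
boolean.SetInputConnection(0, a.GetOutputPort())
boolean.SetInputConnection(1, b.GetOutputPort())
boolean.Update()

print("result cells:", boolean.GetOutput().GetNumberOfCells())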
Virtual Application of Darul Arif Palace from Serdang Sultanate using Virtual Reality
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Annisa, T.; Rahmat, R. F.; Muchtar, M. A.
2017-01-01
The Serdang Sultanate is one of the Malay sultanates in Sumatera Utara. In the 18th century, many Malay aristocracies developed in Sumatera Utara. A social revolution occurred in 1946: many sultanates were overthrown, and members of the PKI (Communist Party of Indonesia) carried out mass killings of members of the sultanate families. As a result of this incident, much cultural and historical heritage was destroyed. The integration of heritage preservation and digital technology has become a recent trend. Digital technology is not only able to record and preserve detailed documents and information about heritage, but also effectively brings added value. In this research, polygonal modelling techniques from 3D modelling technology are used to reconstruct the Darul Arif Palace of the Serdang Sultanate. After modelling the palace, the model is combined with virtual reality technology to allow users to explore the palace and the environment around it. Virtual reality technology is the simulation of real objects in a virtual world. The result of this research is a virtual reality application that runs on a head-mounted display.
NASA Astrophysics Data System (ADS)
Fu, Ying; Sun, Yi-Na; Yi, Ke-Han; Li, Ming-Qiang; Cao, Hai-Feng; Li, Jia-Zhong; Ye, Fei
2018-02-01
4-Hydroxyphenylpyruvate dioxygenase (EC 1.13.11.27, HPPD) is an important target for new bleaching herbicides. Therefore, in silico structure-based virtual screening was performed in order to speed up the identification of promising HPPD inhibitors. In this study, an integrated virtual screening protocol combining a 3D-pharmacophore model, molecular docking and molecular dynamics (MD) simulation was established to find novel HPPD inhibitors in four commercial databases. The 3D-pharmacophore model Hypo1 was applied to efficiently narrow down the potential hits. The hit compounds were subsequently submitted to molecular docking studies, which identified four compounds as potent inhibitors whose mechanism involves Fe(II) coordination and interactions with Phe360, Phe403 and Phe398. MD results demonstrated that the nonpolar term made a large contribution to the binding affinity of compound 3881, which showed an IC50 of 2.49 µM against AtHPPD in vitro. The results provide useful information for developing novel HPPD inhibitors and lead to a further understanding of the interaction mechanism of HPPD inhibitors.
3D Elevation Program—Virtual USA in 3D
Lukas, Vicki; Stoker, J.M.
2016-04-14
The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.
Advances in edge-diffraction modeling for virtual-acoustic simulations
NASA Astrophysics Data System (ADS)
Calamia, Paul Thomas
In recent years there has been growing interest in modeling sound propagation in complex, three-dimensional (3D) virtual environments. With diverse applications for the military, the gaming industry, psychoacoustics researchers, architectural acousticians, and others, advances in computing power and 3D audio-rendering techniques have driven research and development aimed at closing the gap between the auralization and visualization of virtual spaces. To this end, this thesis focuses on improving the physical and perceptual realism of sound-field simulations in virtual environments through advances in edge-diffraction modeling. To model sound propagation in virtual environments, acoustical simulation tools commonly rely on geometrical-acoustics (GA) techniques that assume asymptotically high frequencies, large flat surfaces, and infinitely thin ray-like propagation paths. Such techniques can be augmented with diffraction modeling to compensate for the effect of surface size on the strength and directivity of a reflection, to allow for propagation around obstacles and into shadow zones, and to maintain soundfield continuity across reflection and shadow boundaries. Using a time-domain, line-integral formulation of the Biot-Tolstoy-Medwin (BTM) diffraction expression, this thesis explores various aspects of diffraction calculations for virtual-acoustic simulations. Specifically, we first analyze the periodic singularity of the BTM integrand and describe the relationship between the singularities and higher-order reflections within wedges with open angle less than 180°. Coupled with analytical approximations for the BTM expression, this analysis allows for accurate numerical computations and a continuous sound field in the vicinity of an arbitrary wedge geometry insonified by a point source. Second, we describe an edge-subdivision strategy that allows for fast diffraction calculations with low error relative to a numerically more accurate solution. Third, to address the considerable increase in propagation paths due to diffraction, we describe a simple procedure for identifying and culling insignificant diffraction components during a virtual-acoustic simulation. Finally, we present a novel method to find GA components using diffraction parameters that ensures continuity at reflection and shadow boundaries.
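As a rough illustration of the edge-subdivision idea, the Python sketch below clusters quadrature samples around the apex point of the source-edge-receiver path and sums per-element contributions; the 1/(r_s·r_r) kernel is a generic placeholder standing in for the actual BTM integrand, and the clustering scheme is an assumption for illustration only.

# Sketch: non-uniform edge subdivision for a diffraction-style line integral.
# The integrand is a placeholder kernel, NOT the Biot-Tolstoy-Medwin expression.
import numpy as np

def clustered_params(apex_t, n=64, power=3.0):
    """Sample parameters in [0,1], clustered around apex_t."""
    u = np.linspace(-1.0, 1.0, n)
    spread = max(apex_t, 1.0 - apex_t)
    t = apex_t + np.sign(u) * (np.abs(u) ** power) * spread
    return np.clip(np.sort(t), 0.0, 1.0)

def edge_contribution(p0, p1, src, rcv, n=64):
    p0, p1, src, rcv = map(np.asarray, (p0, p1, src, rcv))
    # coarse search for the apex: edge point minimising the source-edge-receiver path
    ts = np.linspace(0.0, 1.0, 512)
    pts = p0[None, :] + ts[:, None] * (p1 - p0)[None, :]
    path = np.linalg.norm(pts - src, axis=1) + np.linalg.norm(pts - rcv, axis=1)
    t = clustered_params(float(ts[np.argmin(path)]), n=n)
    pts = p0[None, :] + t[:, None] * (p1 - p0)[None, :]
    r_s = np.linalg.norm(pts - src, axis=1)
    r_r = np.linalg.norm(pts - rcv, axis=1)
    integrand = 1.0 / (r_s * r_r)                 # placeholder, directivity-free kernel
    return np.trapz(integrand, t * np.linalg.norm(p1 - p0))

print(edge_contribution([0, 0, 0], [0, 0, 2], src=[1, 0, 1], rcv=[0, 1, 1]))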
SutraPrep, a pre-processor for SUTRA, a model for ground-water flow with solute or energy transport
Provost, Alden M.
2002-01-01
SutraPrep facilitates the creation of three-dimensional (3D) input datasets for the USGS ground-water flow and transport model SUTRA Version 2D3D.1. It is most useful for applications in which the geometry of the 3D model domain and the spatial distributions of physical properties and boundary conditions are relatively simple. SutraPrep can be used to create a SUTRA main input (.inp) file, an initial conditions (.ics) file, and a 3D plot of the finite-element mesh in Virtual Reality Modeling Language (VRML) format. Input and output are text-based. The code can be run on any platform that has a standard FORTRAN-90 compiler. Executable code is available for Microsoft Windows.
Fischer, Gerrit; Stadie, Axel; Schwandt, Eike; Gawehn, Joachim; Boor, Stephan; Marx, Juergen; Oertel, Joachim
2009-05-01
The aim of the authors in this study was to introduce a minimally invasive superficial temporal artery to middle cerebral artery (STA-MCA) bypass surgery by the preselection of appropriate donor and recipient branches in a 3D virtual reality setting based on 3-T MR angiography data. An STA-MCA anastomosis was performed in each of 5 patients. Before surgery, 3-T MR imaging was performed with 3D magnetization-prepared rapid acquisition gradient echo sequences, and a high-resolution CT 3D dataset was obtained. Image fusion and the construction of a 3D virtual reality model of each patient were completed. In the 3D virtual reality setting, the skin surface, skull surface, and extra- and intracranial arteries as well as the cortical brain surface could be displayed in detail. The surgical approach was successfully visualized in virtual reality. The anatomical relationship of structures of interest could be evaluated based on different values of translucency in all cases. The closest point of the appropriate donor branch of the STA and the most suitable recipient M3 or M4 segment could be calculated with high accuracy preoperatively and determined as the center point of the following minicraniotomy. Localization of the craniotomy and the skin incision on top of the STA branch was calculated with the system, and these data were transferred onto the patient's skin before surgery. In all cases the preselected arteries could be found intraoperatively in exact agreement with the preoperative planning data. Successful extracranial-intracranial bypass surgery was achieved without stereotactic neuronavigation via a preselected minimally invasive approach in all cases. Subsequent enlargement of the craniotomy was not necessary. Perioperative complications were not observed. All bypasses remained patent on follow-up. With the application of a 3D virtual reality planning system, the extent of skin incision and tissue trauma as well as the size of the bone flap was minimal. The closest point of the appropriate donor branch of the STA and the most suitable recipient M3 or M4 segment could be preoperatively determined with high accuracy so that the STA-MCA bypass could be safely and effectively performed through an optimally located minicraniotomy with a mean diameter of 22 mm without the need for stereotactic guidance.
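A hedged sketch of the closest-approach computation between two vessel centerlines (for example the STA donor branch and a candidate M3/M4 recipient segment) using a KD-tree; the coordinates are synthetic placeholders, not patient data, and this is not the planning system's actual code.

# Sketch: closest approach between two vessel centerlines (point sets in mm).
import numpy as np
from scipy.spatial import cKDTree

sta_branch = np.array([[10.0, 40.0, 25.0], [12.0, 42.0, 26.0], [14.0, 44.5, 27.5]])
recipient = np.array([[13.0, 46.0, 30.0], [15.0, 47.0, 31.0], [17.0, 48.5, 32.0]])

tree = cKDTree(recipient)
dists, idx = tree.query(sta_branch)          # nearest recipient point for each donor point
k = int(np.argmin(dists))

print(f"closest approach: {dists[k]:.1f} mm")
print("donor point:", sta_branch[k], "recipient point:", recipient[idx[k]])
# The midpoint of this closest pair could serve as the centre of the planned minicraniotomy.
print("suggested craniotomy centre:", 0.5 * (sta_branch[k] + recipient[idx[k]]))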
Visualizing Mars Using Virtual Reality: A State of the Art Mapping Technique Used on Mars Pathfinder
NASA Technical Reports Server (NTRS)
Stoker, C.; Zbinden, E.; Blackmon, T.; Nguyen, L.
1999-01-01
We describe an interactive terrain visualization system which rapidly generates and interactively displays photorealistic three-dimensional (3-D) models produced from stereo images. This product, first demonstrated on Mars Pathfinder, is interactive, 3-D, and can be viewed in an immersive display, which qualifies it for the name Virtual Reality (VR). The use of this technology on Mars Pathfinder was the first use of VR for geologic analysis. A primary benefit of using VR to display geologic information is that it provides an improved perception of the depth and spatial layout of the remote site. The VR aspect of the display allows an operator to move freely in the environment, unconstrained by the physical limitations of the perspective from which the data were acquired. Virtual Reality offers a way to archive and retrieve information in a way that is intuitively obvious. Combining VR models with stereo display systems can give the user a sense of presence at the remote location. The capability to interactively perform measurements from within the VR model offers unprecedented ease in performing operations that are normally time consuming and difficult using other techniques. Thus, Virtual Reality can be a powerful cartographic tool. Additional information is contained in the original extended abstract.
Voxel inversion of airborne electromagnetic data
NASA Astrophysics Data System (ADS)
Auken, E.; Fiandaca, G.; Kirkegaard, C.; Vest Christiansen, A.
2013-12-01
Inversion of electromagnetic data usually refers to a model space that is linked to the actual observation points, and for airborne surveys the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space. This means that incorporating the geophysical data into the geological and/or hydrological modelling grids involves a spatial relocation of the models, which in itself is a subtle process in which valuable information is easily lost. The integration of prior information, e.g. from boreholes, is also difficult when the observation points do not coincide with the position of the prior information, as is the joint inversion of airborne and ground-based surveys. We developed a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows geological/hydrogeological models to be informed directly, prior information to be incorporated more easily, and different data types to be integrated straightforwardly in joint inversion. The new voxel model space defines the soil properties (such as resistivity) on a set of nodes, and the distribution of the properties is computed everywhere by means of an interpolation function f (e.g. inverse distance or kriging). The position of the nodes is fixed during the inversion and is chosen to sample the soil taking into account topography and inversion resolution. Given this definition of the voxel model space, both 1D and 2D/3D forward responses can be computed. The 1D forward responses are computed as follows: A) a 1D model subdivision, in terms of model thicknesses and the direction of the "virtual" horizontal stratification, is defined for each 1D data set; for EM soundings the "virtual" horizontal stratification is set up parallel to the topography at the sounding position. B) the "virtual" 1D models are constructed by interpolating the soil properties at the midpoint of the "virtual" layers. For 2D/3D forward responses the algorithm operates similarly, simply filling the 2D/3D meshes of the forward responses by computing the interpolated values at the centres of the mesh cells. The new definition of the voxel model space allows the geophysical information to be incorporated straightforwardly into geological and/or hydrological models, simply by using a voxel (hydro)geological grid to define the geophysical model space. This also simplifies the propagation of the uncertainty of the geophysical parameters into the (hydro)geological models. Furthermore, prior information from boreholes, such as resistivity logs, can be applied directly to the voxel model space, even if the borehole positions do not coincide with the actual observation points; the prior information is constrained to the model parameters through the interpolation function at the borehole locations. The presented algorithm is a further development of the AarhusInv program package developed at Aarhus University (formerly em1dinv), which handles both large-scale AEM surveys and ground-based data. This work has been carried out as part of the HyGEM project, supported by the Danish Council of Strategic Research under grant number DSF 11-116763.
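A minimal Python sketch of the interpolation step, assuming inverse distance weighting as the function f: log-resistivity is interpolated from fixed voxel nodes to the midpoints of a "virtual" 1D layer stack below one sounding. Node positions, values and layer thicknesses are toy numbers, not survey data.

# Sketch: inverse-distance-weighted interpolation from voxel nodes to layer midpoints.
import numpy as np

nodes = np.array([[0.0, 0.0, -5.0], [50.0, 0.0, -5.0], [0.0, 0.0, -25.0], [50.0, 0.0, -25.0]])
log_rho = np.array([1.5, 1.7, 2.0, 2.3])        # log10(resistivity) at the voxel nodes

def idw(points, values, targets, power=2.0, eps=1e-9):
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ values) / w.sum(axis=1)

# "virtual" 1D model below a sounding at x = 20 m: interfaces, then layer midpoints
interfaces = np.array([0.0, -3.0, -8.0, -15.0, -30.0])
mid_depths = 0.5 * (interfaces[:-1] + interfaces[1:])
targets = np.column_stack([np.full_like(mid_depths, 20.0),
                           np.zeros_like(mid_depths), mid_depths])

layer_log_rho = idw(nodes, log_rho, targets)
print("virtual 1D model, log10(rho) per layer:", np.round(layer_log_rho, 2))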
ERIC Educational Resources Information Center
Coffey, Amy Jo; Kamhawi, Rasha; Fishwick, Paul; Henderson, Julie
2017-01-01
Relatively few studies have empirically tested computer-based immersive virtual environments' efficacy in teaching or enhancing pro-social attitudes, such as intercultural sensitivity. This channel study experiment was conducted (N = 159) to compare what effects, if any, an immersive 3D virtual environment would have upon subjects' intercultural…
Attitude and Self-Efficacy Change: English Language Learning in Virtual Worlds
ERIC Educational Resources Information Center
Zheng, Dongping; Young, Michael F.; Brewer, Robert A.; Wagner, Manuela
2009-01-01
This study explored affective factors in learning English as a foreign language in a 3D game-like virtual world, Quest Atlantis (QA). Through the use of communication tools (e.g., chat, bulletin board, telegrams, and email), 3D avatars, and 2D webpage navigation tools in virtual space, nonnative English speakers (NNES) co-solved online…
Speksnijder, L; Rousian, M; Steegers, E A P; Van Der Spek, P J; Koning, A H J; Steensma, A B
2012-07-01
Virtual reality is a novel method of visualizing ultrasound data with the perception of depth and offers possibilities for measuring non-planar structures. The levator ani hiatus has both convex and concave aspects. The aim of this study was to compare levator ani hiatus volume measurements obtained with conventional three-dimensional (3D) ultrasound and with a virtual reality measurement technique, and to establish their reliability and agreement. 100 symptomatic patients visiting a tertiary pelvic floor clinic with a normal, intact levator ani muscle diagnosed on translabial ultrasound were selected. Datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm at the level of minimal hiatal dimensions during contraction. The levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatus volume for conventional 3D ultrasound (in cm³). Levator ani hiatus volumes were then measured semi-automatically in virtual reality (in cm³) using a segmentation algorithm. An intra- and interobserver analysis of reliability and agreement was performed in 20 randomly chosen patients. The mean difference between levator ani hiatus volume measurements performed using conventional 3D ultrasound and virtual reality was 0.10 (95% CI, -0.15 to 0.35) cm³. The intraclass correlation coefficient (ICC) comparing conventional 3D ultrasound with virtual reality measurements was > 0.96. Intra- and interobserver ICCs for conventional 3D ultrasound measurements were > 0.94 and for virtual reality measurements were > 0.97, indicating good reliability for both. Levator ani hiatus volume measurements performed using virtual reality were reliable and the results were similar to those obtained with conventional 3D ultrasonography. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
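A short Python sketch of the conventional volume computation (levator area × 1.5 cm rendered slab) and of a paired comparison against virtual-reality volumes; the numbers are placeholders, not the study data.

# Sketch: conventional-3D volume (area x 1.5 cm) and mean difference vs. VR volumes.
import numpy as np

areas_cm2 = np.array([12.4, 14.1, 11.8, 13.0])        # measured levator areas, cm^2
vol_conventional = areas_cm2 * 1.5                     # cm^3, slab thickness 1.5 cm
vol_vr = np.array([18.7, 21.3, 17.5, 19.8])            # semi-automatic VR volumes, cm^3

diff = vol_conventional - vol_vr
mean_diff = diff.mean()
ci = 1.96 * diff.std(ddof=1) / np.sqrt(len(diff))      # 95% CI of the mean difference

print(f"mean difference: {mean_diff:.2f} cm^3 "
      f"(95% CI {mean_diff - ci:.2f} to {mean_diff + ci:.2f})")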
Crossingham, Jodi L; Jenkinson, Jodie; Woolridge, Nick; Gallinger, Steven; Tait, Gordon A; Moulton, Carol-Anne E
2009-01-01
Background: Given the increasing number of indications for liver surgery and the growing complexity of operations, many trainees in surgical, imaging and related subspecialties require a good working knowledge of the complex intrahepatic anatomy. Computed tomography (CT), the most commonly used liver imaging modality, enhances our understanding of liver anatomy, but comprises a two-dimensional (2D) representation of a complex 3D organ. It is challenging for trainees to acquire the necessary skills for converting these 2D images into 3D mental reconstructions because learning opportunities are limited and internal hepatic anatomy is complicated, asymmetrical and variable. We have created a website that uses interactive 3D models of the liver to assist trainees in understanding the complex spatial anatomy of the liver and to help them create a 3D mental interpretation of this anatomy when viewing CT scans. Methods: Computed tomography scans were imported into DICOM imaging software (OsiriX™) to obtain 3D surface renderings of the liver and its internal structures. Using these 3D renderings as a reference, 3D models of the liver surface and the intrahepatic structures, portal veins, hepatic veins, hepatic arteries and the biliary system were created using 3D modelling software (Cinema 4D™). Results: Using current best practices for creating multimedia tools, a unique, freely available, online learning resource has been developed, entitled Visual Interactive Resource for Teaching, Understanding And Learning Liver Anatomy (VIRTUAL Liver) (http://pie.med.utoronto.ca/VLiver). This website uses interactive 3D models to provide trainees with a constructive resource for learning common liver anatomy and liver segmentation, and facilitates the development of the skills required to mentally reconstruct a 3D version of this anatomy from 2D CT scans. Discussion: Although the intended audience for VIRTUAL Liver consists of residents in various medical and surgical specialties, the website will also be useful for other health care professionals (i.e. radiologists, nurses, hepatologists, radiation oncologists, family doctors) and educators because it provides a comprehensive resource for teaching liver anatomy. PMID:19816618
The study of early human embryos using interactive 3-dimensional computer reconstructions.
Scarborough, J; Aiton, J F; McLachlan, J C; Smart, S D; Whiten, S C
1997-07-01
Tracings of serial histological sections from 4 human embryos at different Carnegie stages were used to create 3-dimensional (3D) computer models of the developing heart. The models were constructed using commercially available software developed for graphic design and the production of computer generated virtual reality environments. They are available as interactive objects which can be downloaded via the World Wide Web. This simple method of 3D reconstruction offers significant advantages for understanding important events in morphological sciences.
[Construction of information management-based virtual forest landscape and its application].
Chen, Chongcheng; Tang, Liyu; Quan, Bing; Li, Jianwei; Shi, Song
2005-11-01
Based on an analysis of the contents and technical characteristics of forest visualization modelling at different scales, this paper puts forward the principles and technical system for constructing an information management-based virtual forest landscape. By combining process modelling with descriptions of tree geometric structure, an interactive, parameterized tree-modelling software method was developed, and the corresponding rendering and geometry-simplification algorithms were described to speed up rendering at run time. As a pilot study, geometric model bases for the typical tree categories of Zhangpu County, Fujian Province, southeast China, were established as template files. A Virtual Forest Management System prototype was developed with a GIS component (ArcObjects), an OpenGL graphics environment and the Visual C++ language, based on forest inventory and remote sensing data. The prototype can be used for roaming between 2D and 3D views, information query and analysis, and virtual, interactive forest-growth simulation, and its realism and accuracy meet the needs of forest resource management. Typical interfaces of the system and illustrative scene cross-sections of simulated masson pine growth under competition and thinning conditions are presented.
NASA Astrophysics Data System (ADS)
Cawood, A.; Bond, C. E.; Howell, J.; Totake, Y.
2016-12-01
Virtual outcrops derived from techniques such as LiDAR and SfM (digital photogrammetry) provide a viable and potentially powerful addition or alternative to traditional field studies, given the large amounts of raw data that can be acquired rapidly and safely. The use of these digital representations of outcrops as a source of geological data has increased greatly in the past decade, and as such, the accuracy and precision of these new acquisition methods applied to geological problems have been addressed by a number of authors. Little work has been done, however, on the integration of virtual outcrops into fundamental structural geology workflows or on systematically studying the fidelity of the data derived from them. Here, we use the classic Stackpole Quay syncline outcrop in South Wales to quantitatively evaluate the accuracy of three virtual outcrop models (LiDAR, aerial and terrestrial digital photogrammetry) compared with data collected directly in the field. Using these structural data, we have built 2D and 3D geological models which make predictions of fold geometries. We examine the fidelity to outcrop geology of virtual outcrops generated using different acquisition techniques, and how this affects model building and final outcomes. Finally, we utilize newly acquired data to deterministically test model validity. Based upon these results, we find that acquisition of digital imagery by UAS (unmanned aircraft system) yields highly accurate virtual outcrops when compared with terrestrial methods, allowing the construction of robust data-driven predictive models. Careful planning, survey design and choice of a suitable acquisition method are, however, of key importance for best results.
Palestro, Pablo; Enrique, Nicolas; Goicoechea, Sofia; Villalba, María Luisa; Sabatier, Laureano Leonel; Martin, Pedro; Milesi, Veronica; Bruno-Blanch, Luis E; Gavernet, Luciana
2018-06-05
The purpose of this investigation is to contribute to the development of new anticonvulsant drugs to treat patients with refractory epilepsy. We applied a virtual screening protocol that involved searching molecular databases of new compounds and known drugs for small molecules that interact with the open conformation of the Nav1.2 pore. As the 3D structure of human Nav1.2 is not available, we first assembled 3D models of the target, in closed and open conformations. After the virtual screening, the resulting candidates were submitted to a second virtual filter, to find compounds with better chances of being effective for the treatment of P-glycoprotein (P-gp) mediated resistant epilepsy. Again, we built a model of the 3D structure of human P-gp and validated the docking methodology selected to propose the best candidates, which were experimentally tested on Nav1.2 channels by patch clamp techniques and in vivo by the MES test. Patch clamp studies allowed us to corroborate that our candidates, drugs used for the treatment of other pathologies such as Ciprofloxacin, Losartan and Valsartan, exhibit inhibitory effects on Nav1.2 channel activity. Additionally, a compound synthesized in our lab, N,N′-diphenethylsulfamide, interacts with the target and also triggers significant Nav1.2 channel inhibitory action. Finally, in vivo studies confirmed the anticonvulsant action of Valsartan, Ciprofloxacin and N,N′-diphenethylsulfamide.
3D virtual human atria: A computational platform for studying clinical atrial fibrillation
Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui
2011-01-01
Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria – 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to the mechanisms of the normal rhythm and AF arrhythmogenesis are investigated and discussed. The 3D model of the atria itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi-scale electrical phenomena during atrial conduction and arrhythmogenesis. Results of such simulations can be directly compared with experimental electrophysiological and endocardial mapping data, as well as clinical ECG recordings. More importantly, the virtual human atria can provide validated means for directly dissecting 3D excitation propagation processes within the atrial walls from an in vivo whole heart, which are beyond the current technical capabilities of experimental or clinical set-ups. PMID:21762716
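The biophysically detailed atrial cell and tissue models used above are far richer than can be shown here, but a generic 2D excitable-medium (FitzHugh-Nagumo) sketch in Python illustrates the kind of reaction-diffusion wave-front conduction such platforms simulate; parameter values are illustrative only.

# Sketch: a generic 2D excitable-medium simulation (FitzHugh-Nagumo), NOT the
# heterogeneous human atrial action potential models used in the study.
import numpy as np

n, dt, dx = 100, 0.05, 1.0
D, a, b, eps = 0.5, 0.1, 0.5, 0.02
u = np.zeros((n, n))            # excitation variable (membrane-potential-like)
w = np.zeros((n, n))            # recovery variable
u[:, :3] = 1.0                  # stimulate the left edge to launch a planar wave

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx ** 2

for step in range(2000):
    du = D * laplacian(u) + u * (1 - u) * (u - a) - w
    dw = eps * (b * u - w)
    u += dt * du
    w += dt * dw

print("fraction of tissue currently excited:", float((u > 0.5).mean()))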
NASA Astrophysics Data System (ADS)
Canciani, M.; Conigliaro, E.; Del Grasso, M.; Papalini, P.; Saccone, M.
2016-06-01
The development of close-range photogrammetry has opened many new possibilities for studying cultural heritage. 3D data acquired with conventional, low-cost cameras can be used to document and investigate the full appearance, materials and conservation status of a monument, to support the restoration process and to identify intervention priorities. At the same time, 3D surveys collect a large amount of three-dimensional data that researchers analyse, but there are very few options for 3D output. Augmented reality is one such possible output, with very low-cost technology but very interesting results. Using simple mobile technology (for iPad and Android tablets) and shareware software (in the case presented, "Augment"), it is possible to share and visualize a large number of 3D models on one's own device. The case study presented is part of an architecture graduate thesis carried out in Rome at the Department of Architecture of Roma Tre University. We developed a photogrammetric survey to study the Aurelian Wall at Castra Praetoria in Rome. The survey of 8000 square metres of surface allowed us to identify the stratigraphy and construction phases of a complex portion of the Aurelian Wall, especially around the northern gate of the Castra. During this study, the data coming out of the 3D survey (photogrammetric and topographic) were stored and used to create a reverse 3D model, or virtual reconstruction, of the northern gate of the Castra. This virtual reconstruction shows the gate in the Tiberian period; nowadays it is totally hidden by a curtain wall, but small and significant architectural details make it possible to recover its original appearance. The 3D model of the ancient walls has been mapped with the exact type of bricks and mortar, and oriented and scaled according to the existing structure for use in augmented reality. Finally, two kinds of application have been developed: one for use on site, where the virtual reconstruction can be seen superimposed on the existing walls using image recognition; the other, created for off-site conditions using a poster, was used to show the results during the graduation day.
Real-time 3D image reconstruction guidance in liver resection surgery.
Soler, Luc; Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-04-01
Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic providing intraoperative augmented reality view. From January 2009 to June 2013, 769 clinical cases have been modeled by the Visible Patient service. Moreover, three clinical validations have been realized demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures have been realized illustrating the potential clinical benefit of such assistance to gain safety, but also current limits that automatic augmented reality will overcome. Virtual patient modeling should be mandatory for certain interventions that have now to be defined, such as liver surgery. Augmented reality is clearly the next step of the new surgical instrumentation but remains currently limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in new generation of automated augmented reality should solve this issue thanks to the development of Hybrid OR.
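One common building block for such registration is rigid point-based alignment of corresponding landmarks via SVD (the Kabsch/Procrustes solution); the Python sketch below is an illustration under that assumption, not the Visible Patient registration pipeline.

# Sketch: rigid point-based registration (Kabsch/SVD) of corresponding landmarks,
# e.g. aligning a preoperative 3D model with intraoperative landmark positions.
import numpy as np

def rigid_register(P, Q):
    """Find R, t minimising ||R @ P_i + t - Q_i|| over corresponding points."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

model_pts = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]]       # landmarks on 3D model
patient_pts = [[1, 2, 3], [11, 2, 3], [1, 12, 3], [1, 2, 13]]     # same landmarks on patient
R, t = rigid_register(model_pts, patient_pts)
print("rotation:\n", np.round(R, 3), "\ntranslation:", np.round(t, 3))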
Three-Dimensional Modeling of Fracture Clusters in Geothermal Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghassemi, Ahmad
The objective of this project is to develop a 3-D numerical model for simulating mode I, II, and III (tensile, shear, and out-of-plane) propagation of multiple fractures and fracture clusters to accurately predict geothermal reservoir stimulation using the virtual multi-dimensional internal bond (VMIB). Effective development of enhanced geothermal systems can significantly benefit from improved modeling of hydraulic fracturing. In geothermal reservoirs, where the temperature can reach or exceed 350 °C, thermal and poro-mechanical processes play an important role in fracture initiation and propagation. In this project hydraulic fracturing of hot subsurface rock mass will be numerically modeled by extending the virtual multiple internal bond theory and implementing it in WARP3D, a three-dimensional finite element code for solid mechanics. The new constitutive model along with the poro-thermoelastic computational algorithms will allow modeling of the initiation and propagation of clusters of fractures, and the extension of pre-existing fractures. The work will enable the industry to realistically model stimulation of geothermal reservoirs. The project addresses the Geothermal Technologies Office objective of accurately predicting geothermal reservoir stimulation (GTO technology priority item). The project goal will be attained by: (i) development of the VMIB method for application to 3D analysis of fracture clusters; (ii) development of poro- and thermoelastic material sub-routines for use in the 3D finite element code WARP3D; (iii) implementation of VMIB and the new material routines in WARP3D to enable simulation of clusters of fractures while accounting for the effects of pore pressure, thermal stress and inelastic deformation; (iv) simulation of 3D fracture propagation and coalescence and formation of clusters, and comparison with laboratory compression tests; and (v) application of the model to the interpretation of injection experiments (planned by our industrial partner) with reference to the impact of variations in injection rate and temperature, rock properties, and in-situ stress.
Structure-Based Virtual Screening for Dopamine D2 Receptor Ligands as Potential Antipsychotics.
Kaczor, Agnieszka A; Silva, Andrea G; Loza, María I; Kolb, Peter; Castro, Marián; Poso, Antti
2016-04-05
Structure-based virtual screening using a D2 receptor homology model was performed to identify dopamine D2 receptor ligands as potential antipsychotics. From screening a library of 6.5 million compounds, 21 were selected and subjected to experimental validation. Of these 21 compounds, ten D2 ligands were identified (a 47.6% success rate; among them D2 receptor antagonists, as expected) that have additional affinity for other receptors tested, in particular 5-HT2A receptors. The affinity (Ki values) of the compounds ranged from 58 nM to about 24 μM. Similarity and fragment analysis indicated a significant degree of structural novelty among the identified compounds. We found one D2 receptor antagonist that did not have a protonatable nitrogen atom, which is a key structural element of the classical D2 pharmacophore model necessary for interaction with the conserved Asp(3.32) residue. This compound exhibited greater than 20-fold binding selectivity for the D2 receptor over the D3 receptor. We provide additional evidence that the amide hydrogen atom of this compound forms a hydrogen bond with Asp(3.32), as determined by tests of its derivatives that cannot maintain this interaction. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2015-07-01
In the field of orthodontic planning, the creation of a complete digital dental model to simulate and predict treatments is of utmost importance. Nowadays, orthodontists use panoramic radiographs (PAN) and dental crown representations obtained by optical scanning. However, these data do not contain any 3D information regarding tooth root geometries. A reliable orthodontic treatment should instead take into account entire geometrical models of dental shapes in order to better predict tooth movements. This paper presents a methodology to create complete 3D patient dental anatomies by combining digital mouth models and panoramic radiographs. The modeling process is based on using crown surfaces, reconstructed by optical scanning, and root geometries, obtained by adapting anatomical CAD templates over patient specific information extracted from radiographic data. The radiographic process is virtually replicated on crown digital geometries through the Discrete Radon Transform (DRT). The resulting virtual PAN image is used to integrate the actual radiographic data and the digital mouth model. This procedure provides the root references on the 3D digital crown models, which guide a shape adjustment of the dental CAD templates. The entire geometrical models are finally created by merging dental crowns, captured by optical scanning, and root geometries, obtained from the CAD templates. Copyright © 2015 Elsevier Ltd. All rights reserved.
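A minimal illustration of the projection idea described above, assuming a simple parallel-beam geometry rather than the curved panoramic trajectory of an actual PAN: a discrete Radon transform computed by rotating a 2D occupancy image of the crowns and summing along columns. The image, angles and all names are illustrative stand-ins; the published method operates on the full 3D crown geometry.

```python
import numpy as np
from scipy.ndimage import rotate

def discrete_radon_transform(image, angles_deg):
    """Parallel-beam discrete Radon transform: one projection row per angle.

    image      -- 2D numpy array (e.g. an axial occupancy map of the crowns)
    angles_deg -- iterable of projection angles in degrees
    Returns an array of shape (len(angles_deg), image.shape[1]).
    """
    sinogram = np.zeros((len(angles_deg), image.shape[1]))
    for i, theta in enumerate(angles_deg):
        # Rotate the scene and integrate along columns (the "ray" direction).
        rotated = rotate(image, theta, reshape=False, order=1)
        sinogram[i] = rotated.sum(axis=0)
    return sinogram

# Toy usage: a synthetic "crown" blob projected over 180 degrees.
y, x = np.mgrid[0:128, 0:128]
crowns = ((x - 64) ** 2 + (y - 80) ** 2 < 15 ** 2).astype(float)
virtual_pan = discrete_radon_transform(crowns, np.arange(0, 180, 2))
print(virtual_pan.shape)  # (90, 128)
```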
New database for improving virtual system “body-dress”
NASA Astrophysics Data System (ADS)
Yan, J. Q.; Zhang, S. C.; Kuzmichev, V. E.; Adolphe, D. C.
2017-10-01
The aim of this exploration is to develop a new database of solid algorithms and relations between dress fit, fabric mechanical properties, and pattern block construction, for improving the realism of the virtual system “body-dress”. In virtual simulation, the system “body-clothing” sometimes shows results distinct from reality, especially when important changes in the pattern block and fabrics are involved. In this research, to enhance the simulation process, diverse fit parameters were proposed: bottom height of the dress, angle of the front center contours, and air volume and its distribution between dress and dummy. Measurements were made and optimized by ruler, camera, 3D body scanner, image processing software and 3D modeling software. In parallel, pattern block indexes were measured and fabric properties were tested by KES. Finally, the correlations and linear regression equations between the indexes of fabric properties, pattern blocks and fit parameters were investigated. In this manner, the new database can be extended into programming modules of virtual design for more realistic results.
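The reported analysis amounts to correlating fabric indexes with fit parameters and fitting regression lines. A minimal sketch with entirely hypothetical values for one KES index and one fit parameter; the variable names and numbers are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical paired observations: a KES bending-rigidity index (x)
# and a measured fit parameter, e.g. air volume between dress and dummy (y).
bending_rigidity = np.array([0.041, 0.055, 0.063, 0.078, 0.090, 0.102])
air_volume_dm3   = np.array([2.10, 2.45, 2.62, 2.98, 3.21, 3.55])

# Pearson correlation and ordinary least-squares line y = a*x + b.
r = np.corrcoef(bending_rigidity, air_volume_dm3)[0, 1]
a, b = np.polyfit(bending_rigidity, air_volume_dm3, deg=1)
print(f"r = {r:.3f}, regression: volume = {a:.2f} * rigidity + {b:.2f}")
```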
The Application of Modeling and Simulation to the Behavioral Deficit of Autism
NASA Technical Reports Server (NTRS)
Anton, John J.
2010-01-01
This abstract describes a research effort to apply technological advances in virtual reality simulation and computer-based games to create behavioral modification programs for individuals with Autism Spectrum Disorder (ASD). The research investigates virtual social skills training within a 3D game environment to diminish the impact of ASD social impairments and to increase learning capacity for optimal intellectual capability. Individuals with autism will encounter prototypical social contexts via computer interface and will interact with 3D avatars with predefined roles within a game-like environment. Incremental learning objectives will combine to form a collaborative social environment. A secondary goal of the effort is to begin the research and development of virtual reality exercises aimed at triggering the release of neurotransmitters to promote critical aspects of synaptic maturation at an early age to change the course of the disease.
Huetteroth, Wolf; el Jundi, Basil; el Jundi, Sirri; Schachtner, Joachim
2009-01-01
During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: new neurons are integrated while larval neurons are remodeled or eliminated. One well acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include for example mushroom bodies, central complex, antennal- and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used a modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope and selected neuropils were reconstructed with the 3D software AMIRA 4.1. PMID:20339481
[Registration technology for mandibular angle osteotomy based on augmented reality].
Zhu, Ming; Chai, Gang; Zhang, Yan; Ma, Xiao-Fei; Yu, Zhe-Yuan; Zhu, Yi-Jia
2010-12-01
To establish an effective path to register the operative plan to the real model of the mandible made by rapid prototyping (RP) technology. Computed tomography (CT) was performed on 20 patients to create 3D images, and computer-aided operation planning information was merged with the 3D images. A dental cast was then used to carry the marker that could be recognized by the software. The dental cast was converted to 3D data with a laser scanner, and a program running on a personal computer, named Rapidform, matched the dental cast and the mandible image to generate the virtual image. The registration was then achieved by a video monitoring system. Using this technology, both the virtual image of the mandible and the cutting planes could be overlaid on the real model of the mandible made by RP. This study established an effective way to perform registration using a dental cast, which may be a powerful option for the registration step of augmented reality. Supported by the Program for Innovation Research Team of Shanghai Municipal Education Commission.
ERIC Educational Resources Information Center
Matsuda, Hiroshi; Shindo, Yoshiaki
2006-01-01
3D computer graphics (3D-CG) animation featuring a speaking virtual actor is very effective as an educational medium. However, it takes a long time to produce a 3D-CG animation. To reduce the cost of producing 3D-CG educational content and improve the capability of the education system, we have developed a new education system using a Virtual Actor.…
NASA Astrophysics Data System (ADS)
Aksenov, A. A.; Danilishin, A. M.; Dubenko, A. M.; Kozhukov, Y. V.
2017-08-01
Design modernization of the centrifugal compressor stage test bench with three-dimensional impeller blades was carried out to enable a series of experimental studies of different 3D impeller models. The studies relate to the problem of the joint operation of the impeller and the stationary channels of the housing when modernization work is carried out with the aim of improving volumetric capacity or pressure in the presence of design constraints. The object of study is an experimental single-end centrifugal compressor stage with a 3D impeller. The compressor stage consists of the 3D impeller, a vaneless diffuser (VLD), an outlet collector (a folded side scroll) and a downstream pipe. The drive is a 75 kW DC motor. A step-up gear (multiplier) with a gear ratio of i = 9.8 was installed between the compressor and the DC motor. To obtain the characteristics of the compressor and the flow area, the following values were measured: total pressure, static pressure, and the direction (angles) of the stream in different cross sections. Additional pneumometric probes were installed on the front wall of the VLD of the test bench, together with total pressure probes and tap holes for the measurement of total and static pressure according to the new drainage scheme. This allowed full experimental studies to be carried out for two elements of the centrifugal compressor stage. After the experimental tests, comprehensive information about the performance of the model stage was obtained. The geometric parameters were measured and a virtual model of the experimental bench flow path was constructed with the help of Creo Parametric 3.0 and ANSYS v16.2. CFD calculations were conducted and verified against the experimental data. The steps for further experimental and virtual work are identified.
Workflow of CAD / CAM Scoliosis Brace Adjustment in Preparation Using 3D Printing.
Weiss, Hans-Rudolf; Tournavitis, Nicos; Nan, Xiaofeng; Borysov, Maksym; Paul, Lothar
2017-01-01
High correction bracing is the most effective conservative treatment for patients with scoliosis during growth. Even today, braces for the treatment of scoliosis are made by casting patients, although computer aided design (CAD) and computer aided manufacturing (CAM) are available with every possibility to standardize pattern-specific brace treatment and improve wearing comfort. CAD / CAM brace production mainly relies on carving a polyurethane foam model, which is the basis for vacuum-forming a polyethylene (PE) or polypropylene (PP) brace. The purpose of this short communication is to describe the workflow currently used and to outline future requirements with respect to 3D printing technology. This paper describes the steps of virtual brace adjustment as available today and outlines the great potential of future 3D printing technology. For 3D printing of scoliosis braces it is necessary to establish easy-to-use software plug-ins in order to add 3D printing technology to the current workflow of virtual CAD / CAM brace adjustment. Textures and structures can be added to the brace models at certain well-defined locations, offering the potential of more wearing comfort without losing in-brace correction. Advances have to be made in the field of CAD / CAM software tools with respect to the design and generation of individually structured brace models based on currently well-established and standardized scoliosis brace libraries.
ARCHAEO-SCAN: Portable 3D shape measurement system for archaeological field work
NASA Astrophysics Data System (ADS)
Knopf, George K.; Nelson, Andrew J.
2004-10-01
Accurate measurement and thorough documentation of excavated artifacts are the essential tasks of archaeological fieldwork. The on-site recording and long-term preservation of fragile evidence can be improved using 3D spatial data acquisition and computer-aided modeling technologies. Once the artifact is digitized and geometry created in a virtual environment, the scientist can manipulate the pieces in a virtual reality environment to develop a "realistic" reconstruction of the object without physically handling or gluing the fragments. The ARCHAEO-SCAN system is a flexible, affordable 3D coordinate data acquisition and geometric modeling system for acquiring surface and shape information of small to medium sized artifacts and bone fragments. The shape measurement system is being developed to enable the field archaeologist to manually sweep the non-contact sensor head across the relic or artifact surface. A series of unique data acquisition, processing, registration and surface reconstruction algorithms are then used to integrate 3D coordinate information from multiple views into a single reference frame. A novel technique for automatically creating a hexahedral mesh of the recovered fragments is presented. The 3D model acquisition system is designed to operate from a standard laptop with minimal additional hardware and proprietary software support. The captured shape data can be pre-processed and displayed on site, stored digitally on a CD, or transmitted via the Internet to the researcher's home institution.
Oshiro, Yukio; Ohkohchi, Nobuhiro
2017-06-01
To perform accurate hepatectomy without injury, it is necessary to understand the anatomical relationship among the branches of Glisson's sheath, the hepatic veins, and the tumor. In Japan, three-dimensional (3D) preoperative simulation for liver surgery is becoming increasingly common, and liver 3D modeling and 3D hepatectomy simulation by 3D analysis software for liver surgery have been covered by universal healthcare insurance since 2012. Herein, we review the history of virtual hepatectomy using computer-assisted surgery (CAS) and our research to date, and we discuss the future prospects of CAS. We have used the SYNAPSE VINCENT medical imaging system (Fujifilm Medical, Tokyo, Japan) for 3D visualization and virtual resection of the liver since 2010. We developed a novel fusion imaging technique combining 3D computed tomography (CT) with magnetic resonance imaging (MRI). The fusion image enables us to easily visualize anatomic relationships among the hepatic arteries, portal veins, bile duct, and tumor in the hepatic hilum. In 2013, we developed an original software application, called Liversim, which enables real-time deformation of the liver using physical simulation, and a randomized controlled trial has recently been conducted to evaluate the use of Liversim and SYNAPSE VINCENT for preoperative simulation and planning. Furthermore, we developed a novel hollow 3D-printed liver model whose surface is covered with frames. This model is useful for safe liver resection, offers better visibility, and its production cost is reduced to one-third that of the previous model. Preoperative simulation and navigation with CAS in liver resection are expected to aid in planning and conducting surgery as well as in surgical education. Thus, a novel CAS system will contribute not only to the performance of reliable hepatectomy but also to surgical education.
ERIC Educational Resources Information Center
Thornton, Bradley D.; Smalley, Robert A.
2008-01-01
Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…
Effects of Presence, Copresence, and Flow on Learning Outcomes in 3D Learning Spaces
ERIC Educational Resources Information Center
Hassell, Martin D.; Goyal, Sandeep; Limayem, Moez; Boughzala, Imed
2012-01-01
The level of satisfaction and effectiveness of 3D virtual learning environments were examined. Additionally, 3D virtual learning environments were compared with face-to-face learning environments. Students that experienced higher levels of flow and presence also experienced more satisfaction but not necessarily more effectiveness with 3D virtual…
Understanding Human Perception of Building Categories in Virtual 3d Cities - a User Study
NASA Astrophysics Data System (ADS)
Tutzauer, P.; Becker, S.; Niese, T.; Deussen, O.; Fritsch, D.
2016-06-01
Virtual 3D cities are becoming increasingly important as a means of visually communicating diverse urban-related information. To get a deeper understanding of a human's cognitive experience of virtual 3D cities, this paper presents a user study on the human ability to perceive building categories (e.g. residential home, office building, building with shops etc.) from geometric 3D building representations. The study reveals various dependencies between geometric properties of the 3D representations and the perceptibility of the building categories. Knowledge about which geometries are relevant, helpful or obstructive for perceiving a specific building category is derived. The importance and usability of such knowledge is demonstrated based on a perception-guided 3D building abstraction process.
Müller, Klaus; Smielik, Ievgen; Hütwohl, Jan-Marco; Gierszewski, Stefanie; Witte, Klaudia; Kuhnert, Klaus-Dieter
2017-02-01
Animal behavior researchers often face problems regarding standardization and reproducibility of their experiments. This has led to the partial substitution of live animals with artificial virtual stimuli. In addition to standardization and reproducibility, virtual stimuli open new options for researchers since they are easily changeable in morphology and appearance, and their behavior can be defined. In this article, a novel toolchain to conduct behavior experiments with fish is presented by a case study in sailfin mollies Poecilia latipinna. As the toolchain holds many different and novel features, it offers new possibilities for studies in behavioral animal research and promotes the standardization of experiments. The presented method includes options to design, animate, and present virtual stimuli to live fish. The designing tool offers an easy and user-friendly way to define size, coloration, and morphology of stimuli and moreover it is able to configure virtual stimuli randomly without any user influence. Furthermore, the toolchain brings a novel method to animate stimuli in a semiautomatic way with the help of a game controller. These created swimming paths can be applied to different stimuli in real time. A presentation tool combines models and swimming paths regarding formerly defined playlists, and presents the stimuli onto 2 screens. Experiments with live sailfin mollies validated the usage of the created virtual 3D fish models in mate-choice experiments.
[Development of a software for 3D virtual phantom design].
Zou, Lian; Xie, Zhao; Wu, Qi
2014-02-01
In this paper, we present a 3D virtual phantom design software package, developed based on object-oriented programming methodology and dedicated to medical physics research. This software, named Magical Phantom (MPhantom), is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful software package for 3D phantom configuration and has passed application tests on real scenes. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and X-ray imaging reconstruction algorithm research.
Kong, Seong-Ho; Haouchine, Nazim; Soares, Renato; Klymchenko, Andrey; Andreiuk, Bohdan; Marques, Bruno; Shabat, Galyna; Piechaud, Thierry; Diana, Michele; Cotin, Stéphane; Marescaux, Jacques
2017-07-01
Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed to real-time images enabling transparency visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account inner structures' deformations. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of pseudo-tumors' location was evaluated with a CT scan in the deformed status (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the estimated tumor by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in in vivo kidney and well visualized in near-infrared mode enabling accurate automatic registration of the virtual model on the laparoscopic images. Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of solid organs' surface to their inner structures including tumors with good accuracy and automatized robust tracking.
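The automatic registration described above couples optical tracking of fluorescent fiducials with FEM-based deformation. A minimal sketch of only the rigid part of such a fiducial-based alignment (a least-squares rotation and translation via SVD, the Kabsch approach) is given below; the biomechanical propagation of the deformation is not shown, and all coordinates are synthetic.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducials onto dst.

    src, dst -- (N, 3) arrays of corresponding fiducial coordinates.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation (det = +1)
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation/translation of six fiducials.
rng = np.random.default_rng(0)
fiducials_ct = rng.uniform(0, 50, size=(6, 3))    # model-space positions (mm)
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
fiducials_cam = fiducials_ct @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_register(fiducials_ct, fiducials_cam)
print(np.allclose(fiducials_ct @ R.T + t, fiducials_cam))  # True
```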
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
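The core rendering idea above, warping a reference CT by a time-variant displacement field, can be sketched in 2D as follows. This is a simplified CPU illustration with a made-up sinusoidal "breathing" field, not the GPU ray-casting pipeline or the patient-specific motion models of the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_by_displacement(reference, disp_y, disp_x):
    """Warp a 2D reference image by a dense displacement field (backward warp).

    reference      -- 2D array (e.g. one slice of the reference CT)
    disp_y, disp_x -- displacement components in voxels, same shape as reference
    """
    yy, xx = np.mgrid[0:reference.shape[0], 0:reference.shape[1]]
    coords = np.array([yy + disp_y, xx + disp_x])   # sample positions per output voxel
    return map_coordinates(reference, coords, order=1, mode="nearest")

# Toy usage: a sinusoidal "breathing" shift applied to a synthetic slice.
slice_ref = np.random.default_rng(1).normal(size=(64, 64))
phase = 0.3                                         # breathing phase in [0, 1)
dy = 2.0 * np.sin(2 * np.pi * phase) * np.ones_like(slice_ref)
dx = np.zeros_like(slice_ref)
warped = warp_by_displacement(slice_ref, dy, dx)
print(warped.shape)
```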
Multicellular Models of Morphogenesis
EPA’s Virtual Embryo project (v-Embryo™), in collaboration with developers of CompuCell3D, aims to create computer models of morphogenesis that can be used to address the effects of chemical perturbation on embryo development at the cellular level. Such computational (in silico) ...
Virtual reality: Avatars in human spaceflight training
NASA Astrophysics Data System (ADS)
Osterlund, Jeffrey; Lawrence, Brad
2012-02-01
With the advancements in high spatial and temporal resolution graphics, along with advancements in 3D display capabilities to model, simulate, and analyze human-to-machine interfaces and interactions, virtual environments are being used to develop everything from gaming, movie special effects and animations to the design of automobiles. The use of multiple-object motion capture technology and digital human tools in aerospace has proven to be a more cost-effective alternative to physical prototypes, provides a more efficient, flexible and responsive environment for changes in design and training, and supports early human factors considerations concerning the operation of a complex launch vehicle or spacecraft. United Space Alliance (USA) has deployed these techniques and tools under Research and Development (R&D) activities on both spacecraft assembly and ground processing operations design and training on the Orion Crew Module. USA utilizes specialized products chosen for their functionality, including software and fixed-base hardware (e.g., infrared and visible-light cameras), along with cyber gloves to capture fine motor dexterity of the hands. The key findings of the R&D were: mock-ups should be built so that they do not obstruct the cameras' view of the markers being tracked; a mock-up toolkit should be assembled to facilitate dynamic design changes; markers should be placed in accurate positions on humans and flight hardware to aid tracking; 3D models used in the virtual environment should be stripped of non-essential data; workstations with high computational capability are required to handle the large model data sets; and Technology Interchange Meetings with vendors and other industries also using virtual reality applications need to occur on a continual basis to enable USA to maintain its leading edge in this technology. Parameters of interest and benefit in human spaceflight simulation training that utilizes virtual reality technologies include familiarizing users with and assessing operational processes, allowing training to be conducted virtually, experimenting with "what if" scenarios, and expediting immediate changes to validate the design implementation. Training benefits encompass 3D animation for post-training assessment; placement of avatars within replicated 3D work environments for assembling or processing hardware; various viewpoints of the processes being viewed and assessed, giving evaluators the ability to judge task feasibility and identify potential support equipment needs; and human factors determinations such as reach, visibility, and accessibility. Multiple-object motion capture technology provides an effective tool to train and assess ergonomic risks, to simulate and detect negative interactions between technicians and their proposed workspaces, and to evaluate spaceflight systems prior to, and as part of, the design process to contain costs and reduce schedule delays.
Intraoperative virtual brain counseling
NASA Astrophysics Data System (ADS)
Jiang, Zhaowei; Grosky, William I.; Zamorano, Lucia J.; Muzik, Otto; Diaz, Fernando
1997-06-01
Our objective is to offer online real-time intelligent guidance to the neurosurgeon. Different from traditional image-guidance technologies that offer intra-operative visualization of medical images or atlas images, virtual brain counseling goes one step further. It can distinguish related brain structures and provide information about them intra-operatively. Virtual brain counseling is the foundation for surgical planning optimization and on-line surgical reference. It can provide a warning system that alerts the neurosurgeon if the chosen trajectory will pass through eloquent brain areas. In order to fulfill this objective, tracking techniques are involved for intra-operativity. Most importantly, a 3D virtual brain environment, different from traditional 3D digitized atlases, is an object-oriented model of the brain that stores information about different brain structures together with their related information. An object-oriented hierarchical hyper-voxel space (HHVS) is introduced to integrate anatomical and functional structures. Spatial queries based on a position of interest, a line segment of interest, and a volume of interest are introduced in this paper. The virtual brain environment is integrated with existing surgical pre-planning and intra-operative tracking systems to provide information for planning optimization and on-line surgical guidance. The neurosurgeon is alerted automatically if the planned treatment affects any critical structures. Architectures such as HHVS and algorithms such as spatial querying, normalizing, and warping are presented in the paper. A prototype has shown that the virtual brain is intuitive in its hierarchical 3D appearance. It also showed that HHVS, as the key structure for virtual brain counseling, efficiently integrates multi-scale brain structures based on their spatial relationships. This is a promising development for the optimization of treatment plans and online intelligent surgical guidance.
ERIC Educational Resources Information Center
Dickey, Michele D.
2005-01-01
Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desk-top interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe…
NASA Astrophysics Data System (ADS)
Soler, Luc; Marescaux, Jacques
2006-04-01
Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics belong to the most revolutionary ones. Our work aims at setting up new techniques for the detection, 3D delineation and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems making tumor resection or treatment easier with the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time long-distance connection between practitioners so they can share the same 3D reconstructed patient and interact on the same patient, virtually before the intervention and in reality during the surgical procedure thanks to a telesurgical robot. In preclinical studies, our first results obtained from a micro-CT scanner show that these technologies provide an efficient and precise 3D modeling of anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility to improve the therapeutic choice thanks to a better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality that provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and check on the virtual patient clone an optimal procedure without errors, which will be replayed on the real patient by the robot under surgeon control. This medical dream is today about to become reality.
Lin, Wei-Shao; Harris, Bryan T; Phasuk, Kamolphob; Llop, Daniel R; Morton, Dean
2018-02-01
This clinical report describes a digital workflow using the virtual smile design approach augmented with a static 3-dimensional (3D) virtual patient with photorealistic appearance to restore maxillary central incisors by using computer-aided design and computer-aided manufacturing (CAD-CAM) monolithic lithium disilicate ceramic veneers. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Inter-algorithm lesion volumetry comparison of real and 3D simulated lung lesions in CT
NASA Astrophysics Data System (ADS)
Robins, Marthony; Solomon, Justin; Hoye, Jocelyn; Smith, Taylor; Ebner, Lukas; Samei, Ehsan
2017-03-01
The purpose of this study was to establish volumetric exchangeability between real and computational lung lesions in CT. We compared the overall relative volume estimation performance of segmentation tools when used to measure real lesions in actual patient CT images and computational lesions virtually inserted into the same patient images (i.e., hybrid datasets). Pathologically confirmed malignancies from 30 thoracic patient cases from Reference Image Database to Evaluate Therapy Response (RIDER) were modeled and used as the basis for the comparison. Lesions included isolated nodules as well as those attached to the pleura or other lung structures. Patient images were acquired using a 16 detector row or 64 detector row CT scanner (Lightspeed 16 or VCT; GE Healthcare). Scans were acquired using standard chest protocols during a single breath-hold. Virtual 3D lesion models based on real lesions were developed in Duke Lesion Tool (Duke University), and inserted using a validated image-domain insertion program. Nodule volumes were estimated using multiple commercial segmentation tools (iNtuition, TeraRecon, Inc., Syngo.via, Siemens Healthcare, and IntelliSpace, Philips Healthcare). Consensus based volume comparison showed consistent trends in volume measurement between real and virtual lesions across all software. The average percent bias (+/- standard error) shows -9.2+/-3.2% for real lesions versus -6.7+/-1.2% for virtual lesions with tool A, 3.9+/-2.5% and 5.0+/-0.9% for tool B, and 5.3+/-2.3% and 1.8+/-0.8% for tool C, respectively. Virtual lesion volumes were statistically similar to those of real lesions (< 4% difference) with p >.05 in most cases. Results suggest that hybrid datasets had similar inter-algorithm variability compared to real datasets.
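For readers who want to reproduce the style of comparison above (percent volume bias and a paired test between real and virtual lesions), here is a minimal sketch with entirely hypothetical volumes; it is not the study's data nor its full consensus-based, multi-tool analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical volume estimates (mm^3) for the same lesions measured as
# real (patient images) and as virtually inserted counterparts.
true_vol    = np.array([512.0, 890.0, 1430.0, 2210.0, 3050.0])
real_est    = np.array([468.0, 820.0, 1335.0, 2075.0, 2840.0])
virtual_est = np.array([482.0, 845.0, 1362.0, 2108.0, 2901.0])

def percent_bias(estimates, truth):
    return 100.0 * (estimates - truth) / truth

pb_real = percent_bias(real_est, true_vol)
pb_virtual = percent_bias(virtual_est, true_vol)
t_stat, p_value = stats.ttest_rel(pb_real, pb_virtual)   # paired comparison
print(f"real bias {pb_real.mean():+.1f}%, virtual bias {pb_virtual.mean():+.1f}%, p = {p_value:.3f}")
```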
Yunus, Mahira
2012-11-01
To study the use of helical computed tomography 2D and 3D images and virtual endoscopy in the evaluation of airway disease in neonates, infants and children, and its value in lesion detection, characterisation and extension. Conducted at Al-Noor Hospital, Makkah, Saudi Arabia, from January 1 to June 30, 2006, the study comprised 40 patients with stridor having various causes of airway obstruction. They were examined by helical CT scan with 2D and 3D reconstructions and virtual endoscopy. The level and characterisation of lesions were determined and the results were compared with actual endoscopic findings. Conventional endoscopy was chosen as the gold standard, and the evaluation of virtual endoscopy was done in terms of sensitivity and specificity of the procedure. For statistical purposes, SPSS version 10 was used. All CT methods detected airway stenosis or obstruction. Accuracy was 98% (n=40) for virtual endoscopy, 96% (n=48) for 3D external rendering, 90% (n=45) for multiplanar reconstructions and 86% (n=43) for axial images. The results of 3D internal and external volume rendering images were closer to conventional endoscopy for the detection and grading of stenosis than 2D minimum intensity multiplanar reconstructions and axial CT slices. Even high-grade stenosis could be evaluated with the virtual endoscope, through which a conventional endoscope cannot be passed. One 4-year-old patient with tracheomalacia could not be diagnosed by helical CT scan and virtual bronchoscopy; the condition was diagnosed on conventional endoscopy and required CT scans in inspiration and expiration. Virtual endoscopy (VE) enabled better assessment of stenosis compared to the reading of 3D external rendering, 2D multiplanar reconstruction (MPR) or axial slices. It can replace conventional endoscopy in the assessment of airway disease without any additional risk.
Buck, Ursula; Naether, Silvio; Braun, Marcel; Thali, Michael
2008-09-18
Non-invasive documentation methods such as surface scanning and radiological imaging are gaining in importance in the forensic field. These three-dimensional technologies provide digital 3D data, which are processed and handled in the computer. However, the sense of touch is lost with the virtual approach. The haptic device enables the use of the sense of touch to handle and feel digital 3D data. The multifunctional application of a haptic device for forensic approaches is evaluated and illustrated in three different cases: the non-invasive representation of bone fractures of the lower extremities caused by traffic accidents; the comparison of bone injuries with the presumed injury-inflicting instrument; and, in a gunshot case, the identification of the gun by the muzzle imprint and the reconstruction of the holding position of the gun. The 3D models of the bones are generated from the Computed Tomography (CT) images. The 3D models of the exterior injuries, the injury-inflicting tools and the bone injuries, where a higher resolution is necessary, are created by optical surface scanning. The haptic device is used in combination with the software FreeForm Modelling Plus to touch the surface of the 3D models, to feel the minute injuries and the surface of tools, to reposition displaced bone parts and to compare an injury-causing instrument with an injury. The repositioning of 3D models in a reconstruction is easier, faster and more precise when using the sense of touch and the user-friendly movement in 3D space. For representation purposes, the fracture lines of bones are coloured. This work demonstrates that the haptic device is a suitable and efficient tool in forensic science. The haptic device offers a new way of handling digital data in virtual 3D space.
Chin, Shih-Jan; Wilde, Frank; Neuhaus, Michael; Schramm, Alexander; Gellrich, Nils-Claudius; Rana, Majeed
2017-12-01
The benefit of computer-assisted planning in orthognathic surgery has been extensively documented over the last decade. This study aims to evaluate the accuracy of a virtual orthognathic surgical plan using a novel three-dimensional (3D) analysis method. Ten patients who required orthognathic surgery were included in this study. A virtual surgical plan was created by combining a 3D skull model acquired from computed tomography (CT), surface scans of the upper and lower dental arches, and the final occlusal position. Osteotomies and movement of the maxilla and mandible were simulated with Dolphin Imaging 11.8 Premium® (Dolphin Imaging and Management Solutions, Chatsworth, CA). The surgical plan was transferred to surgical splints fabricated by means of computer-aided design/computer-aided manufacturing (CAD/CAM). Differences in three-dimensional measurements between the virtual surgical plan and the postoperative results were evaluated. The results for all parameters showed that the virtual surgical plans were successfully transferred with the assistance of the CAD/CAM-fabricated surgical splints. Wilcoxon's signed-rank test showed that no statistically significant deviation between the surgical plan and the postoperative result could be detected. However, the deviations of the U1 axis-HP angle and the A-CP distance did not fulfill the clinical success criteria. Virtual surgical planning and CAD/CAM-fabricated surgical splints are shown to facilitate treatment planning and offer an accurate surgical result in orthognathic surgery. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
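A minimal sketch of the statistical comparison mentioned above (paired planned versus postoperative measurements evaluated with Wilcoxon's signed-rank test), using hypothetical values rather than the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (mm) of one cephalometric distance
# in the virtual plan and in the postoperative result for ten patients.
planned  = np.array([2.1, 3.4, 1.8, 4.0, 2.6, 3.1, 2.9, 1.5, 3.8, 2.2])
achieved = np.array([2.3, 3.1, 1.9, 4.3, 2.4, 3.3, 2.7, 1.6, 3.6, 2.5])

stat, p = stats.wilcoxon(planned, achieved)    # paired, non-parametric test
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")  # p > 0.05 -> no significant deviation
```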
Virtual Surgery for Conduit Reconstruction of the Right Ventricular Outflow Tract.
Ong, Chin Siang; Loke, Yue-Hin; Opfermann, Justin; Olivieri, Laura; Vricella, Luca; Krieger, Axel; Hibino, Narutoshi
2017-05-01
Virtual surgery involves the planning and simulation of surgical reconstruction using three-dimensional (3D) modeling based upon individual patient data, augmented by simulation of planned surgical alterations including implantation of devices or grafts. Here we describe a case in which virtual cardiac surgery aided us in determining the optimal conduit size to use for the reconstruction of the right ventricular outflow tract. The patient is a young adolescent male with a history of tetralogy of Fallot with pulmonary atresia, requiring right ventricle-to-pulmonary artery (RV-PA) conduit replacement. Utilizing preoperative magnetic resonance imaging data, virtual surgery was undertaken to construct his heart in 3D and to simulate the implantation of three different sizes of RV-PA conduit (18, 20, and 22 mm). Virtual cardiac surgery allowed us to predict the ability to implant a conduit of a size that would likely remain adequate in the face of continued somatic growth and also allow for the possibility of transcatheter pulmonary valve implantation at some time in the future. Subsequently, the patient underwent uneventful conduit change surgery with implantation of a 22-mm Hancock valved conduit. As predicted, the intrathoracic space was sufficient to accommodate the relatively large conduit size without geometric distortion or sternal compression. Virtual cardiac surgery gives surgeons the ability to simulate the implantation of prostheses of different sizes in relation to the dimensions of a specific patient's own heart and thoracic cavity in 3D prior to surgery. This can be very helpful in predicting optimal conduit size, determining appropriate timing of surgery, and patient education.
3D imaging, 3D printing and 3D virtual planning in endodontics.
Shah, Pratik; Chong, B S
2018-03-01
The adoption and adaptation of recent advances in digital technology, such as three-dimensional (3D) printed objects and haptic simulators, in dentistry have influenced teaching and/or management of cases involving implant, craniofacial, maxillofacial, orthognathic and periodontal treatments. 3D printed models and guides may help operators plan and tackle complicated non-surgical and surgical endodontic treatment and may aid skill acquisition. Haptic simulators may assist in the development of competency in endodontic procedures through the acquisition of psycho-motor skills. This review explores and discusses the potential applications of 3D printed models and guides, and haptic simulators in the teaching and management of endodontic procedures. An understanding of the pertinent technology related to the production of 3D printed objects and the operation of haptic simulators are also presented.
Aubry, S; Pousse, A; Sarliève, P; Laborie, L; Delabrousse, E; Kastler, B
2006-11-01
To model vertebrae in 3D to improve radioanatomic knowledge of the spine, including its vascular and nerve environment, and to simulate CT-guided interventions. Vertebra acquisitions were made with multidetector CT. We developed segmentation software and specific viewer software using the Delphi programming environment. The segmentation software makes it possible to model high-resolution 3D segments of vertebrae and their environment from multidetector CT acquisitions. The specific viewer software then provides multiplanar reconstructions of the CT volume and the possibility to select different 3D objects of interest. This software package improves radiologists' radioanatomic knowledge through a new 3D presentation of anatomy. Furthermore, the possibility of inserting virtual 3D objects into the volume can simulate CT-guided intervention. The first volumetric radioanatomic software package has thus been created. Furthermore, it simulates CT-guided intervention and consequently has the potential to facilitate the learning of interventions using CT guidance.
NASA Astrophysics Data System (ADS)
Moussaoui, H.; Debayle, J.; Gavet, Y.; Delette, G.; Hubert, M.; Cloetens, P.; Laurencin, J.
2017-03-01
A strong correlation exists between the performance of Solid Oxide Cells (SOCs), working either in fuel cell or electrolysis mode, and their electrode microstructure. However, the basic relationships between the three-dimensional characteristics of the microstructure and the electrode properties are still not precisely understood. Thus, several studies have recently been proposed in an attempt to improve the knowledge of such relations, which are essential before optimizing the microstructure and, hence, designing more efficient SOC electrodes. In that frame, an original model has been adapted to generate virtual 3D microstructures of typical SOC electrodes. Both the oxygen electrode, made of porous LSCF, and the hydrogen electrode, made of porous Ni-YSZ, have been studied. In this work, the synthetic microstructures are generated by the so-called 3D Gaussian random field model. The morphological representativeness of the virtual porous media has been validated on real 3D electrode microstructures of a commercial cell, obtained by X-ray nano-tomography at the European Synchrotron Radiation Facility (ESRF). This validation step includes the comparison of morphological parameters such as the phase covariance function and granulometry, as well as physical parameters such as the 'apparent tortuosity'. Finally, this validated tool will be used, in forthcoming studies, to identify the optimal microstructure of SOCs.
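A minimal sketch of a thresholded Gaussian random field of the general kind referred to above: white noise is smoothed with a Gaussian kernel and thresholded at the quantile matching a target porosity. The actual model in the paper is calibrated against measured covariance functions and handles multi-phase Ni-YSZ composites, which this toy two-phase version does not.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_random_field_phase(shape, correlation_length, porosity, seed=0):
    """Two-phase (pore/solid) volume from a thresholded Gaussian random field.

    shape              -- voxel grid, e.g. (128, 128, 128)
    correlation_length -- smoothing length in voxels (controls feature size)
    porosity           -- target pore volume fraction in [0, 1]
    """
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal(shape), sigma=correlation_length)
    threshold = np.quantile(field, porosity)   # hits the target volume fraction
    return field < threshold                   # True = pore, False = solid

pores = gaussian_random_field_phase((64, 64, 64), correlation_length=3.0, porosity=0.35)
print(f"achieved porosity: {pores.mean():.3f}")
```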
NASA Astrophysics Data System (ADS)
Yang, W. B.; Yen, Y. N.; Cheng, H. M.
2015-08-01
The integration of heritage preservation and digital technology is an important international trend in the 21st century. Digital technology is not only able to record and preserve detailed documents and information about heritage completely, but also effectively brings value-added features. In this study, 3D laser scanning is used to produce digital archives of the interior and exterior of the building, integrating 3D scanner technology, mobile scanning collaboration, and multisystem reverse modeling and integration technology. The 3D model is built at real scale, combined with multimedia presentations and reverse modeling, to perform a virtual reality (VR) simulation. Interactive teaching and augmented reality presentation provide the interaction technology that extends and continuously updates traditional architectural information. With the upgrade of technology and the added value of digitalization, the cultural asset can be experienced through 3D virtual reality, which moves information presentation from traditional reading toward user operation with sensory experience, and the continued exploration of digital technology for cultural asset preservation moves the presentation and learning of cultural asset information toward diversification.
NASA Astrophysics Data System (ADS)
Herbuś, K.; Ociepka, P.
2016-08-01
The development of methods of computer-aided design and engineering allows conducting virtual tests, among others concerning motion simulation of technical means. The paper presents a method of integrating an object in the form of a virtual model of a Stewart platform with an avatar of a vehicle moving in a virtual environment. The problem area includes issues related to the fidelity of mapping the operation of the analyzed technical mean. The main object of investigation is a 3D model of a Stewart platform, which is a subsystem of a simulator designed for driving instruction for disabled persons. The analyzed model of the platform, prepared for motion simulation, was created in the “Motion Simulation” module of the CAD/CAE class system Siemens PLM NX, whereas the virtual environment, in which the avatar of the passenger car moves, was elaborated in the VR class system EON Studio. The element integrating both of the mentioned software environments is a developed application that reads information from the virtual reality (VR) scene concerning the current position of the car avatar. Then, based on the adopted algorithm, it sends control signals to the respective joints of the model of the Stewart platform (CAD).
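The coupling described above ultimately drives the joints of the platform model from the avatar's pose. A minimal sketch of the standard inverse-kinematics step involved (actuator lengths of a generic 6-DOF Stewart platform for a commanded pose), with made-up anchor geometry; the actual data exchange between EON Studio and the NX Motion Simulation module is not shown.

```python
import numpy as np

def stewart_leg_lengths(base_pts, plat_pts, translation, rpy):
    """Actuator lengths of a 6-DOF Stewart platform for a commanded pose.

    base_pts, plat_pts -- (6, 3) anchor points in the base / platform frames
    translation        -- (3,) platform origin expressed in the base frame
    rpy                -- roll, pitch, yaw of the platform (radians)
    """
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    top = translation + plat_pts @ R.T              # platform anchors in base frame
    return np.linalg.norm(top - base_pts, axis=1)   # one length per actuator

# Toy geometry: anchors on two circles, platform lifted 0.5 m and rolled 3 degrees.
ang = np.deg2rad(np.arange(0, 360, 60))
base = np.c_[0.6 * np.cos(ang), 0.6 * np.sin(ang), np.zeros(6)]
plat = np.c_[0.4 * np.cos(ang + 0.3), 0.4 * np.sin(ang + 0.3), np.zeros(6)]
print(stewart_leg_lengths(base, plat, np.array([0, 0, 0.5]), (np.deg2rad(3), 0, 0)))
```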
How to Make a Virtual Landscape with Outcrops for Use in Geoscience Teaching
NASA Astrophysics Data System (ADS)
Houghton, J.; Gordon, C.; Craven, B.; Robinson, A.; Lloyd, G. E. E.; Morgan, D. J.
2016-12-01
We are using screen-based virtual reality landscapes to augment the teaching of basic geological field skills and to enhance 3D visualisation skills. Here we focus on the processes of creating these landscapes, both imagined and real, in the Unity 3D game engine. The virtual landscapes are terrains with embedded data for mapping exercises, or draped geological maps for understanding the 3D interaction of the geology with the topography. The nature of the landscapes built depends on the learning outcomes of the intended teaching exercise. For example, a simple model of two hills and a valley over which to drape a series of different geological maps can be used to enhance the understanding of the 3D interaction of the geology with the topography. A more complex topography reflecting the underlying geology can be used for geological mapping exercises. The process starts with a contour image or DEM, which needs to be converted into RAW files to be imported into Unity. Within Unity itself, there are a series of steps needed to create a world around the terrain (the setting of cameras, lighting, skyboxes etc) before the terrain can be painted with vegetation and populated with assets or before a splatmap of the geology can be added. We discuss how additional features such as a GPS unit or compass can be included. We are also working to create landscapes based on real localities, both in response to the demand for greater realism and to support students unable to access the field due to health or mobility issues. This includes adding 3D photogrammetric images of outcrops into the worlds. This process uses the open source/freeware tools VisualSFM and MeshLab to create files suitable to be imported into Unity. This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all our virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.
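A minimal sketch of the DEM-to-RAW conversion step mentioned above, assuming Unity's terrain importer is fed a normalized 16-bit little-endian heightmap; the output filename is arbitrary, and resampling to a power-of-two-plus-one resolution and no-data handling are left out for brevity.

```python
import numpy as np

def dem_to_unity_raw(dem, out_path="terrain.raw"):
    """Write a DEM (2D float array of elevations) as a 16-bit RAW heightmap.

    Heights are normalized to [0, 1] before quantization to unsigned 16-bit
    values, written in little-endian byte order.
    """
    dem = np.asarray(dem, dtype=np.float64)
    span = max(dem.max() - dem.min(), 1e-12)
    heights = np.round((dem - dem.min()) / span * 65535).astype("<u2")
    heights.tofile(out_path)
    return heights.shape

# Toy usage: a synthetic two-hills-and-a-valley surface on a 513 x 513 grid.
y, x = np.mgrid[0:513, 0:513] / 512.0
dem = 80 * np.exp(-((x - 0.3) ** 2 + (y - 0.5) ** 2) / 0.02) \
    + 60 * np.exp(-((x - 0.7) ** 2 + (y - 0.5) ** 2) / 0.03)
print(dem_to_unity_raw(dem))
```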
Virtual Vents: A Microbathymetrical Survey of the Niua South Hydrothermal Field, NE Lau Basin, Tonga
NASA Astrophysics Data System (ADS)
Kwasnitschka, T.; Köser, K.; Duda, A.; Jamieson, J. W.; Boschen, R.; Gartman, A.; Hannington, M. D.; Funganitao, C.
2016-12-01
At a diameter of 200 m, the 1100 m deep Niua South hydrothermal field (NE Lau Basin) was studied in an interdisciplinary approach during the SOI-funded Virtual Vents cruise in March 2016. Building on a previously generated 50 cm resolution AUV multibeam map, the project's backbone is formed by a fully color-textured, 5 cm resolution photogrammetric 3D model. Several hundred smaller chimneys and about 15 chimneys larger than 3 m were surveyed, including their basal mounds and the surrounding environment interconnecting them. This model was populated through exhaustive geological, biological and fluid sampling as well as continuous Eh measurements, forming the basis for highly detailed geological, structural and biological studies resulting in 3D maps of the entire field. At a reasonable effort, such surveys form the basis for repeated time-series analysis and have the potential to become a new standard in seafloor monitoring.
Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars
Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho
2015-01-01
In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
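A minimal sketch of the dynamic time warping idea underlying the gesture matching described above, reduced to 1D feature sequences; the real-time segmentation, multi-axis sensor features and quaternion filtering of the paper are omitted, and the toy gestures are invented.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy usage: the same "shake" gesture performed at two different speeds still
# matches better than an unrelated "swipe" gesture.
t = np.linspace(0, 1, 50)
shake_fast = np.sin(2 * np.pi * 5 * t)
shake_slow = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 80))
swipe = np.linspace(-1, 1, 60)
print(dtw_distance(shake_fast, shake_slow) < dtw_distance(shake_fast, swipe))  # True
```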
Distance Learning for Students with Special Needs through 3D Virtual Learning
ERIC Educational Resources Information Center
Laffey, James M.; Stichter, Janine; Galyen, Krista
2014-01-01
iSocial is a 3D Virtual Learning Environment (3D VLE) to develop social competency for students who have been identified with High-Functioning Autism Spectrum Disorders. The motivation for developing a 3D VLE is to improve access to special needs curriculum for students who live in rural or small school districts. The paper first describes a…
Combining 3D structure of real video and synthetic objects
NASA Astrophysics Data System (ADS)
Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon
1998-04-01
This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in the fields of advanced animation, virtual reality, games, and so forth. Computer graphics has previously been used in these fields. Recently, some applications have added real video to graphic scenes for the purpose of augmenting the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the exclusive use of computer graphics. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map. Graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive 3D structure from test image sequences. The extraction of the 3D structure requires the estimation of depth and the construction of a height map. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, step (3) is easily performed. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
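Step (2) above meshes the height map before texture mapping. A minimal sketch of a Delaunay triangulation over sparse height samples, with synthetic values standing in for the depths that would be estimated from the monocular sequence.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical sparse height map: (x, y) sample positions and recovered heights.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(200, 2))                       # image-plane samples
z = 10 * np.sin(xy[:, 0] / 20) + 5 * np.cos(xy[:, 1] / 15)    # stand-in heights

tri = Delaunay(xy)                 # 2D Delaunay triangulation on the (x, y) plane
vertices_3d = np.c_[xy, z]         # each triangle becomes a planar, texturable facet
print(f"{len(tri.simplices)} triangles over {len(vertices_3d)} vertices")
```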
Design of virtual display and testing system for moving mass electromechanical actuator
NASA Astrophysics Data System (ADS)
Gao, Zhigang; Geng, Keda; Zhou, Jun; Li, Peng
2015-12-01
To address the problems of control, measurement, and virtual display of the movement of a moving mass electromechanical actuator (MMEA), a virtual testing system for the MMEA was developed based on the PC-DAQ architecture and the LabVIEW software platform. It accomplishes comprehensive test tasks such as drive control of the MMEA, kinematic parameter tests, measurement of the centroid position, and virtual display of the movement. The system solves the alignment of acquisition times between multiple measurement channels in different DAQ cards; on this basis, the research focused on dynamic 3D virtual display in LabVIEW, and the virtual display of the MMEA was realized both by calling DLLs and by using 3D graph drawing controls. Considering the collaboration with the virtual testing system, including the hardware drivers and the data-acquisition measurement software, the 3D graph drawing controls method was selected, which enables synchronized measurement, control, and display. The system can measure the dynamic centroid position and kinematic position of the movable mass block while controlling the MMEA, and the 3D virtual display interface has a realistic appearance and smooth motion, solving the problem of displaying and playing back the motion of the MMEA inside its closed shell.
Development and comparison of projection and image space 3D nodule insertion techniques
NASA Astrophysics Data System (ADS)
Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan
2016-04-01
This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. Twenty-four physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques: projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and the R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to those of the real nodules (<3% difference), and in most cases the differences were not statistically significant. Also, R2 values were all >0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules into CT datasets, and they can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
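The statistical comparison described above can be reproduced in outline with SciPy; the sketch below uses synthetic volume data in place of the study's measurements, so the printed numbers are illustrative only.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    physical = rng.uniform(50.0, 500.0, size=24)             # volumes of physically inserted nodules
    virtual = physical * rng.normal(1.0, 0.02, size=24)      # virtually inserted, ~2% scatter

    t_stat, p_value = stats.ttest_rel(virtual, physical)     # paired t-test on the volume pairs
    slope, intercept, r_value, p_lr, stderr = stats.linregress(physical, virtual)
    print(f"paired t-test p = {p_value:.3f}, R^2 = {r_value ** 2:.3f}")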
Developing a Virtual Museum for the Ancient Wine Trade in Eastern Mediterranean
NASA Astrophysics Data System (ADS)
Kazanis, S.; Kontogianni, G.; Chliverou, R.; Georgopoulos, A.
2017-08-01
Digital technologies for representing cultural heritage assets of any size are already maturing. Technological progress has greatly enhanced the art of virtual representation and, as a consequence, it is all the more appealing to the general public and especially to younger generations. The game industry has played a significant role towards this end and has led to the development of edutainment applications. The digital workflow implemented for developing such an application is presented in this paper. A virtual museum has been designed and developed, with the intention to convey the history of trade in the Eastern Mediterranean area, focusing on the Aegean Sea and five productive cities-ports, during a period of more than 500 years. Image based modeling methodology was preferred to ensure accuracy and reliability. The setup in the museum environment, the difficulties encountered and the solutions adopted are discussed, while processing of the images and the production and finishing of the 3D models are described in detail. The virtual museum and edutainment application, MEDWINET, has been designed and developed with the intention to convey the essential information of the wine production and trade routes in the Eastern Mediterranean basin. The user is able to examine the 3D models of the amphorae, while learning about their production and use for trade during the centuries. The application has been evaluated and the results are also discussed.
Full Immersive Virtual Environment Cave[TM] in Chemistry Education
ERIC Educational Resources Information Center
Limniou, M.; Roberts, D.; Papadopoulos, N.
2008-01-01
By comparing two-dimensional (2D) chemical animations designed for a computer desktop with three-dimensional (3D) chemical animations designed for the fully immersive virtual reality environment CAVE[TM], we studied how virtual reality environments could raise students' interest and motivation for learning. By using 3ds max[TM], we can visualize…
A Collaborative Virtual Environment for Situated Language Learning Using VEC3D
ERIC Educational Resources Information Center
Shih, Ya-Chun; Yang, Mau-Tsuen
2008-01-01
A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…
Social Presence and Motivation in a Three-Dimensional Virtual World: An Explanatory Study
ERIC Educational Resources Information Center
Yilmaz, Rabia M.; Topu, F. Burcu; Goktas, Yuksel; Coban, Murat
2013-01-01
Three-dimensional (3-D) virtual worlds differ from other learning environments in their similarity to real life, providing opportunities for more effective communication and interaction. With these features, 3-D virtual worlds possess considerable potential to enhance learning opportunities. For effective learning, the users' motivation levels and…
Ferraz, Eduardo Gomes; Andrade, Lucio Costa Safira; dos Santos, Aline Rode; Torregrossa, Vinicius Rabelo; Rubira-Bullen, Izabel Regina Fischer; Sarmento, Viviane Almeida
2013-12-01
The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles produced from two segmentation protocols ("outline only" and "all-boundary lines"). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, in which linear measurements between anatomical landmarks were obtained and compared at a 5% significance level. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from the segmentation protocols tested (p = 0.24). During the design of a virtual 3D reconstruction, either the "outline only" or the "all-boundary lines" segmentation protocol can therefore be used. Virtual processing of CT images is the most complex stage in the manufacture of a biomodel. Establishing a better protocol during this phase allows the construction of a biomodel with characteristics that are closer to the original anatomical structures, which is essential to ensure correct preoperative planning and suitable treatment.
Virtual performer: single camera 3D measuring system for interaction in virtual space
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-10-01
The authors developed interaction media systems in 3D virtual space. In these systems, the musician virtually plays an instrument, such as a theremin, in the virtual space, or the performer puts on a show using a virtual character such as a puppet. This interactive virtual media system consists of image capture, measurement of the performer's position, detection and recognition of motions, and synthesis of the video image on a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, this paper describes a method for measuring the positions of the performer, his/her head, and both eyes using a single camera.
Mesoscopic Rigid Body Modelling of the Extracellular Matrix Self-Assembly.
Wong, Hua; Prévoteau-Jonquet, Jessica; Baud, Stéphanie; Dauchez, Manuel; Belloy, Nicolas
2018-06-11
The extracellular matrix (ECM) plays an important role in supporting tissues and organs. It even has a functional role in morphogenesis and differentiation by acting as a source of active molecules (matrikines). Many diseases are linked to dysfunction of ECM components and fragments or to changes in their structures. As such, it is a prime target for drugs. Because of technological limitations on observations at mesoscopic scales, the precise structural organisation of the ECM is not well known, with sparse or fuzzy experimental observables. Based on the Unity3D game and physics engines, along with rigid body dynamics, we propose a virtual sandbox to model large biological molecules as dynamic chains of rigid bodies interacting together, to gain insight into ECM component behaviour in the mesoscopic range. We have preliminary results showing how parameters such as fibre flexibility or the nature and number of interactions between molecules can induce different structures in the basement membrane. Using the Unity3D game engine and a virtual reality headset coupled with haptic controllers, we immerse the user inside the corresponding simulation. Untrained users are able to navigate a complex virtual sandbox crowded with large biomolecule models in a matter of seconds.
Virtual Boutique: a 3D modeling and content-based management approach to e-commerce
NASA Astrophysics Data System (ADS)
Paquet, Eric; El-Hakim, Sabry F.
2000-12-01
The Virtual Boutique is made up of three modules: the decor, the market, and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects, and scanned objects. The objects are scanned with a laser scanner developed at NRC, which allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is written entirely in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.
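The content-based search described above can be illustrated with a generic color-descriptor comparison; the actual NRC engine matches on shape as well as color, so the Python sketch below, which ranks items by color-histogram intersection over synthetic placeholder images, is only a conceptual stand-in.

    import numpy as np

    def color_histogram(image_rgb, bins=8):
        # Coarse RGB histogram descriptor, normalized to sum to one.
        hist, _ = np.histogramdd(image_rgb.reshape(-1, 3),
                                 bins=(bins, bins, bins), range=((0, 256),) * 3)
        hist = hist.ravel()
        return hist / hist.sum()

    def similarity(h1, h2):
        # Histogram intersection: 1.0 means identical color distributions.
        return float(np.minimum(h1, h2).sum())

    rng = np.random.default_rng(2)
    query = rng.integers(0, 256, size=(64, 64, 3))   # object shown by the customer (synthetic)
    item = rng.integers(0, 256, size=(64, 64, 3))    # catalogue item (synthetic)
    print(similarity(color_histogram(query), color_histogram(item)))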
Zhong, Chunyan; Guo, Yanli; Huang, Haiyun; Tan, Liwen; Wu, Yi; Wang, Wenting
2013-01-01
To establish 3D models of coronary arteries (CA) and study their application in localization of CA segments identified by Transthoracic Echocardiography (TTE). Sectional images of the heart collected from the first CVH dataset and contrast CT data were used to establish 3D models of the CA. Virtual dissection was performed on the 3D models to simulate the conventional sections of TTE. Then, we used 2D ultrasound, speckle tracking imaging (STI), and 2D ultrasound plus 3D CA models to diagnose 170 patients and compare the results to coronary angiography (CAG). 3D models of CA distinctly displayed both 3D structure and 2D sections of CA. This simulated TTE imaging in any plane and showed the CA segments that corresponded to 17 myocardial segments identified by TTE. The localization accuracy showed a significant difference between 2D ultrasound and 2D ultrasound plus 3D CA model in the severe stenosis group (P < 0.05) and in the mild-to-moderate stenosis group (P < 0.05). These innovative modeling techniques help clinicians identify the CA segments that correspond to myocardial segments typically shown in TTE sectional images, thereby increasing the accuracy of the TTE-based diagnosis of CHD.
Three-dimensional (3D) printed endovascular simulation models: a feasibility study.
Mafeld, Sebastian; Nesbitt, Craig; McCaslin, James; Bagnall, Alan; Davey, Philip; Bose, Pentop; Williams, Rob
2017-02-01
Three-dimensional (3D) printing is a manufacturing process in which an object is created by specialist printers designed to print in additive layers to create a 3D object. Whilst there are initial promising medical applications of 3D printing, a lack of evidence to support its use remains a barrier for larger scale adoption into clinical practice. Endovascular virtual reality (VR) simulation plays an important role in the safe training of future endovascular practitioners, but existing VR models have disadvantages including cost and accessibility which could be addressed with 3D printing. This study sought to evaluate the feasibility of 3D printing an anatomically accurate human aorta for the purposes of endovascular training. A 3D printed model was successfully designed and printed and used for endovascular simulation. The stages of development and practical applications are described. Feedback from 96 physicians who answered a series of questions using a 5 point Likert scale is presented. Initial data supports the value of 3D printed endovascular models although further educational validation is required.
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
This talk will overview the basic technologies related to the creation of virtual acoustic images and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head-Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it examines the notion that realistic simulation within a virtual acoustic display is optimized when head motion and reverberation cues are included within a perceptual model.
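The core rendering operation behind such displays is filtering a sound with the HRTF pair measured for the desired direction; a minimal Python sketch is given below, using random placeholder data for the signal and the head-related impulse responses. Real displays add head-tracked HRTF interpolation and reverberation, as noted above.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        # Convolve a mono signal with left/right head-related impulse responses
        # to place it at the direction for which the HRIR pair was measured.
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)

    fs = 44100
    mono = np.random.randn(fs // 2)                      # half a second of noise (placeholder)
    hrir_left = np.random.randn(128) * np.hanning(128)   # placeholder 128-tap HRIRs
    hrir_right = np.random.randn(128) * np.hanning(128)
    print(render_binaural(mono, hrir_left, hrir_right).shape)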
Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F
2018-05-08
Objective: To explore the clinical and teaching value of virtual reality technology in the preoperative planning and intraoperative guidance of gliomas located in the central sulcus region. Method: Ten patients with glioma in the central sulcus region were scheduled for surgical treatment. The neuro-imaging data, including CT, CTA, DSA, MRI, and fMRI, were input into the 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained from the virtual reality images. These images were applied to the design of the operative approach, simulation of the operative process, intraoperative auxiliary decision making, and the training of specialist physicians. Results: Intraoperative findings in the 10 patients were highly consistent with the preoperative virtual reality simulation. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operation planning and the accuracy of the operation. The technology not only showed advantages for neurological function protection and lesion resection during surgery, but also improved the efficiency and effectiveness of training dedicated physicians by turning abstract comprehension into virtual reality. Conclusion: Image fusion and 3D-reconstruction-based virtual reality technology in glioma resection is helpful for formulating the operation plan, improving operative safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.
Chairside multi-unit restoration of a quadrant using the new Cerec 3D software.
Ender, A; Wiedhahn, K; Mörmann, W H
2003-01-01
The new Cerec 3D design software for inlays and partial and full crowns simplifies work when producing several restorations in one session. Quite significant progress has been achieved, in that the entire row of teeth of a quadrant can be acquired completely and displayed by successively overlapping optical impressions. The digital working model of a quadrant in which all preparations are acquired is the result. The restorations can be designed individually and inserted virtually. Thanks to virtual insertion, the proximal contacts to neighboring restorations can be designed perfectly and all restorations finally designed, milled, and inserted in one sitting. This method provides a significant rationalization effect.
ERIC Educational Resources Information Center
Lu, Lilly
2013-01-01
3D virtual worlds (3D VWs) are considered one of the emerging learning spaces of the 21st century; however, few empirical studies have investigated educational applications and student learning aspects in art education. This study focused on students' responses to and challenges with 3D VWs in both aspects. The findings show that most participants…
Virtual Environment for Surgical Room of the Future.
1995-10-01
[Garbled report outline; recoverable topics include geometric modeling (wire frame, surface, solid), acoustic three-dimensional modeling, dynamic interaction, rendering and shadowing (ray tracing, radiosity), animation, fluid flow, infection control of people and equipment, object recognition, and communication.]
NASA Astrophysics Data System (ADS)
Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.
2016-02-01
Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective, and data storage is limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz, followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud, followed by a 3D mesh to approximate the bladder surface. The highest-quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy, and new opportunities for longitudinal studies of cancer recurrence.
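As a rough illustration of the structure-from-motion step, the Python sketch below recovers the relative pose between two calibrated frames with OpenCV and triangulates a sparse point cloud; the published pipeline is fully automated over many frames and adds meshing and texture projection, so this function is an assumption-laden sketch rather than the authors' implementation.

    import cv2
    import numpy as np

    def two_view_point_cloud(img1, img2, K):
        # Match ORB features between two grayscale frames.
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

        # Estimate the essential matrix and recover the relative camera pose.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

        # Triangulate matched points into a sparse 3D point cloud.
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        return (pts4d[:3] / pts4d[3]).T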
NASA Astrophysics Data System (ADS)
Gonizzi Barsanti, S.; Malatesta, S. G.; Lella, F.; Fanini, B.; Sala, F.; Dodero, E.; Petacco, L.
2018-05-01
The best way to disseminate culture nowadays is the creation of virtual and augmented reality scenarios that supply museum visitors with a powerful, interactive tool allowing them to learn sometimes difficult concepts in an easy, entertaining way. 3D models derived from reality-based techniques are nowadays used to preserve, document, and restore historical artefacts. These digital contents are also a powerful instrument for interactively communicating their significance to non-specialists, making it easier to understand concepts that are sometimes complicated or unclear. Virtual and Augmented Reality are surely valid tools for interacting with 3D models and a fundamental help in making culture more accessible to the wide public. These technologies can help museum curators adapt the cultural proposal and the information about the artefacts to different categories of visitors. They allow visitors to travel through space and time and have a great educative function, permitting information and concepts that could otherwise prove complicated to be explained in an easy and attractive way. The aim of this paper is to create a virtual scenario and an augmented reality app that recreate specific spaces in the Capitoline Museum in Rome as they were during Winckelmann's time, placing specific statues in their original 18th-century positions.
Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy
NASA Astrophysics Data System (ADS)
Higgins, William E.; Helferty, James P.; Padfield, Dirk R.
2003-05-01
Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. In order to assist physicians in performing biopsies of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. This procedure only performs the registration at manually selected sites; it does not draw upon the motion information inherent in the bronchoscopic video. Further, the registration procedure is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus, speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.
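Registration of a video frame to candidate virtual endoluminal renderings typically reduces to maximizing an image-similarity score over viewpoint; the Python sketch below shows one generic choice, normalized cross-correlation, and does not reproduce the specific similarity measure or optimization used by the authors.

    import numpy as np

    def normalized_cross_correlation(video_frame, rendered_view):
        # Similarity between a grayscale bronchoscopic frame and a virtual
        # endoluminal rendering of the same size; higher is a better match.
        a = video_frame.astype(float).ravel()
        b = rendered_view.astype(float).ravel()
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))

    # Hypothetical search: render candidate views around the tracked pose and keep
    # the viewpoint with the highest score (render() and candidate_poses are
    # placeholders, not part of the described system).
    # best = max(candidate_poses, key=lambda p: normalized_cross_correlation(frame, render(ct, p)))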
Shinbane, Jerold S; Saxon, Leslie A
Advances in imaging technology have led to a paradigm shift from planning cardiovascular procedures and surgeries with the actual patient in a "brick and mortar" hospital to utilizing the digitalized patient in the virtual hospital. The digitalized 3-D representation of individual patient anatomy and physiology provided by cardiovascular computed tomographic angiography (CCTA) and cardiovascular magnetic resonance (CMR) serves as an avatar allowing virtual delineation of the most optimal approaches to cardiovascular procedures and surgeries prior to actual hospitalization. Pre-hospitalization reconstruction and analysis of anatomy and pathophysiology previously accessible only during the actual procedure could potentially limit the intrinsic risks related to time in the operating room, the cardiac procedural laboratory, and the overall hospital environment. Although applications are specific to areas of cardiovascular specialty focus, there are unifying themes related to the utilization of these technologies. The virtual patient avatar can also be used for procedural planning, computational modeling of anatomy, simulation of the predicted therapeutic result, printing of 3-D models, and augmentation of real-time procedural performance. Examples of the above techniques are at various stages of development for application to the spectrum of cardiovascular disease processes, including percutaneous, surgical, and hybrid minimally invasive interventions. A multidisciplinary approach within medicine and engineering is necessary for the creation of robust algorithms for maximal utilization of the virtual patient avatar in the digital medical center. Utilization of the virtual advanced cardiac imaging patient avatar will play an important role in the virtual health care system. Although there has been a rapid proliferation of early data, advanced imaging applications require further assessment and validation of accuracy, reproducibility, standardization, safety, efficacy, quality, cost effectiveness, and overall value to medical care.
Yuan, Peng; Mai, Huaming; Li, Jianfu; Ho, Dennis Chun-Yu; Lai, Yingying; Liu, Siting; Kim, Daeseung; Xiong, Zixiang; Alfi, David M; Teichgraeber, John F; Gateno, Jaime; Xia, James J
2017-12-01
There are many proven problems associated with traditional surgical planning methods for orthognathic surgery. To address these problems, we developed a computer-aided surgical simulation (CASS) system, the AnatomicAligner, to plan orthognathic surgery following our streamlined clinical protocol. The system includes six modules: image segmentation and three-dimensional (3D) reconstruction, registration and reorientation of models to neutral head posture, 3D cephalometric analysis, virtual osteotomy, surgical simulation, and surgical splint generation. The accuracy of the system was validated in a stepwise fashion: first to evaluate the accuracy of AnatomicAligner using 30 sets of patient data, then to evaluate the fitting of splints generated by AnatomicAligner using 10 sets of patient data. The industrial gold standard system, Mimics, was used as the reference. When comparing the results of segmentation, virtual osteotomy, and transformation achieved with AnatomicAligner to those achieved with Mimics, the absolute deviation between the two systems was clinically insignificant. The average surface deviation between the two models after 3D model reconstruction in AnatomicAligner and Mimics was 0.3 mm with a standard deviation (SD) of 0.03 mm. All the average surface deviations between the two models after virtual osteotomy and transformation were smaller than 0.01 mm with an SD of 0.01 mm. In addition, the fitting of splints generated by AnatomicAligner was at least as good as that of splints generated by Mimics. We successfully developed a CASS system, the AnatomicAligner, for planning orthognathic surgery following the streamlined planning protocol. The system has been proven accurate, and AnatomicAligner will soon be made freely available to the broader clinical and research communities.
Kraeima, Joep; Schepers, Rutger H; van Ooijen, Peter M A; Steenbakkers, Roel J H M; Roodenburg, Jan L N; Witjes, Max J H
2015-10-01
Three-dimensional (3D) virtual planning of reconstructive surgery after resection is a frequently used method for improving accuracy and predictability. However, when applied to malignant cases, planning the oncologic resection margins is difficult due to the limited visualisation of tumours in current 3D planning. Embedding tumour delineation on a magnetic resonance image, similar to the routinely performed radiotherapeutic contouring of tumours, is expected to provide better margin planning. A new software pathway was developed for embedding tumour delineation on magnetic resonance imaging (MRI) within the 3D virtual surgical planning. The software pathway was validated using five bovine cadavers implanted with phantom tumour objects. MRI and computed tomography (CT) images were fused, and the tumour was delineated using radiation oncology software. These data were converted to the 3D virtual planning software by means of a conversion algorithm. Tumour volumes and localization were determined in both software stages for comparison analysis. The approach was applied to three clinical cases. A conversion algorithm was developed to translate the tumour delineation data to the 3D virtual plan environment. The average difference in volume of the tumours was 1.7%. This study reports a validated software pathway providing multi-modality image fusion for 3D virtual surgical planning.
3D Printing of Biomolecular Models for Research and Pedagogy
Da Veiga Beltrame, Eduardo; Tyrwhitt-Drake, James; Roy, Ian; Shalaby, Raed; Suckale, Jakob; Pomeranz Krummel, Daniel
2017-01-01
The construction of physical three-dimensional (3D) models of biomolecules can uniquely contribute to the study of the structure-function relationship. 3D structures are most often perceived using the two-dimensional and exclusively visual medium of the computer screen. Converting digital 3D molecular data into real objects enables information to be perceived through an expanded range of human senses, including direct stereoscopic vision, touch, and interaction. Such tangible models facilitate new insights, enable hypothesis testing, and serve as psychological or sensory anchors for conceptual information about the functions of biomolecules. Recent advances in consumer 3D printing technology enable, for the first time, the cost-effective fabrication of high-quality and scientifically accurate models of biomolecules in a variety of molecular representations. However, the optimization of the virtual model and its printing parameters is difficult and time consuming without detailed guidance. Here, we provide a guide on the digital design and physical fabrication of biomolecule models for research and pedagogy using open source or low-cost software and low-cost 3D printers that use fused filament fabrication technology. PMID:28362403
ERIC Educational Resources Information Center
Thomas, Wayne W.; Boechler, Patricia M.
2014-01-01
With teachers taking more interest in utilizing 3D virtual environments for educational purposes, research is needed to understand how learners perceive and process information within virtual environments (Eschenbrenner, Nah, & Siau, 2008). In this study, the authors sought to determine if learning style or digital literacy predict incidental…
Teaching Physics to Deaf College Students in a 3-D Virtual Lab
ERIC Educational Resources Information Center
Robinson, Vicki
2013-01-01
Virtual worlds are used in many educational and business applications. At the National Technical Institute for the Deaf at Rochester Institute of Technology (NTID/RIT), deaf college students are introduced to the virtual world of Second Life, which is a 3-D immersive, interactive environment, accessed through computer software. NTID students use…
ERIC Educational Resources Information Center
Barkand, Jonathan; Kush, Joseph
2009-01-01
Virtual Learning Environments (VLEs) are becoming increasingly popular in online education environments and have multiple pedagogical advantages over more traditional approaches to education. VLEs include 3D worlds where students can engage in simulated learning activities such as Second Life. According to Claudia L'Amoreaux at Linden Lab, "at…
Dixon, Benjamin J; Chan, Harley; Daly, Michael J; Qiu, Jimmy; Vescan, Allan; Witterick, Ian J; Irish, Jonathan C
2016-07-01
Providing image guidance in a 3-dimensional (3D) format, visually more in keeping with the operative field, could potentially reduce workload and lead to faster and more accurate navigation. We wished to assess a 3D virtual-view surgical navigation prototype in comparison to a traditional 2D system. Thirty-seven otolaryngology surgeons and trainees completed a randomized crossover navigation exercise on a cadaver model. Each subject identified three sinonasal landmarks with 3D virtual (3DV) image guidance and three landmarks with conventional cross-sectional computed tomography (CT) image guidance. Subjects were randomized with regard to which side and display type was tested initially. Accuracy, task completion time, and task workload were recorded. Display type did not influence accuracy (P > 0.2) or efficiency (P > 0.3) for any of the six landmarks investigated. Pooled landmark data revealed a trend of improved accuracy in the 3DV group by 0.44 millimeters (95% confidence interval [0.00-0.88]). High-volume surgeons were significantly faster (P < 0.01) and had reduced workload scores in all domains (P < 0.01), but they were no more accurate (P > 0.28). Real-time 3D image guidance did not influence accuracy, efficiency, or task workload when compared to conventional triplanar image guidance. The subtle pooled accuracy advantage for the 3DV view is unlikely to be of clinical significance. Experience level was strongly correlated to task completion time and workload but did not influence accuracy.
Ruiz, Jorge G; Andrade, Allen D; Anam, Ramankumar; Aguiar, Rudxandra; Sun, Huaping; Roos, Bernard A
2012-01-01
The prevalence of obesity and associated health complications are currently at unprecedented levels. Physical activity in this population can improve patient outcomes. Virtual reality (VR) self-modeling may improve self-efficacy and adherence to physical activity. We conducted a comparative study of 30 participants randomized to three versions of a 3D avatar-based VR intervention about exercise: a virtual representation of the self exercising, a virtual representation of another person exercising, and a control condition. Participants in the virtual representation of the self group significantly increased their levels of physical activity. The improvement in physical activity for participants in the virtual representation of another person exercising condition was marginal, and the improvement for the control group was not significant. However, the effect sizes for comparing the pre- and post-intervention physical activity levels were quite large for all three groups. We did not find any group differences in the improvements of physical activity levels and self-efficacy among sedentary, overweight, or obese individuals.
From tissue to silicon to plastic: three-dimensional printing in comparative anatomy and physiology
Lauridsen, Henrik; Hansen, Kasper; Nørgård, Mathias Ørum; Wang, Tobias; Pedersen, Michael
2016-01-01
Comparative anatomy and physiology are disciplines concerned with structures and mechanisms in three-dimensional (3D) space. For the past centuries, scientific reports in these fields have relied on written descriptions and two-dimensional (2D) illustrations, but in recent years 3D virtual modelling has entered the scene. However, comprehending complex anatomical structures is hampered by reproduction on flat, inherently 2D screens. One way to circumvent this problem is the production of 3D-printed scale models. We have applied computed tomography and magnetic resonance imaging to produce digital models of animal anatomy well suited to be printed on low-cost 3D printers. In this communication, we report how to apply such technology in comparative anatomy and physiology to aid discovery, description, comprehension, and communication, and we seek to inspire fellow researchers in these fields to embrace this emerging technology. PMID:27069653
FaceTOON: a unified platform for feature-based cartoon expression generation
NASA Astrophysics Data System (ADS)
Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine
2008-02-01
This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software, which usually requires advanced 3D graphics skills and competences from users, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent to the user. The system takes as input a neutral 3D face model, free of any facial features, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed to generate expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation, and MPEG-4 compliant animation and rendering. The FaceTOON system is currently being considered for industrial evaluation and commercialization by the Quadraxis company.
Samal, Himanshu Bhusan; Das, Jugal Kishore; Mahapatra, Rajani Kanta; Suar, Mrutyunjay
2015-01-01
The Mur enzymes of the peptidoglycan biosynthesis pathway constitute ideal targets for the design of new classes of antimicrobial inhibitors in Gram-negative bacteria. We built a homology model of MurD of Salmonella typhimurium LT2 using MODELLER (9v12) software. The homology model was subjected to energy minimization through a molecular dynamics (MD) simulation study with GROMACS software for a simulation time of 20 ns in a water environment. The model was then subjected to a virtual screening study against the ZINC database using DOCK Blaster software. An inhibition assay of the best inhibitor, 3-(amino methyl)-n-(4-methoxyphenyl) aniline, by flow cytometric analysis revealed effective inhibition of peptidoglycan biosynthesis. Results from this study provide new insights for the molecular understanding and development of new antibacterial drugs against the pathogen.
Design Virtual Reality Scene Roam for Tour Animations Base on VRML and Java
NASA Astrophysics Data System (ADS)
Cao, Zaihui; hu, Zhongyan
Virtual reality has been involved in a wide range of academic and commercial applications. It can give users a natural feeling of the environment by creating realistic virtual worlds. Implementing a virtual tour through a model of a tourist area on the web has become fashionable. In this paper, we present a web-based application that allows a user to walk through, see, and interact with a fully three-dimensional model of the tourist area. Issues regarding navigation and disorientation are addressed, and we suggest a combination of the metro map and an intuitive navigation system. Finally, we present a prototype which implements our ideas. The application of VR techniques integrates the visualization and animation of three-dimensional modelling into landscape analysis. The use of the VRML format makes it possible to obtain views of the 3D model and to explore it in real time. This is an important goal for the spatial information sciences.
Illustrative visualization of 3D city models
NASA Astrophysics Data System (ADS)
Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian
2005-03-01
This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information, complementing visual interfaces based on the Virtual Reality paradigm and offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
Procedural 3d Modelling for Traditional Settlements. The Case Study of Central Zagori
NASA Astrophysics Data System (ADS)
Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.
2017-02-01
Over the last decades, 3D modelling has been a fast-growing field in Geographic Information Science, extensively applied in various domains including the reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for the modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method based on the algorithmic modelling concept. It is utilized for the generation of accurate 3D models and composite facade textures from sets of rules, called Computer Generated Architecture (CGA) grammars, that define the objects' detailed geometry, rather than altering or editing the model manually. In this paper, procedural modelling tools have been exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models were derived from the application of shape grammars to selected footprints, and the process resulted in a final 3D model that optimally describes the built environment of Central Zagori in three levels of detail (LoDs). The final 3D scene was exported and published as a 3D web scene which can be viewed with the CityEngine 3D viewer, giving a walkthrough of the whole model, as in virtual reality or game environments. This research work addresses issues regarding texture precision, LoDs for 3D objects, and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks of procedural modelling techniques in the field of cultural heritage, and more specifically in the 3D modelling of traditional settlements.
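To convey the idea of rule-based generation without reproducing CityEngine's CGA syntax, the toy Python sketch below expresses two typical rules, extruding a lot footprint into a building mass and splitting the mass into floors; the rule names, data layout, and dimensions are illustrative assumptions, not part of the paper's rule set.

    def extrude(footprint, height):
        # Lot -> Building: turn a 2D footprint polygon into a simple mass model.
        return {"kind": "mass", "footprint": footprint, "height": height}

    def split_floors(mass, floor_height):
        # Building -> Floor*: split the mass vertically into floor shapes.
        n = int(mass["height"] // floor_height)
        return [{"kind": "floor", "footprint": mass["footprint"],
                 "level": i, "height": floor_height} for i in range(n)]

    footprint = [(0, 0), (8, 0), (8, 6), (0, 6)]   # hypothetical lot polygon (metres)
    building = extrude(footprint, 9.0)
    floors = split_floors(building, 3.0)
    print(len(floors), "floors generated from one small rule set")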
ERIC Educational Resources Information Center
Pares-Toral, Maria T.
2013-01-01
The ever-increasing popularity of virtual worlds, also known as 3-D multi-user virtual environments (MUVEs), provides language instructors with a new tool they can exploit in their courses. For now, "Second Life" is one of the most popular MUVEs used for teaching and learning, and although "Second Life"…
Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System
NASA Astrophysics Data System (ADS)
Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.
2017-08-01
In recent years, technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have matured thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support to conservation, virtual restoration, reconstruction and colouring, art cataloguing, and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed within a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform giving users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a great deal of information and can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies in order to obtain complete, accurate, and continuous monitoring of the statue.
Virtual reality in the operating room of the future.
Müller, W; Grosskopf, S; Hildebrand, A; Malkewitz, R; Ziegler, R
1997-01-01
In cooperation with the Max-Delbrück-Centrum/Robert-Rössle-Klinik (MDC/RRK) in Berlin, the Fraunhofer Institute for Computer Graphics is currently designing and developing a scenario for the operating room of the future. The goal of this project is to integrate new analysis, visualization, and interaction tools in order to optimize and refine tumor diagnostics and therapy in combination with laser technology and remote stereoscopic video transfer. Hence, a human 3-D reference model is reconstructed using CT, MR, and anatomical cryosection images from the National Library of Medicine's Visible Human Project. Applying segmentation algorithms and surface-polygonization methods, a 3-D representation is obtained. In addition, a "fly-through" of the virtual patient is realized using 3-D input devices (data glove, tracking system, 6-DOF mouse). In this way, the surgeon can experience entirely new perspectives of the human anatomy. Moreover, using a virtual cutting plane, any cut of the CT volume can be interactively placed and visualized in real time. In conclusion, this project delivers visions for the application of effective visualization and VR systems. It shows that VR techniques, commonly known as virtual prototyping and long applied by the automotive industry, can also be used to prototype an operating room. After evaluating the design and functionality of the virtual operating room, MDC plans to build real ORs in the near future. The use of VR techniques provides a more natural interface for the surgeon in the OR (e.g., controlling interactions by voice input). Besides preoperative planning, future work will focus on supporting the surgeon in performing surgical interventions. An optimal synthesis of real and synthetic data, and the inclusion of visual, aural, and tactile senses in virtual environments, can meet these requirements. This augmented reality could represent the environment for the surgeons of tomorrow.
EXPLORING ENVIRONMENTAL DATA IN A HIGHLY IMMERSIVE VIRTUAL REALITY ENVIRONMENT
Geography inherently fills a 3D space, and yet we struggle to display geography using primarily 2D display devices. Virtual environments offer a more realistically-dimensioned display space, and this is being realized in the expanding area of research on 3D Geographic Infor...
Virtual Jupiter - Real Learning
NASA Astrophysics Data System (ADS)
Ruzhitskaya, Lanika; Speck, A.; Laffey, J.
2010-01-01
How many earthlings have gone to visit Jupiter? None. How many students have visited virtual Jupiter to fulfill their introductory astronomy courses' requirements? Within the next six months, over 100 students from the University of Missouri will get a chance to explore the planet and its Galilean moons using a 3D virtual environment created especially for them to learn Kepler's and Newton's laws, eclipses, parallax, and other concepts in astronomy. The virtual world of the Jupiter system is a unique 3D environment that allows students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system and encouraging their imagination, curiosity, and motivation. The virtual learning environment lets students work individually or collaborate with their teammates. The 3D world is also a great opportunity for research in astronomy education, allowing investigation of the impact of social interaction, gaming features, and the use of manipulatives offered by a learning tool on students' motivation and learning outcomes. The 3D environment is also a valuable resource for exploring how learners' spatial awareness can be enhanced by working in a 3-dimensional environment.
Haptic simulation framework for determining virtual dental occlusion.
Wu, Wen; Chen, Hui; Cen, Yuhai; Hong, Yang; Khambay, Balvinder; Heng, Pheng Ann
2017-04-01
The surgical treatment of many dentofacial deformities is often complex due to its three-dimensional nature. Determining the dental occlusion in its most stable position is essential for the success of the treatment. Computer-aided virtual planning on an individualized, patient-specific 3D model can help formulate the surgical plan and predict the surgical change. However, in current computer-aided planning systems, it is not possible to determine the dental occlusion of the digital models in an intuitive way during virtual surgical planning because of the absence of haptic feedback. In this paper, a physically based haptic simulation framework is proposed which can provide surgeons with intuitive haptic feedback to determine the dental occlusion of the digital models in their most stable position. To provide physically realistic force feedback when the dental models contact each other during the searching process, a contact model is proposed to describe the dynamic and collision properties of the dental models during the alignment. The simulated impulse/contact-based forces are integrated into the unified simulation framework. A validation study was conducted on fifteen sets of virtual dental models chosen at random and covering a wide range of the dental relationships found clinically. The dental occlusions obtained by an expert were employed as a benchmark against which to compare the virtual occlusion results. The mean translational and angular deviations of the virtual occlusion results from the benchmark were small. The experimental results show the validity of our method, and the simulated forces can provide valuable insights for determining the virtual dental occlusion. The findings of this work and the validation of the proposed concept lead the way toward full virtual surgical planning on patient-specific virtual models, allowing fully customized treatment plans for the surgical correction of dentofacial deformities.
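A common way to render such contact forces is a penalty model, a spring term along the contact normal plus damping against the relative velocity; the Python sketch below illustrates that generic idea with made-up gains and is not the impulse/contact model proposed in the paper.

    import numpy as np

    def penalty_contact_force(penetration_depth, normal, relative_velocity,
                              stiffness=800.0, damping=2.0):
        # Spring-damper contact force for haptic rendering; gains are illustrative.
        if penetration_depth <= 0.0:
            return np.zeros(3)                     # the dental models are not in contact
        normal = normal / np.linalg.norm(normal)
        spring = stiffness * penetration_depth * normal
        damper = -damping * relative_velocity
        return spring + damper

    # Hypothetical single contact: 0.4 mm penetration along +z, sliding along +y.
    print(penalty_contact_force(0.4, np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])))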
Lift-Off: Using Reference Imagery and Freehand Sketching to Create 3D Models in VR.
Jackson, Bret; Keefe, Daniel F
2016-04-01
Three-dimensional modeling has long been regarded as an ideal application for virtual reality (VR), but current VR-based 3D modeling tools suffer from two problems that limit creativity and applicability: (1) the lack of control for freehand modeling, and (2) the difficulty of starting from scratch. To address these challenges, we present Lift-Off, an immersive 3D interface for creating complex models with a controlled, handcrafted style. Artists start outside of VR with 2D sketches, which are then imported and positioned in VR. Then, using a VR interface built on top of image processing algorithms, 2D curves within the sketches are selected interactively and "lifted" into space to create a 3D scaffolding for the model. Finally, artists sweep surfaces along these curves to create 3D models. Evaluations are presented for both long-term users and for novices who each created a 3D sailboat model from the same starting sketch. Qualitative results are positive, with the visual style of the resulting models of animals and other organic subjects as well as architectural models matching what is possible with traditional fine art media. In addition, quantitative data from logging features built into the software are used to characterize typical tool use and suggest areas for further refinement of the interface.
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm.
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A; Przekwas, Andrzej; Francis, Joseph T; Lytton, William W
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics.
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display is proposed, based on a stereo camera that captures parallax content from different angles and a lenticular lens array. Compared with previous AR implementations based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method realizes glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Viewers can therefore obtain rich 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that the improved stereo-camera method achieves AR glass-free 3D display, with both the virtual objects and the real scene exhibiting realistic and clearly perceptible stereo depth.
3D Modelling of Kizildag Monument
NASA Astrophysics Data System (ADS)
Karauguz, Güngör; Kalayci, İbrahim; Öğütcü, Sermet
2016-10-01
The most important cultural property that nations possess is their historical heritage, and bringing it to light, preserving it, and ensuring it is passed on to the next generations by means of current techniques and technology ought to be the responsibility of the present generation. Although intensive documentation and archiving studies are nowadays carried out with classical techniques, alongside efforts to preserve historical objects, one-to-one or scaled modelling was not possible until recently. Computing devices and their ongoing development, commonly referred to as digital technology, are widely used in many areas and make it possible to document and archive historical works; even virtual forms in digital environments can be transferred to the next generations as scaled or one-to-one models. Within this scope, every category of artefact belonging to any era or civilization present in our country can be treated as a separate study area, and any individual work can likewise be evaluated in its own category. It is also possible to construct navigable virtual 3D museums that allow these artefacts to be visited. With these technologies, it is quite feasible to build single virtual indoor museums and, at the final stage, a navigable 3D open-air museum, or more precisely a data system spanning the whole country on a broad spectrum. With a long-term, substantial and extensive study and a serious organization, such a data system can be established, which would also serve as infrastructure for alternative tourism. Located beside a stepped altar and right above the Kizildag IV inscription, the offering pot has been destroyed and has rolled a few metres down the south slope of the mound, and every time we visit these artefacts with our undergraduate students we unfortunately observe further deterioration. This case study addresses the lowest stage of the extensive data system mentioned above: gathering information about the Kizildag findings using the technologies described. The paper explains how the geometry and texture of historical objects can be automatically reconstructed, modelled and visualized with digital image processing software. In this context, a second survey was conducted to obtain images of the Hittite hieroglyphic inscriptions at Kizildag using digital photogrammetry. The images are then processed in photogrammetric software, which endows the final 3D virtual product with its original texture. In this way, the currently damaged artefacts mentioned above can be handed down to the next generations in the form of scaled virtual models, which we consider to be of particular importance.
Game-Like Language Learning in 3-D Virtual Environments
ERIC Educational Resources Information Center
Berns, Anke; Gonzalez-Pardo, Antonio; Camacho, David
2013-01-01
This paper presents our recent experiences with the design of game-like applications in 3-D virtual environments as well as its impact on student motivation and learning. Therefore our paper starts with a brief analysis of the motivational aspects of videogames and virtual worlds (VWs). We then go on to explore the possible benefits of both in the…
Intelligent web agents for a 3D virtual community
NASA Astrophysics Data System (ADS)
Dave, T. M.; Zhang, Yanqing; Owen, G. S. S.; Sunderraman, Rajshekhar
2003-08-01
In this paper, we propose an avatar-based intelligent agent technique for 3D Web-based virtual communities, based on distributed artificial intelligence, intelligent agent techniques, and databases and knowledge bases in a digital library. One of the goals of this joint NSF (IIS-9980130) and ACM SIGGRAPH Education Committee (ASEC) project is to create a virtual community of educators and students who have a common interest in computer graphics, visualization, and interactive techniques. In this virtual community (ASEC World), avatars will represent the educators, students, and other visitors to the world. Intelligent agents represented as specially dressed avatars will be available to assist the visitors to ASEC World. The basic Web client-server architecture of the intelligent knowledge-based avatars is given. Importantly, the intelligent Web agent software system for the 3D virtual community has been implemented successfully.
ESL Teacher Training in 3D Virtual Worlds
ERIC Educational Resources Information Center
Kozlova, Iryna; Priven, Dmitri
2015-01-01
Although language learning in 3D Virtual Worlds (VWs) has become a focus of recent research, little is known about the knowledge and skills teachers need to acquire to provide effective task-based instruction in 3D VWs and the type of teacher training that best prepares instructors for such an endeavor. This study employs a situated learning…
Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology
ERIC Educational Resources Information Center
Fominykh, Mikhail; Prasolova-Forland, Ekaterina
2012-01-01
Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…
Authoring Adaptive 3D Virtual Learning Environments
ERIC Educational Resources Information Center
Ewais, Ahmed; De Troyer, Olga
2014-01-01
The use of 3D and Virtual Reality is gaining interest in the context of academic discussions on E-learning technologies. However, the use of 3D for learning environments also has drawbacks. One way to overcome these drawbacks is by having an adaptive learning environment, i.e., an environment that dynamically adapts to the learner and the…
Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts
NASA Astrophysics Data System (ADS)
hong, Zhou; Wenhua, Lu
2017-01-01
Augmented reality technology is introduced into the maintenance field to enrich real-world scenarios with virtual maintenance-assistance information. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. After introducing the definition of augmented reality and analyzing the characteristics of augmented reality virtual maintenance, an architecture for an augmented reality maintenance guiding system is proposed. The key techniques involved, such as standardization and organization of maintenance data, 3D registration, modeling of maintenance guidance information, and man-machine interaction for virtual maintenance, are discussed in detail and solutions are given.
Kirchmair, Johannes; Markt, Patrick; Distinto, Simona; Wolber, Gerhard; Langer, Thierry
2008-01-01
Within the last few years a considerable number of evaluative studies have been published that investigate the performance of 3D virtual screening approaches. Assessments of protein-ligand docking in particular attract remarkable interest in the scientific community. However, comparing virtual screening approaches is a non-trivial task. Several publications, especially in the field of molecular docking, suffer from shortcomings that are likely to affect the significance of the results considerably. These quality issues often arise from poor study design, from bias, from the use of improper or inexpressive enrichment descriptors, and from errors in the interpretation of the data output. In this review we analyze recent literature evaluating 3D virtual screening methods, with a focus on molecular docking. We highlight problematic issues and provide guidelines on how to improve the quality of computational studies. Since 3D virtual screening protocols are in general assessed by their ability to discriminate between active and inactive compounds, we summarize the impact of the composition and preparation of test sets on the outcome of evaluations. Moreover, we investigate the significance of both classic enrichment parameters and advanced descriptors for the performance of 3D virtual screening methods. Furthermore, we review the significance and suitability of RMSD as a measure of the accuracy of protein-ligand docking algorithms and of conformational space sub-sampling algorithms.
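Since the review centres on enrichment descriptors, a minimal sketch of the classic enrichment factor (EF) may help make the metric concrete; the function below assumes a list of ranking scores and binary activity labels and is not taken from the review itself.

```python
def enrichment_factor(scores, labels, fraction=0.01):
    """Classic early-enrichment descriptor.

    scores: higher = predicted more active; labels: 1 for active, 0 for decoy.
    EF = (actives in the top fraction / size of that subset)
         / (total actives / total compounds)
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n_top = max(1, int(round(fraction * len(scores))))
    hits_top = sum(labels[i] for i in order[:n_top])
    return (hits_top / n_top) / (sum(labels) / len(labels))

# e.g. enrichment_factor(docking_scores, activity_flags, fraction=0.01) gives EF at 1%
```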
2008-01-01
Distributed Drug Discovery (D3) proposes solving large drug discovery problems by breaking them into smaller units for processing at multiple sites. A key component of the synthetic and computational stages of D3 is the global rehearsal of prospective reagents and their subsequent use in the creation of virtual catalogs of molecules accessible by simple, inexpensive combinatorial chemistry. The first section of this article documents the feasibility of the synthetic component of Distributed Drug Discovery. Twenty-four alkylating agents were rehearsed in the United States, Poland, Russia, and Spain, for their utility in the synthesis of resin-bound unnatural amino acids 1, key intermediates in many combinatorial chemistry procedures. This global reagent rehearsal, coupled to virtual library generation, increases the likelihood that any member of that virtual library can be made. It facilitates the realistic integration of worldwide virtual D3 catalog computational analysis with synthesis. The second part of this article describes the creation of the first virtual D3 catalog. It reports the enumeration of 24 416 acylated unnatural amino acids 5, assembled from lists of either rehearsed or well-precedented alkylating and acylating reagents, and describes how the resulting catalog can be freely accessed, searched, and downloaded by the scientific community. PMID:19105725
Visualized modeling platform for virtual plant growth and monitoring on the internet
NASA Astrophysics Data System (ADS)
Zhou, De-fu; Tian, Feng-qui; Ren, Ping
2009-07-01
Virtual plant growth is a key research topic in agricultural information technology and computer graphics. It has been applied in botany, agronomy, environmental sciences, computer science and applied mathematics. Modeling leaf color dynamics is of significant importance for realizing virtual plant growth. Using systematic analysis and dynamic modeling, a SPAD-based leaf color dynamic model was developed to simulate the time-course changes of leaf SPAD on the plant. In addition, the process of plant growth can be computer-simulated using the Virtual Reality Modeling Language (VRML) to establish a vivid, visible model that includes shooting, rooting and blooming as well as the growth of stems and leaves. Under stress conditions, e.g., lack of water, air or nutrients, high salinity or alkalinity, freezing injury, high temperature, or damage from diseases and insect pests, changes from the level of the whole plant down to organs, tissues and cells can be computer-simulated, and physiological and biochemical changes can also be described. When a series of indices is input by the user, both overall views and microscopic changes can be shown. Thus, the model performs well in predicting the growth condition of the plant, laying a foundation for further construction of a virtual plant growth system. The results revealed that realistic physiological and pathological processes of 3D virtual plants can be demonstrated by proper design and effectively realized on the internet.
Numerical Validation of a Near-Field Fugitive Dust Model for Vehicles Moving on Unpaved Surfaces
2013-02-05
3D for Geosciences: Interactive Tangibles and Virtual Models
NASA Astrophysics Data System (ADS)
Pippin, J. E.; Matheney, M.; Kitsch, N.; Rosado, G.; Thompson, Z.; Pierce, S. A.
2016-12-01
Point cloud processing provides a method for studying and modelling geologic features relevant to geoscience systems and processes. Here, software including Skanect, MeshLab, Blender, PDAL, and PCL is used in conjunction with 3D scanning hardware, including a Structure scanner and a Kinect camera, to create and analyze point cloud images of small-scale topography, karst features, tunnels, and structures at high resolution. The project successfully scanned internal karst features ranging from small stalactites to large rooms, as well as an external waterfall feature. For comparison, multiple scans of the same object were merged into single object files both automatically, using commercial software, and manually, using open-source libraries and code. Files in .ply format were manually converted into numeric data sets and analyzed for similar regions between files in order to match them together; an automated numeric process can be assumed to be more powerful and efficient than the manual method, though it may lack useful features that GUIs provide. The digital models have applications in mining as an efficient replacement for topographic tasks such as measuring distances and areas, and they make it possible to build simulation models such as drilling templates and to perform calculations in 3D space. Advantages of the methods described here include the relatively short time needed to obtain data and the easy transport of the equipment. For open-pit mining, precise 3D images of large surfaces georeferenced to interactive maps would be a high-value tool. The digital 3D images obtained from scans may be saved as printable files to produce tangible physical models based on scientific information, as well as digital "worlds" that can be navigated virtually. The data, models, and algorithms explored here can be used to convey complex scientific ideas to a range of professionals and audiences.
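As a hedged sketch of the manual .ply-to-numeric workflow mentioned above (the study's own scripts are not reproduced here), the following Python reads an ASCII PLY vertex list into a NumPy array and estimates how much two clouds of the same feature overlap; the assumed file layout (x, y, z as the first vertex properties) and the tolerance value are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def load_ascii_ply_xyz(path):
    """Minimal reader for ASCII .ply files: returns an (N, 3) array of x, y, z.
    Assumes x, y, z are the first three vertex properties (common for scan exports)."""
    with open(path) as f:
        n_vertices = 0
        for line in f:
            if line.startswith("element vertex"):
                n_vertices = int(line.split()[-1])
            if line.strip() == "end_header":
                break
        pts = [list(map(float, next(f).split()[:3])) for _ in range(n_vertices)]
    return np.asarray(pts)

def overlap_fraction(a, b, tol=0.01):
    """Fraction of points in cloud a with a neighbour in cloud b closer than tol (metres)."""
    distances, _ = cKDTree(b).query(a)
    return float(np.mean(distances < tol))
```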
Real-time 3D image reconstruction guidance in liver resection surgery
Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-01-01
Background: Minimally invasive surgery represents one of the main evolutions of surgical techniques; however, it adds difficulty that can be reduced through computer technology. Methods: From a patient's medical images [ultrasound, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We then developed several 3D visualization and surgical planning software tools that combine direct volume rendering and surface rendering. Finally, we developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results: From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were carried out, demonstrating the accuracy of the 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures have been performed, illustrating the potential clinical benefit of such assistance in terms of safety, as well as the current limits that automatic augmented reality will have to overcome. Conclusions: Virtual patient modeling should be mandatory for certain interventions that now have to be defined, such as liver surgery. Augmented reality is clearly the next step in the new surgical instrumentation but currently remains limited by the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the hybrid OR. PMID:24812598
Geovisualisation of relief in a virtual reality system on the basis of low-level aerial imagery
NASA Astrophysics Data System (ADS)
Halik, Łukasz; Smaczyński, Maciej
2017-12-01
The aim of this paper is to present the geomatic process of transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAVs) into a digital terrain model (DTM) and implementing the model in a virtual reality (VR) system. The object of the study was a natural aggregate heap of irregular shape with height differences of up to 11 m. Based on the acquired photos, three point clouds (varying in level of detail) were generated for the 20,000 m² area. For further analyses, the researchers selected the point cloud with the best ratio of accuracy to output file size. This choice was based on seven control points of the heap surveyed in the field and the corresponding points in the generated 3D model. The differences of a few centimetres between the control points measured in the field and those taken from the model may attest to the usefulness of the described workflow for creating large-scale DTMs for engineering purposes. Finally, the chosen model was implemented in the VR system, which enables lifelike exploration of the 3D terrain in real time thanks to the first-person view (FPV) mode. In this mode, the user observes the object through a head-mounted display (HMD), experiencing the geovisualisation from the inside and virtually analysing the terrain as a direct animator of the observations.
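A minimal sketch of the accuracy check described above, comparing surveyed control points with their counterparts picked from the reconstructed model; the coordinates below are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Per-point 3D differences and RMSE between field-surveyed control points and
# the corresponding points identified in the photogrammetric model.
surveyed = np.array([[0.00, 0.00, 0.00],
                     [5.10, 2.00, 1.10],
                     [9.95, 4.05, 3.02]])   # placeholder coordinates (m)
model    = np.array([[0.03, -0.02, 0.01],
                     [5.07,  2.04, 1.14],
                     [9.90,  4.01, 2.98]])  # placeholder coordinates (m)

diffs = np.linalg.norm(surveyed - model, axis=1)   # per-point 3D error (m)
rmse = np.sqrt(np.mean(diffs ** 2))
print(f"per-point errors [m]: {diffs}, RMSE = {rmse:.3f} m")
```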
NASA Astrophysics Data System (ADS)
Chen, Xing-Ru; Wang, Xiao-Ting; Hao, Mei-Qi; Zhou, Yong-Hui; Cui, Wen-Qiang; Xing, Xiao-Xu; Xu, Chang-Geng; Bai, Jing-Wen; Li, Yan-Hua
2017-11-01
The imidazole glycerophosphate dehydratase (IGPD) protein is a therapeutic target for herbicide discovery. It is also regarded as a possible target in Staphylococcus xylosus (S. xylosus) for treating mastitis in the dairy cow. The 3D structure of the IGPD protein is essential for discovering novel inhibitors during high-throughput virtual screening. However, to date, the 3D structure of the IGPD protein of S. xylosus has not been solved. In this study, a series of computational techniques including homology modeling, Ramachandran plots, and Verify 3D were applied to construct an appropriate 3D model of the IGPD protein of S. xylosus. Nine hits were identified from 2500 compounds by docking studies. These nine compounds were then tested in vitro for their effect on S. xylosus biofilm formation using crystal violet staining. One of the potential compounds, baicalin, was shown to significantly inhibit S. xylosus biofilm formation. Baicalin was then further evaluated by scanning electron microscopy, which confirmed its ability to inhibit biofilm formation in S. xylosus. Hence, we have predicted the structure of the IGPD protein of S. xylosus using computational techniques and further shown that the IGPD protein is targeted by the compound baicalin, which inhibits biofilm formation in S. xylosus. Our findings provide implications for the further development of novel IGPD inhibitors for the treatment of dairy mastitis.
NASA Astrophysics Data System (ADS)
Koehl, M.; Brigand, N.
2012-08-01
The site of the ruined Engelbourg castle in Thann, Alsace, France, has for some years been the object of close attention by the city, which owns it, and by partners such as the historians and archaeologists in charge of its study. The valorisation of the site is one of the main objectives, along with its conservation and its documentation. The aim of this project is to use the environment of a virtual-tour viewer as a new basis for an Archaeological Knowledge and Information System (AKIS). With available development tools we add functionalities, in particular through various scripts that turn the viewer into a real 3D interface. Starting from a first virtual tour containing about fifteen panoramic images, the site of about 150 by 150 metres can be completely documented, offering the user real interactivity and making the visualization very concrete, almost lively. After pertinent points of view were chosen, panoramic images were produced. For documentation purposes, further sets of images were acquired in various seasons and weather conditions, which allows the site to be documented in different environments and states of vegetation; the final virtual tour was derived from them. The initial 3D model of the castle, itself virtual, was also incorporated in the form of panoramic images to complete the understanding of the site. A variety of hotspot types was used to connect the whole digital documentation to the site, including videos (reports made during the acquisition phases, the restoration works, the excavations, etc.) and georeferenced digital documents (archaeological reports on the various constituent elements of the castle, interpretations of the excavations and surveys, descriptions of the sets of collected objects, etc.). The fully customized interface of the system allows the user either to switch from one panoramic image to another, as in classic virtual tours, or to move from a panoramic photographic image to a panoramic virtual image. It also allows digital data such as old or recent plans, cross sections, descriptions, explanatory videos and audio commentary to be displayed as overlays. The project has led to very convincing results, validated by the historians and archaeologists, who now have an interactive tool, disseminated through the internet, that allows the castle to be visited virtually while also answering queries with localized information. The various levels of understanding and detail allow a first-level approach for the broad internet audience as well as a deeper approach for the group of scientists involved in the development of the castle ruins and their environment.
Yu, Zhengyang; Zheng, Shusen; Chen, Huaiqing; Wang, Jianjun; Xiong, Qingwen; Jing, Wanjun; Zeng, Yu
2006-10-01
This research studies the process of dynamic concision and 3D reconstruction of medical body data using VRML and the JavaScript language, focusing on how to realize dynamic concision of a 3D medical model built with VRML. The 2D medical digital images are first modified and manipulated with 2D image software. Then, based on these images, the 3D model is built with VRML and JavaScript. After programming in JavaScript to control the 3D model, the dynamic concision function is realized by Script nodes and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be achieved at a quality close to that obtained with traditional methods. In this way, with the dynamic concision function, a VRML browser can offer better windows for human-computer interaction in a real-time environment than before. 3D reconstruction and dynamic concision with VRML can meet the requirements of medical observation of 3D reconstructions and has a promising prospect in the field of medical imaging.
Evaluation of the cognitive effects of travel technique in complex real and virtual environments.
Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F
2010-01-01
We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.
Yang, Ling-Ling; Li, Guo-Bo; Yan, Heng-Xiu; Sun, Qi-Zheng; Ma, Shuang; Ji, Pan; Wang, Ze-Rong; Feng, Shan; Zou, Jun; Yang, Sheng-Yong
2012-10-01
Aberrant activation of casein kinase 1 (CK1) has been demonstrated to be implicated in the pathogenesis of cancer and various central nervous system disorders. Discovery of CK1 inhibitors has thus attracted much attention in recent years. In this account, we describe the discovery of N6-phenyl-1H-pyrazolo[3,4-d]pyrimidine-3,6-diamine derivatives as novel CK1 inhibitors. An optimal common-feature pharmacophore hypothesis, termed Hypo2, was first generated, followed by virtual screening with Hypo2 against several chemical databases. One of the best hit compounds, N6-(4-chlorophenyl)-1H-pyrazolo[3,4-d]pyrimidine-3,6-diamine, was chosen for subsequent hit-to-lead optimization guided by Hypo2, which led to the discovery of a new lead compound (1-(3-(3-amino-1H-pyrazolo[3,4-d]pyrimidin-6-ylamino)phenyl)-3-(3-chloro-4-fluorophenyl)urea) that potently inhibits CK1 with an IC50 value of 78 nM.
NASA Technical Reports Server (NTRS)
Blackmon, Theodore
1998-01-01
Virtual reality (VR) technology has played an integral role in Mars Pathfinder mission operations. Using an automated machine vision algorithm, the 3D topography of the Martian surface was rapidly recovered from the stereo images captured by the lander camera to produce photo-realistic 3D models. An advanced interface was developed for visualization of, and interaction with, the virtual environment of the Pathfinder landing site for mission scientists at the Space Flight Operations Facility of the Jet Propulsion Laboratory. The VR aspect of the display allowed mission scientists to navigate on Mars while remaining here on Earth, thus improving their spatial awareness of the rock field that surrounds the lander. Measurements of positions, distances and angles could be easily extracted from the topographic models, providing valuable information for science analysis and mission planning. Moreover, the VR map of Mars has also been used to assist with the archiving and planning of activities for the Sojourner rover.
Three-dimensional compound comparison methods and their application in drug discovery.
Shin, Woong-Hee; Zhu, Xiaolei; Bures, Mark Gregory; Kihara, Daisuke
2015-07-16
Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information of a target receptor and that they are faster than structure-based methods. LBVS methods can be classified based on the complexity of ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods will be reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds, and computational speed.
Human Activity Modeling and Simulation with High Biofidelity
2013-01-01
Human activity Modeling and Simulation (M&S) plays an important role in simulation-based training and Virtual Reality (VR). However, human activity M...kinematics and motion mapping/creation; and (e) creation and replication of human activity in 3-D space with true shape and motion. A brief review is
Do Haptic Representations Help Complex Molecular Learning?
ERIC Educational Resources Information Center
Bivall, Petter; Ainsworth, Shaaron; Tibell, Lena A. E.
2011-01-01
This study explored whether adding a haptic interface (that provides users with somatosensory information about virtual objects by force and tactile feedback) to a three-dimensional (3D) chemical model enhanced students' understanding of complex molecular interactions. Two modes of the model were compared in a between-groups pre- and posttest…
Combinatorial Pharmacophore-Based 3D-QSAR Analysis and Virtual Screening of FGFR1 Inhibitors
Zhou, Nannan; Xu, Yuan; Liu, Xian; Wang, Yulan; Peng, Jianlong; Luo, Xiaomin; Zheng, Mingyue; Chen, Kaixian; Jiang, Hualiang
2015-01-01
The fibroblast growth factor/fibroblast growth factor receptor (FGF/FGFR) signaling pathway plays crucial roles in cell proliferation, angiogenesis, migration, and survival. Aberration in FGFRs correlates with several malignancies and disorders. FGFRs have proved to be attractive targets for therapeutic intervention in cancer, and it is of high interest to find FGFR inhibitors with novel scaffolds. In this study, a combinatorial three-dimensional quantitative structure-activity relationship (3D-QSAR) model was developed based on previously reported FGFR1 inhibitors with diverse structural skeletons. This model was evaluated for its prediction performance on a diverse test set containing 232 FGFR inhibitors, and it yielded an SD value of 0.75 pIC50 units with respect to the measured inhibition affinities and a Pearson's correlation coefficient R2 of 0.53. This result suggests that the combinatorial 3D-QSAR model could be used to search for new FGFR1 hit structures and predict their potential activity. To further evaluate the performance of the model, a decoy set validation was used to measure the efficiency of the model by calculating the EF (enrichment factor). Based on the combinatorial pharmacophore model, a virtual screening against the SPECS database was performed. Nineteen novel active compounds were successfully identified, which provide new chemical starting points for further structural optimization of FGFR1 inhibitors. PMID:26110383
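For readers unfamiliar with the two reported figures of merit, the snippet below shows one common way to compute them from predicted versus measured pIC50 values, reading the SD statistic as the root-mean-square deviation of predictions from measurements (an assumption); the numbers are illustrative, not the paper's test set.

```python
import numpy as np

# Illustrative predicted vs. measured pIC50 values (not the study's data).
measured  = np.array([6.2, 7.1, 5.8, 8.0, 6.9])
predicted = np.array([6.5, 6.8, 6.3, 7.5, 7.2])

sd = np.sqrt(np.mean((predicted - measured) ** 2))   # deviation in pIC50 units
r2 = np.corrcoef(measured, predicted)[0, 1] ** 2     # Pearson's R^2
print(f"SD = {sd:.2f} pIC50 units, R^2 = {r2:.2f}")
```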
Zhong, Chunyan; Guo, Yanli; Huang, Haiyun; Tan, Liwen; Wu, Yi; Wang, Wenting
2013-01-01
Objectives. To establish 3D models of coronary arteries (CA) and study their application in localization of CA segments identified by Transthoracic Echocardiography (TTE). Methods. Sectional images of the heart collected from the first CVH dataset and contrast CT data were used to establish 3D models of the CA. Virtual dissection was performed on the 3D models to simulate the conventional sections of TTE. Then, we used 2D ultrasound, speckle tracking imaging (STI), and 2D ultrasound plus 3D CA models to diagnose 170 patients and compare the results to coronary angiography (CAG). Results. 3D models of CA distinctly displayed both 3D structure and 2D sections of CA. This simulated TTE imaging in any plane and showed the CA segments that corresponded to 17 myocardial segments identified by TTE. The localization accuracy showed a significant difference between 2D ultrasound and 2D ultrasound plus 3D CA model in the severe stenosis group (P < 0.05) and in the mild-to-moderate stenosis group (P < 0.05). Conclusions. These innovative modeling techniques help clinicians identify the CA segments that correspond to myocardial segments typically shown in TTE sectional images, thereby increasing the accuracy of the TTE-based diagnosis of CHD. PMID:24348745
Embodied collaboration support system for 3D shape evaluation in virtual space
NASA Astrophysics Data System (ADS)
Okubo, Masashi; Watanabe, Tomio
2005-12-01
Collaboration mainly consists of two tasks: the individual task performed by each partner, and communication with each other. Both are important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies of both 3D shape evaluation and communication support in virtual space. The proposed system provides one viewpoint for each task: the view from behind the user's own avatar for smooth communication, and the view from the avatar's eyes for 3D shape evaluation. Switching between the viewpoints satisfies the requirements of both 3D shape evaluation and communication. The system basically consists of a PC, an HMD and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users restrict nonverbal communication. We therefore try to compensate for the loss of the partner avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. A sensory evaluation by paired comparison of 3D shapes in collaborative situations in virtual space and in real space, together with a questionnaire, was performed. The results demonstrate the effectiveness of InterActor's nodding in the collaborative situation.
A 3-point derivation of dominant tree height equations
Don C. Bragg
2011-01-01
This paper describes a new approach for deriving height-diameter (H-D) equations from limited information and a few assumptions about tree height. Only three data points are required to fit this model, which can be based on virtually any nonlinear function. These points are the height of a tree at diameter at breast height (d.b.h.), the predicted height of a 10-inch d....
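The paper's exact function is not given here; purely as an illustration of the three-point idea, the sketch below forces an assumed three-parameter Chapman-Richards-style height-diameter curve through three hypothetical (d.b.h., height) points and solves for its parameters.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical (d.b.h. in inches, height in feet) points; 4.5 ft = breast height.
points = [(0.1, 4.6), (10.0, 60.0), (30.0, 95.0)]

def height(d, a, b, c):
    """Assumed Chapman-Richards-style height-diameter curve."""
    return 4.5 + a * (1.0 - np.exp(-b * d)) ** c

def residuals(params):
    a, b, c = params
    return [height(d, a, b, c) - h for d, h in points]

# Starting values chosen near a plausible solution so the solver stays well behaved.
a, b, c = fsolve(residuals, x0=[90.0, 0.1, 1.5])
print(f"a = {a:.1f}, b = {b:.3f}, c = {c:.2f}")
```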
Discovery of new GSK-3β inhibitors through structure-based virtual screening.
Dou, Xiaodong; Jiang, Lan; Wang, Yanxing; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren
2018-01-15
Glycogen synthase kinase-3β (GSK-3β) is an attractive therapeutic target for human diseases, such as diabetes, cancer, neurodegenerative diseases, and inflammation. Structure-based virtual screening was therefore performed to identify novel scaffolds of GSK-3β inhibitors, and we observed that conserved water molecules of GSK-3β were suitable for virtual screening. Fourteen hits were found, and D1 (IC50 of 0.71 μM) was identified. Furthermore, the neuroprotective activity of D1-D3 was validated at the cellular level. 2D similarity searches were used to find derivatives of the most inhibitory compounds, and the enriched structure-activity relationship suggested that these skeletons are worthy of study as potent GSK-3β inhibitors.
Feng, Zhi-hong; Dong, Yan; Bai, Shi-zhu; Wu, Guo-feng; Bi, Yun-peng; Wang, Bo; Zhao, Yi-min
2010-01-01
The aim of this article was to demonstrate a novel approach to designing facial prostheses using the transplantation concept and computer-assisted technology for extensive, large, maxillofacial defects that cross the facial midline. The three-dimensional (3D) facial surface images of a patient and his relative were reconstructed using data obtained through optical scanning. Based on these images, the corresponding portion of the relative's face was transplanted to the patient's where the defect was located, which could not be rehabilitated using mirror projection, to design the virtual facial prosthesis without the eye. A 3D model of an artificial eye that mimicked the patient's remaining one was developed, transplanted, and fit onto the virtual prosthesis. A personalized retention structure for the artificial eye was designed on the virtual facial prosthesis. The wax prosthesis was manufactured through rapid prototyping, and the definitive silicone prosthesis was completed. The size, shape, and cosmetic appearance of the prosthesis were satisfactory and matched the defect area well. The patient's facial appearance was recovered perfectly with the prosthesis, as determined through clinical evaluation. The optical 3D imaging and computer-aided design/computer-assisted manufacturing system used in this study can design and fabricate facial prostheses more precisely than conventional manual sculpturing techniques. The discomfort generally associated with such conventional methods was decreased greatly. The virtual transplantation used to design the facial prosthesis for the maxillofacial defect, which crossed the facial midline, and the development of the retention structure for the eye were both feasible.
Augmented reality-guided artery-first pancreatico-duodenectomy.
Marzano, Ettore; Piardi, Tullio; Soler, Luc; Diana, Michele; Mutter, Didier; Marescaux, Jacques; Pessaux, Patrick
2013-11-01
Augmented Reality (AR) in surgery consists of the fusion of synthetic computer-generated images (a 3D virtual model), obtained from the preoperative medical imaging work-up, with real-time patient images, with the aim of visualizing unapparent anatomical details. The potential of AR navigation as a tool to improve the safety of surgical dissection is presented in a case of pancreatico-duodenectomy (PD). A 77-year-old male patient underwent an AR-assisted PD. The 3D virtual anatomical model was obtained from a thoraco-abdominal CT scan using customary software (VR-RENDER®, IRCAD). The virtual model was superimposed onto the operative field using an exoscope (VITOM®, Karl Storz, Tüttlingen, Germany) and several visible landmarks (inferior vena cava, left renal vein, aorta, superior mesenteric vein, inferior margin of the pancreas). A computer scientist manually registered the virtual and real images in real time using a video mixer (MX 70; Panasonic, Secaucus, NJ). Dissection of the superior mesenteric artery and the hanging maneuver were performed under AR guidance along the hanging plane. AR allowed precise and safe recognition of all the important vascular structures. Operative time was 360 min. AR display and fine registration were performed within 6 min. The postoperative course was uneventful. The pathology was positive for ampullary adenocarcinoma; the final stage was pT1N0 (0/43 retrieved lymph nodes) with clear surgical margins. AR is a valuable navigation tool that can enhance the ability to achieve a safe surgical resection during PD.
NASA Astrophysics Data System (ADS)
Feld, R.; Slob, E. C.; Thorbecke, J.
2015-12-01
Creating virtual sources at locations where physical receivers have measured a response is known as seismic interferometry. A much appreciated benefit of interferometry is its independence of the actual source locations. The use of ambient noise as the actual source is therefore not uncommon in this field. Ambient noise can be commercial noise, such as mobile phone signals. For GPR this can be useful in cases where it is not possible to place a source, for instance when it is prohibited by laws and regulations. A mono-static GPR antenna can measure ambient noise. Interferometry by auto-correlation (AC) places a virtual source at this antenna's position, without actually transmitting anything. This can be used for pavement damage inspection. Earlier work showed very promising results with 2D numerical models of damaged pavement. 1D and 2D heterogeneities were compared, both modelled in a 2D pavement world. In a 1D heterogeneous model energy leaks away to the sides, whereas in a 2D heterogeneous model rays can reflect and therefore still add to the signal reconstruction (see illustration). In the first case the number of stationary points is strictly limited, while in the other case the number of stationary points is very large. We extend these models to a 3D world and optimise an experimental configuration. The illustration originates from the journal article under submission 'Non-destructive pavement damage inspection by mono-static GPR without transmitting anything' by R. Feld, E.C. Slob, and J.W. Thorbecke. (a) 2D heterogeneous pavement model with three irregular-shaped misalignments between the base and subbase layer (marked by arrows). Mono-antenna B-scan positions are shown schematically. (b) Ideal output: a real source at the receiver's position. The difference w.r.t. the trace found in the middle is shown. (c) AC output: a virtual source at the receiver's position. There is a clear overlap with the ideal output.
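A toy numerical illustration of interferometry by auto-correlation (not the authors' modelling code): the auto-correlation of a synthetic passive trace recovers the two-way traveltime of a reflection, as if a source had been placed at the receiver. The trace, delay and reflectivity below are invented for the example.

```python
import numpy as np

# Synthetic passive trace: band-unlimited noise plus a delayed, scaled copy
# standing in for a subsurface reflection recorded by the mono-static antenna.
rng = np.random.default_rng(1)
n = 4000
noise = rng.standard_normal(n)
delay, refl = 300, 0.4                    # two-way delay in samples, reflectivity (assumed)
trace = noise.copy()
trace[delay:] += refl * noise[:-delay]

# Auto-correlation: the peak at positive lag ~ delay mimics the reflection response
# of a virtual source placed at the receiver position.
ac = np.correlate(trace, trace, mode="full")[n - 1:]   # keep non-negative lags
ac /= ac[0]
print("peak near the two-way traveltime:", np.argmax(ac[50:]) + 50)  # ~ delay, skipping zero lag
```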
Swat, Maciej H; Thomas, Gilberto L; Shirinifard, Abbas; Clendenon, Sherry G; Glazier, James A
2015-01-01
Tumor cells and structure both evolve due to heritable variation of cell behaviors and selection over periods of weeks to years (somatic evolution). Micro-environmental factors exert selection pressures on tumor-cell behaviors, which influence both the rate and direction of evolution of specific behaviors, especially the development of tumor-cell aggression and resistance to chemotherapies. In this paper, we present, step-by-step, the development of a multi-cell, virtual-tissue model of tumor somatic evolution, simulated using the open-source CompuCell3D modeling environment. Our model includes essential cell behaviors, microenvironmental components and their interactions. Our model provides a platform for exploring selection pressures leading to the evolution of tumor-cell aggression, showing that emergent stratification into regions with different cell survival rates drives the evolution of less cohesive cells with lower levels of cadherins and higher levels of integrins. Such reduced cohesivity is a key hallmark in the progression of many types of solid tumors.
NASA Astrophysics Data System (ADS)
Palakurthi, Nikhil Kumar; Ghia, Urmila; Comer, Ken
2013-11-01
Capillary penetration of liquid through fibrous porous media is important in many applications such as printing, drug delivery patches, sanitary wipes, and performance fabrics. Historically, capillary transport (with a distinct liquid propagating front) in porous media is modeled using capillary-bundle theory. However, it is not clear if the capillary model (Washburn equation) describes the fluid transport in porous media accurately, as it assumes uniformity of pore sizes in the porous medium. The present work investigates the limitations of the applicability of the capillary model by studying liquid penetration through virtual fibrous media with uniform and non-uniform pore-sizes. For the non-uniform-pore fibrous medium, the effective capillary radius of the fibrous medium was estimated from the pore-size distribution curve. Liquid penetration into the 3D virtual fibrous medium at micro-scale was simulated using OpenFOAM, and the numerical results were compared with the Washburn-equation capillary-model predictions. Preliminary results show that the Washburn equation over-predicts the height rise in the early stages (purely inertial and visco-inertial stages) of capillary transport.
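For reference, the capillary-bundle baseline used in the comparison is the classic Washburn expression; a minimal evaluation is sketched below with generic water-like properties and an assumed effective capillary radius, not the study's values.

```python
import numpy as np

# Classic Washburn prediction: penetration length
#   L(t) = sqrt(r_eff * gamma * cos(theta) * t / (2 * mu)),
# with r_eff the effective capillary radius estimated from the pore-size distribution.
gamma, mu, theta = 0.072, 1.0e-3, np.deg2rad(30.0)   # N/m, Pa*s, contact angle (assumed)
r_eff = 25e-6                                        # effective radius, m (assumed)
t = np.linspace(0.0, 1.0, 101)                       # s
L = np.sqrt(r_eff * gamma * np.cos(theta) * t / (2.0 * mu))
print(f"predicted penetration after 1 s: {L[-1] * 1e3:.1f} mm")
```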
A computer-based training system combining virtual reality and multimedia
NASA Technical Reports Server (NTRS)
Stansfield, Sharon A.
1993-01-01
Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.
Expeditious illustration of layer-cake models on and above a tactile surface
NASA Astrophysics Data System (ADS)
Lopes, Daniel Simões; Mendes, Daniel; Sousa, Maurício; Jorge, Joaquim
2016-05-01
Too often, illustrating and visualizing 3D geological concepts is done by sketching in 2D media, which can limit how well initial concepts are drawn. Here, the potential of expeditious geological modeling by hand gestures is explored. A spatial interaction system was developed to enable rapid modeling, editing, and exploration of 3D layer-cake objects. User interactions are acquired with motion capture and touch-screen technologies. Virtual immersion is provided by stereoscopic technology. The novelty consists in performing expeditious modeling of coarse geological features with only a limited set of hand gestures. Results from usability studies show that the proposed system is more efficient than a windows-icons-menus-pointer modeling application.
Development of a 3D GIS and its application to karst areas
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zhou, Wanfang
2008-05-01
There is a growing interest in modeling and analyzing karst phenomena in three dimensions. This paper integrates geology, groundwater hydrology, geographic information system (GIS), database management system (DBMS), visualization and data mining to study karst features in Huaibei, China. The 3D geo-objects retrieved from the karst area are analyzed and mapped into different abstract levels. The spatial relationships among the objects are constructed by a dual-linker. The shapes of the 3D objects and the topological models with attributes are stored and maintained in the DBMS. Spatial analysis was then used to integrate the data in the DBMS and the 3D model to form a virtual reality (VR) to provide analytical functions such as distribution analysis, correlation query, and probability assessment. The research successfully implements 3D modeling and analyses in the karst area, and meanwhile provides an efficient tool for government policy-makers to set out restrictions on water resource development in the area.
Heritage House Maintenance Using 3d City Model Application Domain Extension Approach
NASA Astrophysics Data System (ADS)
Mohd, Z. H.; Ujang, U.; Liat Choon, T.
2017-11-01
The heritage house is part of the architectural heritage of Malaysia and is highly valued. Many efforts have been made by the Department of Heritage to preserve these houses, such as monitoring their damage problems, which may be caused by wood decay, roof leakage and exfoliation of the walls. One of the initiatives for maintaining and documenting heritage houses is three-dimensional (3D) technology. 3D city models are now widely used by researchers for management and analysis. CityGML is a standard commonly used to exchange, store and manage both the geometric and the semantic information of virtual 3D city models. It also represents 3D models at multiple scales in five levels of detail (LoDs), each of which serves distinct functions. A CityGML Application Domain Extension (ADE) was recently introduced and can be used for monitoring damage problems and recording the number of inhabitants of a house.
Implementation of augmented reality to models sultan deli
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Lumbantobing, N. P.; Siregar, B.; Rahmat, R. F.; Andayani, U.
2018-03-01
Augmented reality is a technology that can provide visualization in the form of 3D virtual models. Using augmented reality technology, image-based modeling can be applied to reconstruct photographs of the Sultan of Deli at Istana Maimun into a three-dimensional model. This is needed because the Sultan of Deli, one of the important figures in the history of the development of the city of Medan, is little known by the public, as the existing images of the Deli Sultanate are unclear and very old. To achieve this goal, augmented reality applications are combined with image-processing methods to produce 3D models through several toolkits. The output is visitors' photographs of Maimun Palace augmented with the 3D model of the Sultan of Deli, with marker detection at distances of 20-60 cm, making it easy for the public to recognize the Sultan of Deli who once ruled at Maimun Palace.
Computational techniques to enable visualizing shapes of objects of extra spatial dimensions
NASA Astrophysics Data System (ADS)
Black, Don Vaughn, II
Envisioning extra dimensions beyond the three of common experience is a daunting challenge for three-dimensional observers, because intuition relies on experience gained in a three-dimensional environment. Gaining experience with virtual four-dimensional objects and virtual 3-manifolds in four-space on a personal computer may provide the basis for an intuitive grasp of four dimensions. To enable such a capability, it is first necessary to devise and implement a computationally tractable method to visualize, explore, and manipulate objects of dimension beyond three on the personal computer. This dissertation describes a technology for converting a representation of higher-dimensional models into a format that can be displayed in real time on the graphics cards of many off-the-shelf personal computers. As a result, an opportunity has been created to experience the shape of four-dimensional objects on the desktop computer. The ultimate goal has been to give the user a tangible and memorable experience with mathematical models of four-dimensional objects, such that the model can be seen from any user-selected vantage point. Using a 4D GUI, an arbitrary convex hull or 3D silhouette of the 4D model can be rotated, panned, scrolled, and zoomed until a suitable dimensionally reduced view, or aspect, is obtained. The 4D GUI then allows the user to manipulate a 3-flat hyperplane cutting tool to slice the model at an arbitrary orientation and position and to extract, or "pluck", an embedded 3D slice or "aspect" from the embedding four-space. This plucked 3D aspect can be viewed from all angles in a conventional 3D viewer using three point-of-view (POV) viewports, and optionally exported to a third-party CAD viewer for further manipulation. Plucking and manipulating the aspect provides a tangible experience for the end user, in the same way that a 3D computer-aided design viewing and manipulation tool does for the engineer or a 3D video game does for the nascent student.
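A compact sketch of the "pluck a 3D slice" operation for the simplest 4D object, a unit tesseract: intersect its edges with the cutting hyperplane w = c and keep the xyz coordinates of the crossings. This is an illustrative reimplementation of the general idea, not the dissertation's rendering pipeline.

```python
import itertools
import numpy as np

verts = np.array(list(itertools.product([0.0, 1.0], repeat=4)))   # 16 tesseract vertices
edges = [(i, j) for i in range(16) for j in range(i + 1, 16)
         if np.sum(verts[i] != verts[j]) == 1]                     # 32 edges (Hamming distance 1)

def slice_w(c):
    """3D cross-section of the unit tesseract by the hyperplane w = c."""
    pts = []
    for i, j in edges:
        w0, w1 = verts[i][3], verts[j][3]
        if (w0 - c) * (w1 - c) < 0:                 # edge crosses the hyperplane
            t = (c - w0) / (w1 - w0)
            p = verts[i] + t * (verts[j] - verts[i])
            pts.append(p[:3])                       # drop w, keep the 3D point
        elif w0 == c and w1 == c:                   # edge lies inside the hyperplane
            pts.extend([verts[i][:3], verts[j][:3]])
    return np.unique(np.round(pts, 9), axis=0)

print(slice_w(0.5).shape)   # 8 points: the slice at w = 0.5 is a unit cube
```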
Pieczywek, Piotr M; Zdunek, Artur
2017-10-18
A hybrid model based on a mass-spring system methodology coupled with the discrete element method (DEM) was implemented to simulate the deformation of cellular structures in 3D. Models of individual cells were constructed from particles that cover the surfaces of the cell walls and are interconnected in a triangular mesh network by viscoelastic springs. The spatial arrangement of the cells required to construct a virtual tissue was obtained using Poisson-disc sampling and Voronoi tessellation in 3D space. Three structural features were included in the model: viscoelastic material of cell walls, linearly elastic interior of the cells (simulating compressible liquid) and a gas phase in the intercellular spaces. The response of the models to an external load was demonstrated during quasi-static compression simulations. The sensitivity of the model was investigated at fixed compression parameters with variable tissue porosity, cell size and cell wall properties, such as thickness and Young's modulus, and a stiffness of the cell interior that simulated turgor pressure. The extent of the agreement between the simulation results and other published models is discussed. The model demonstrated the significant influence of tissue structure on micromechanical properties and allowed for the interpretation of the compression test results with respect to changes occurring in the structure of the virtual tissue. During compression, virtual structures composed of smaller cells produced higher reaction forces and were therefore stiffer than structures with large cells. The increase in the number of intercellular spaces (porosity) resulted in a decrease in reaction forces. The numerical model was capable of simulating the quasi-static compression experiment and reproducing the strain stiffening observed experimentally. Stress accumulation at the edges of the cell walls where three cells meet suggests that cell-to-cell debonding and crack propagation through the contact edge of neighboring cells is one of the most prevalent ways for tissue to rupture.
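A minimal sketch of the viscoelastic spring element that such mass-spring cell-wall meshes typically use (an elastic spring and a damper in parallel); the function name and parameter values are illustrative and not those of the cited model.

```python
import numpy as np

def spring_force(x_i, x_j, v_i, v_j, rest_len, k=1.0, c=0.1):
    """Force on particle i from the viscoelastic spring connecting i and j
    (Kelvin-Voigt element: elastic term k plus viscous term c)."""
    d = x_j - x_i
    length = np.linalg.norm(d)
    direction = d / length                              # assumes non-degenerate spring
    stretch = length - rest_len                         # elongation beyond rest length
    rel_speed = np.dot(v_j - v_i, direction)            # rate of elongation
    return (k * stretch + c * rel_speed) * direction    # elastic + viscous contributions

# Example: a spring stretched beyond its rest length pulls particle i toward j.
f = spring_force(np.zeros(3), np.array([1.2, 0.0, 0.0]),
                 np.zeros(3), np.zeros(3), rest_len=1.0)
print(f)   # approximately [0.2, 0, 0]
```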
A Historical Timber Frame Model for Diagnosis and Documentation Before Building Restoration
NASA Astrophysics Data System (ADS)
Koehl, M.; Viale, A.; Reeb, S.
2013-09-01
The aim of the project described in this paper was to define a four-level timber frame survey mode for a historical building: the so-called "Andlau's Seigniory", Alsace, France. This historical building (domain) was built in the late 16th century and is now being renovated to become a heritage interpretation centre. The measurement methods used combine total station measurements, photogrammetry and 3D terrestrial laser scanning. Different modelling workflows were tested and compared according to the data acquisition method, but also according to the characteristics of the reconstructed model in terms of accuracy and level of detail. 3D geometric modelling of the entire structure was performed, with the level of detail adapted to the needs. The 3D timber framework now exists in different versions, from a theoretical, geometric one up to a very detailed one that potentially allows measurement and evaluation of deformation over time. The virtually generated models, involving archaeologists, architects, historians and specialists in historical crafts, are intended to be used during the four stages of the project: (i) knowledge of the current state, needed for diagnosis and understanding of former construction techniques; (ii) preparation and evaluation of restoration steps; (iii) knowledge and documentation concerning the archaeological object; (iv) transmission and dissemination of knowledge through the implementation of museum animations. The generated models also include documentation of the site in the form of virtual tours created from panoramic photographs taken before and during the restoration works. Finally, the timber framework model was structured and integrated into a 3D GIS, where descriptive and complementary digital documents could be associated with it. Both offer tools supporting diagnosis, understanding of the structure, knowledge dissemination, documentation and the creation of educational activities. The integration of these measurements in a historical information system will lead to an interactive model and a digital visual display unit for consultation. It will allow any audience to interactively explore the art of constructing a Renaissance structure, with detailed photos, descriptive texts and graphics. The 3D digital model of the framework will be used directly in the interpretation path, within the space dedicated to the "Seigniory" of Andlau. An interactive touch screen will be installed, incorporating several levels of interaction (playful, evocative and educational). In a virtual way, it will present the different stages of building a wooden framework and clarify the art of construction.
Reduced Mental Load in Learning a Motor Visual Task with Virtual 3D Method
ERIC Educational Resources Information Center
Dan, A.; Reiner, M.
2018-01-01
Distance learning is expanding rapidly, fueled by the novel technologies for shared recorded teaching sessions on the Web. Here, we ask whether 3D stereoscopic (3DS) virtual learning environment teaching sessions are more compelling than typical two-dimensional (2D) video sessions and whether this type of teaching results in superior learning. The…
Employing Virtual Humans for Education and Training in X3D/VRML Worlds
ERIC Educational Resources Information Center
Ieronutti, Lucio; Chittaro, Luca
2007-01-01
Web-based education and training provides a new paradigm for imparting knowledge; students can access the learning material anytime by operating remotely from any location. Web3D open standards, such as X3D and VRML, support Web-based delivery of Educational Virtual Environments (EVEs). EVEs have a great potential for learning and training…
Design of Learning Spaces in 3D Virtual Worlds: An Empirical Investigation of "Second Life"
ERIC Educational Resources Information Center
Minocha, Shailey; Reeves, Ahmad John
2010-01-01
"Second Life" (SL) is a three-dimensional (3D) virtual world, and educational institutions are adopting SL to support their teaching and learning. Although the question of how 3D learning spaces should be designed to support student learning and engagement has been raised among SL educators and designers, there is hardly any guidance or…
Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis
2016-01-01
Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis. PMID:27843356
2D and 3D Traveling Salesman Problem
ERIC Educational Resources Information Center
Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt
2011-01-01
When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…
Research on virtual Guzheng based on Kinect
NASA Astrophysics Data System (ADS)
Li, Shuyao; Xu, Kuangyi; Zhang, Heng
2018-05-01
There is much research on virtual instruments, but little on classical Chinese instruments, and the techniques used are very limited. This paper uses Unity 3D and a Kinect camera, combined with virtual reality technology and a gesture recognition method, to design a virtual playing system, with a demonstration function, for the Guzheng, a traditional Chinese musical instrument. In this work, the real scene captured by the Kinect camera is fused with the virtual Guzheng in Unity 3D. The depth data obtained by the Kinect and the Suzuki85 algorithm are used to determine the position of the user's right hand relative to the virtual Guzheng, and the user's hand gestures are recognized by the Kinect.
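For context, the contour-extraction step mentioned above (the Suzuki 1985 border-following algorithm) is what OpenCV's findContours implements. The sketch below shows one plausible way to locate a hand in a Kinect depth frame by depth thresholding followed by contour detection; the depth window and the assumption that the largest blob is the hand are illustrative choices, not the authors' method.

```python
import cv2
import numpy as np

def hand_position(depth_frame, near_mm=500, far_mm=900):
    """Estimate the hand centroid from a Kinect depth frame (values in millimetres)."""
    # Keep only pixels in the assumed hand depth range.
    mask = cv2.inRange(depth_frame.astype(np.uint16), near_mm, far_mm)
    # cv2.findContours implements the Suzuki-Abe (1985) border-following algorithm.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)        # largest blob assumed to be the hand
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) centroid in pixels
```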
ERIC Educational Resources Information Center
Liu, Chang; Zhong, Ying; Ozercan, Sertac; Zhu, Qing
2013-01-01
This paper presents a template-based solution to overcome technical barriers non-technical computer end users face when developing functional learning environments in three-dimensional virtual worlds (3DVW). "iVirtualWorld," a prototype of a platform-independent 3DVW creation tool that implements the proposed solution, facilitates 3DVW…
A web system of virtual morphometric globes for Mars and the Moon
NASA Astrophysics Data System (ADS)
Florinsky, I. V.; Garov, A. S.; Karachevtseva, I. P.
2018-09-01
We developed a web system of virtual morphometric globes for Mars and the Moon. As the initial data, we used 15-arc-minute gridded global digital elevation models (DEMs) extracted from the Mars Orbiter Laser Altimeter (MOLA) and the Lunar Orbiter Laser Altimeter (LOLA) gridded archives. We derived global digital models of sixteen morphometric variables including horizontal, vertical, minimal, and maximal curvatures, as well as catchment area and topographic index. The morphometric models were integrated into the web system developed as a distributed application consisting of a client front-end and a server back-end. The following main functions are implemented in the system: (1) selection of a morphometric variable; (2) two-dimensional visualization of a calculated global morphometric model; (3) 3D visualization of a calculated global morphometric model on the sphere surface; (4) change of the globe scale; and (5) globe rotation by an arbitrary angle. Free, real-time web access to the system is provided. The web system of virtual morphometric globes can be used for geological and geomorphological studies of Mars and the Moon at the global, continental, and regional scales.
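As a simplified illustration of deriving morphometric variables from a gridded DEM, the sketch below computes slope and a Laplacian-based curvature proxy with finite differences. It assumes a planar grid with uniform spacing, which is only an approximation for the global, spherical grids used in the paper; variable names are illustrative.

```python
import numpy as np

def slope_and_laplacian(dem, cell_size):
    """Finite-difference slope (degrees) and Laplacian curvature proxy for a gridded DEM."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)                  # partial derivatives of elevation
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))       # steepest-descent slope angle
    d2z_dy2 = np.gradient(dz_dy, cell_size, axis=0)
    d2z_dx2 = np.gradient(dz_dx, cell_size, axis=1)
    laplacian = d2z_dx2 + d2z_dy2                               # simple curvature proxy
    return slope, laplacian
```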
DockoMatic 2.0: high throughput inverse virtual screening and homology modeling.
Bullock, Casey; Cornia, Nic; Jacob, Reed; Remm, Andrew; Peavey, Thomas; Weekes, Ken; Mallory, Chris; Oxford, Julia T; McDougal, Owen M; Andersen, Timothy L
2013-08-26
DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly graphical user interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to (1) conduct high throughput inverse virtual screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently set up, start, and manage IVS experiments through the DockoMatic GUI by specifying receptor(s), ligand(s), grid parameter file(s), and the docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by DockoMatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. The TIM wizard provides an interface that accesses the basic local alignment search tool (BLAST) and MODELER programs and guides the user through the steps necessary to easily and efficiently create 3D homology models of biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third-party programs. DockoMatic is a free, comprehensive molecular docking software program for all levels of scientists in both research and education.
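The sketch below illustrates the kind of batch job that inverse virtual screening involves: docking a single ligand against many receptors by driving the AutoDock Vina command-line tool. It is not DockoMatic code or its internal API; the file layout, the 'configs' mapping and the output naming are assumptions.

```python
import subprocess
from pathlib import Path

def inverse_virtual_screen(ligand, receptors, configs, out_dir):
    """Dock one ligand (PDBQT) against many receptors, one Vina run per receptor.

    'configs' maps each receptor file to its grid parameter (config) file.
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for receptor in receptors:
        out_file = out_dir / f"{Path(receptor).stem}__{Path(ligand).stem}.pdbqt"
        subprocess.run(
            ["vina",
             "--receptor", receptor,
             "--ligand", ligand,
             "--config", configs[receptor],
             "--out", str(out_file)],
            check=True)
```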
NASA Astrophysics Data System (ADS)
Mastmeyer, Andre; Wilms, Matthias; Handels, Heinz
2018-03-01
Virtual reality (VR) training simulators for liver needle insertion in the hepatic area of breathing virtual patients often require 4D image data acquisitions as a prerequisite. Here, first, a population-based 4D atlas of breathing virtual patients is built; second, the need for a dose-relevant or expensive 4D CT or MRI acquisition for a new patient is mitigated by warping the mean atlas motion. The breakthrough contribution of this work is the construction and reuse of population-based, learned 4D motion models.
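A hedged sketch of the core idea of reusing a mean motion model: warping a 3D volume with a precomputed displacement field using trilinear interpolation. The pull-back convention and the use of scipy's map_coordinates are illustrative choices, not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(volume, displacement):
    """Warp a 3D volume by a displacement field.

    'displacement' has shape (3, Z, Y, X) and gives voxel offsets along each axis;
    the warped value at x is sampled from the input at x + displacement(x).
    """
    grid = np.indices(volume.shape).astype(np.float64)   # identity sampling grid, shape (3, Z, Y, X)
    sample_coords = grid + displacement                  # pull-back warp
    return map_coordinates(volume, sample_coords, order=1, mode='nearest')
```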
Routine clinical application of virtual reality in abdominal surgery.
Sampogna, Gianluca; Pugliese, Raffaele; Elli, Marco; Vanzulli, Angelo; Forgione, Antonello
2017-06-01
The advantages of 3D reconstruction, immersive virtual reality (VR) and 3D printing in abdominal surgery have been enunciated for many years, but even today their application in routine clinical practice is almost nil. We investigate their feasibility, user appreciation and clinical impact. Fifteen patients undergoing pancreatic, hepatic or renal surgery were studied by creating a 3D reconstruction of the target anatomy. An immersive VR environment was then developed to import the 3D models, and some details of the 3D scene were printed. All phases of our workflow employed open-source software and low-cost hardware, easily implementable by other surgical services. A qualitative evaluation of the three approaches was performed by 20 surgeons, who filled in a specific questionnaire regarding a clinical case for each organ considered. Preoperative surgical planning and intraoperative guidance were feasible for all patients included in the study. The vast majority of surgeons interviewed rated their quality and usefulness as very good. Despite the extra time, cost and effort necessary to implement these systems, the benefits shown by the analysis of the questionnaires support investing more resources in training physicians to adopt these technologies routinely, even if further and larger studies are still needed.
Mobile viewer system for virtual 3D space using infrared LED point markers and camera
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-09-01
The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit-type parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is to build a display system that works as follows: when users see the real world through the mobile viewer, the display system shows them virtual 3D images floating in the air, and the observers can touch and interact with these floating images, much as children play with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax-barrier 3D display. Here the authors discuss the method for measuring the pose of the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest approach in that it uses a single camera rather than a stereo camera, and the results of our viewer system.
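One plausible way to implement single-camera pose estimation from known LED marker positions is the perspective-n-point (PnP) formulation, sketched below with OpenCV. The marker layout, camera intrinsics and detected image points are illustrative placeholders, not the geometric analysis derived in the paper.

```python
import cv2
import numpy as np

# Known 3D positions of the infrared LED markers on the viewer (object frame, metres)
# and their detected 2D image positions (pixels); all values are illustrative.
object_points = np.array([[0.00, 0.00, 0.00],
                          [0.10, 0.00, 0.00],
                          [0.00, 0.10, 0.00],
                          [0.10, 0.10, 0.02]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],
                         [400.0, 238.0],
                         [322.0, 170.0],
                         [401.0, 168.0]], dtype=np.float64)

camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)   # assume an undistorted camera for this sketch

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation of the viewer expressed in the camera frame
    print("viewer position (camera frame):", tvec.ravel())
```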
Using voice input and audio feedback to enhance the reality of a virtual experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miner, N.E.
1994-04-01
Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
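The propagation component rests on finite-difference time-domain (FDTD) schemes; the sketch below is a deliberately minimal 2D acoustic FDTD pressure solver (homogeneous medium, periodic boundaries, single point source) intended only to illustrate the update structure, not the massively parallel 3D simulators described here. Grid sizes, the source wavelet and all parameter values are illustrative.

```python
import numpy as np

def fdtd_acoustic_2d(nx=200, ny=200, nt=400, dx=1.0, c=340.0):
    """Minimal 2D FDTD acoustic solver: second-order wave equation on a square grid."""
    dt = 0.9 * dx / (c * np.sqrt(2.0))            # CFL-stable time step
    courant2 = (c * dt / dx) ** 2
    p_prev = np.zeros((ny, nx))
    p = np.zeros((ny, nx))
    f0 = 25.0                                     # source centre frequency (Hz)
    for n in range(nt):
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
        p_next = 2.0 * p - p_prev + courant2 * lap
        # Ricker wavelet injected at the grid centre.
        t = n * dt
        arg = (np.pi * f0 * (t - 0.04)) ** 2
        p_next[ny // 2, nx // 2] += (1.0 - 2.0 * arg) * np.exp(-arg)
        p_prev, p = p, p_next
    return p
```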
Hammoudeh, Jeffrey A.; Howell, Lori K.; Boutros, Shadi; Scott, Michelle A.
2015-01-01
Background: Orthognathic surgery has traditionally been performed using stone model surgery. This involves translating desired clinical movements of the maxilla and mandible into stone models that are then cut and repositioned into class I occlusion from which a splint is generated. Model surgery is an accurate and reproducible method of surgical correction of the dentofacial skeleton in cleft and noncleft patients, albeit considerably time-consuming. With the advent of computed tomography scanning, 3D imaging and virtual surgical planning (VSP) have gained a foothold in orthognathic surgery with VSP rapidly replacing traditional model surgery in many parts of the country and the world. What has yet to be determined is whether the application and feasibility of virtual model surgery is at a point where it will eliminate the need for traditional model surgery in both the private and academic setting. Methods: Traditional model surgery was compared with VSP splint fabrication to determine the feasibility of use and accuracy of application in orthognathic surgery within our institution. Results: VSP was found to generate acrylic splints of equal quality to model surgery splints in a fraction of the time. Drawbacks of VSP splint fabrication are the increased cost of production and certain limitations as it relates to complex craniofacial patients. Conclusions: It is our opinion that virtual model surgery will displace and replace traditional model surgery as it will become cost and time effective in both the private and academic setting for practitioners providing orthognathic surgical care in cleft and noncleft patients. PMID:25750846
NASA Astrophysics Data System (ADS)
Cawood, Adam J.; Bond, Clare E.
2018-01-01
Stratigraphic influence on structural style and strain distribution in deformed sedimentary sequences is well established in 2D models of mechanical stratigraphy. In this study we attempt to refine existing models of stratigraphic-structure interaction by examining outcrop-scale 3D variations in sedimentary architecture and their effects on subsequent deformation. At Monkstone Point, Pembrokeshire, SW Wales, digital mapping and virtual scanline data from a high-resolution virtual outcrop have been combined with field observations, sedimentary logs and thin-section analysis. Results show that significant variation in strain partitioning is controlled by changes, at a scale of tens of metres, in sedimentary architecture within Upper Carboniferous fluvio-deltaic deposits. Coupled versus uncoupled deformation of the sequence is determined by the composition and lateral continuity of mechanical units and unit interfaces. Where the sedimentary sequence is characterized by gradational changes in composition and grain size, we find that deformation structures are best characterized by patterns of distributed strain. In contrast, distinct compositional changes, both vertically and in laterally equivalent deposits, result in highly partitioned deformation and strain. The mechanical stratigraphy of the study area is inherently 3D in nature, due to lateral and vertical compositional variability. Consideration should be given to 3D variations in mechanical stratigraphy, such as those outlined here, when predicting subsurface deformation in multilayers.
Development of a Virtual Museum Including a 4d Presentation of Building History in Virtual Reality
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Tschirschwitz, F.; Deggim, S.
2017-02-01
In the last two decades the definition of the term "virtual museum" has changed due to rapid technological developments. Using today's 3D technologies, a virtual museum is no longer just a presentation of collections on the Internet or a virtual tour of an exhibition using panoramic photography. On the one hand, a virtual museum should enhance a museum visitor's experience by providing access to additional materials for review and deeper knowledge either before or after the real visit. On the other hand, a virtual museum should also be usable as teaching material in the context of museum education. The laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg has developed a virtual museum (VM) of the museum "Alt-Segeberger Bürgerhaus", a historic town house. The VM offers two options for visitors wishing to explore the museum without travelling to the city of Bad Segeberg, Schleswig-Holstein, Germany: option (a), an interactive computer-based tour in which visitors explore the exhibition and collect information of interest, or option (b), immersion in 3D virtual reality with the HTC Vive Virtual Reality System.
Männel, Barbara; Jaiteh, Mariama; Zeifman, Alexey; Randakova, Alena; Möller, Dorothee; Hübner, Harald; Gmeiner, Peter; Carlsson, Jens
2017-10-20
Functionally selective ligands stabilize conformations of G protein-coupled receptors (GPCRs) that induce a preference for signaling via a subset of the intracellular pathways activated by the endogenous agonists. The possibility to fine-tune the functional activity of a receptor provides opportunities to develop drugs that selectively signal via pathways associated with a therapeutic effect and avoid those causing side effects. Animal studies have indicated that ligands displaying functional selectivity at the D2 dopamine receptor (D2R) could be safer and more efficacious drugs against neuropsychiatric diseases. In this work, computational design of functionally selective D2R ligands was explored using structure-based virtual screening. Molecular docking of known functionally selective ligands to a D2R homology model indicated that such compounds were anchored by interactions with the orthosteric site and extended into a common secondary pocket. A tailored virtual library with close to 13 000 compounds bearing 2,3-dichlorophenylpiperazine, a privileged orthosteric scaffold, connected to diverse chemical moieties via a linker was docked to the D2R model. Eighteen top-ranked compounds that occupied both the orthosteric and allosteric site were synthesized, leading to the discovery of 16 partial agonists. A majority of the ligands had comparable maximum effects in the G protein and β-arrestin recruitment assays, but a subset displayed preference for a single pathway. In particular, compound 4 stimulated β-arrestin recruitment (EC50 = 320 nM, Emax = 16%) but had no detectable G protein signaling. The use of structure-based screening and virtual libraries to discover GPCR ligands with tailored functional properties will be discussed.
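As a rough illustration of how a tailored virtual library built from a privileged scaffold, a linker and a terminal moiety might be enumerated, the sketch below concatenates SMILES fragments around the 1-(2,3-dichlorophenyl)piperazine core and validates them with RDKit. The fragment lists and SMILES bookkeeping are illustrative assumptions, not the roughly 13 000-compound library used in the study.

```python
from rdkit import Chem

# Scaffold written as a SMILES prefix with the distal piperazine nitrogen left open,
# so a linker and a terminal moiety can be appended before the ring is closed.
SCAFFOLD_PREFIX = "Clc1cccc(N2CCN("
SCAFFOLD_SUFFIX = ")CC2)c1Cl"
LINKERS = ["CCC", "CCCC", "CCOCC"]                       # illustrative linker fragments
MOIETIES = ["c3ccccc3", "c3ccc4[nH]ccc4c3", "C3CCCCC3"]  # illustrative terminal moieties

library = []
for linker in LINKERS:
    for moiety in MOIETIES:
        mol = Chem.MolFromSmiles(SCAFFOLD_PREFIX + linker + moiety + SCAFFOLD_SUFFIX)
        if mol is not None:                              # keep only chemically valid structures
            library.append(Chem.MolToSmiles(mol))

print(len(library), "candidate ligands enumerated for docking")
```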
Motion-Capture-Enabled Software for Gestural Control of 3D Models
NASA Technical Reports Server (NTRS)
Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony
2012-01-01
Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.
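A minimal sketch of such a gesture mapping, assuming two tracked hand positions per frame: translation follows the midpoint of the hands, scale follows the change in hand separation, and yaw follows the change in the hand-to-hand direction. This illustrates the idea only and is not the NASA software; the coordinate convention and function name are assumptions.

```python
import numpy as np

def two_hand_transform(prev_left, prev_right, left, right):
    """Map tracked hand positions (3-vectors) to a translate / scale / yaw update."""
    prev_mid = (prev_left + prev_right) / 2.0
    mid = (left + right) / 2.0
    translation = mid - prev_mid                       # object follows the hands' midpoint

    prev_span = np.linalg.norm(prev_right - prev_left)
    span = np.linalg.norm(right - left)
    scale = span / prev_span if prev_span > 0 else 1.0  # spreading hands enlarges the object

    # Yaw (rotation about the vertical y axis) from the horizontal hand-to-hand direction.
    prev_dir = (prev_right - prev_left)[[0, 2]]
    cur_dir = (right - left)[[0, 2]]
    yaw = np.arctan2(cur_dir[1], cur_dir[0]) - np.arctan2(prev_dir[1], prev_dir[0])
    return translation, scale, yaw
```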
"torino 1911" Project: a Contribution of a Slam-Based Survey to Extensive 3d Heritage Modeling
NASA Astrophysics Data System (ADS)
Chiabrando, F.; Della Coletta, C.; Sammartano, G.; Spanò, A.; Spreafico, A.
2018-05-01
In the framework of the digital documentation of complex environments, advanced geomatics research offers integrated solutions and multi-sensor strategies for the accurate 3D reconstruction of stratified structures and articulated volumes in the heritage domain. The use of handheld devices for rapid mapping, both image- and range-based, can help produce easy-to-use and easy-to-navigate 3D models for documentation projects. These kinds of reality-based models, with their tailored geometric and radiometric content, can support valorisation and communication projects including virtual reconstructions, interactive navigation settings, and immersive reality for dissemination purposes and for evoking past places and atmospheres. This research is part of the "Torino 1911" project, led by the University of San Diego (California) in cooperation with PoliTo. The entire project is conceived for multi-scale reconstruction of the real and no longer existing structures in the whole park space of more than 400,000 m2, for a virtual and immersive visualization of the Turin 1911 International "Fabulous Exposition" event, held in the Valentino Park. In the research presented here, a 3D metric documentation workflow is proposed and validated in order to assess the potential of LiDAR mapping with a handheld SLAM-based device, the ZEB REVO Real Time instrument by GeoSLAM (2017 release), instead of consolidated TLS systems. Starting from these kinds of models, the crucial aspects of the trajectory performance in the 3D reconstruction and of the radiometric content from imaging approaches are considered, specifically through the compared use of common DSLR cameras and portable sensors.
EPE analysis of sub-N10 BEoL flow with and without fully self-aligned via using Coventor SEMulator3D
NASA Astrophysics Data System (ADS)
Franke, Joern-Holger; Gallagher, Matt; Murdoch, Gayle; Halder, Sandip; Juncker, Aurelie; Clark, William
2017-03-01
During the last few decades, the semiconductor industry has been able to scale device performance up while driving costs down. What started off as simple geometrical scaling, driven mostly by advances in lithography, has recently been accompanied by advances in processing techniques and in device architectures. The trend to combine efforts in process technology and lithography is expected to intensify, as further scaling becomes ever more difficult. One promising component of future nodes is "scaling boosters", i.e. processing techniques that enable further scaling. An indispensable component in developing these ever more complex processing techniques is semiconductor process modeling software. Visualization of complex 3D structures in SEMulator3D, along with budget analysis of film thicknesses, CD and etch budgets, allows process integrators to compare flows before any physical wafers are run. Hundreds of "virtual" wafers allow comparison of different processing approaches, along with EUV or DUV patterning options for defined layers and different overlay schemes. This "virtual fabrication" technology produces massively parallel process variation studies that would be highly time-consuming or expensive in experiment. Here, we focus on one particular scaling booster, the fully self-aligned via (FSAV). We compare metal-via-metal (me-via-me) chains with self-aligned and fully self-aligned vias using a calibrated model of imec's N7 BEoL flow. To model overall variability, 3D Monte Carlo modeling of as many variability sources as possible is critical. We use Coventor SEMulator3D to extract minimum me-me distances and contact areas and show how fully self-aligned vias allow better me-via distance control and tighter via-me contact area variability compared with the standard self-aligned via (SAV) approach.
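The sketch below illustrates, in a toy fashion, the kind of Monte Carlo variability study described: sampling CD and overlay variations and comparing the via-to-metal gap distribution with and without full self-alignment. All numbers are invented placeholders, and the model is far simpler than a calibrated SEMulator3D flow.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                        # number of virtual sites sampled

# Illustrative nominal values and 1-sigma variations in nm; not imec N7 data.
NOMINAL_GAP = 12.0                 # drawn gap between via edge and neighbouring metal line
VIA_CD_SIGMA = 1.5                 # via CD variation (half of it moves each edge)
LINE_CD_SIGMA = 1.0                # metal line CD variation

def sampled_gap(overlay_sigma):
    """Sample the resulting via-to-metal gap for a given overlay variation."""
    return (NOMINAL_GAP
            - rng.normal(0.0, VIA_CD_SIGMA / 2.0, N)
            - rng.normal(0.0, LINE_CD_SIGMA / 2.0, N)
            - np.abs(rng.normal(0.0, overlay_sigma, N)))

# A fully self-aligned via removes the via-to-metal overlay term (sigma approaches zero).
for label, sigma in [("self-aligned via (SAV)", 2.5), ("fully self-aligned via (FSAV)", 0.0)]:
    gap = sampled_gap(sigma)
    print(f"{label}: mean gap {gap.mean():.2f} nm, fraction below 5 nm {(gap < 5.0).mean():.2%}")
```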
AR Based App for Tourist Attraction in ESKİ ÇARŞI (Safranbolu)
NASA Astrophysics Data System (ADS)
Polat, Merve; Rakıp Karaş, İsmail; Kahraman, İdris; Alizadehashrafi, Behnam
2016-10-01
This research is dealing with 3D modeling of historical and heritage landmarks of Safranbolu that are registered by UNESCO. This is an Augmented Reality (AR) based project in order to trigger virtual three-dimensional (3D) models, cultural music, historical photos, artistic features and animated text information. The aim is to propose a GIS-based approach with these features and add to the system as attribute data in a relational database. The database will be available in an AR-based application to provide information for the tourists.
2015-06-01
[Front-matter residue from the source document: table and figure captions referencing Kodak 9500 Cone Beam 3D System exposure settings, a mounted PVC skull model, and a reconstructed CBCT digital image. The surviving abstract fragment states that a replica was imaged with the Kodak 9500 Cone Beam 3D System and that, to create the digital dental models, fifteen type IV maxillary dental casts were made.]
A virtual simulator designed for collision prevention in proton therapy.
Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho
2015-10-01
In proton therapy, collisions between the patient and the nozzle can potentially occur because of the large nozzle structure and efforts to minimize the air gap. Software was therefore developed to predict such collisions between the nozzle and patient using virtual treatment simulation. Three-dimensional (3D) modeling of the gantry inner floor, nozzle, and robotic couch was performed using SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D scanner was used immediately before CT scanning. Using the acquired images, a 3D image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined in the treatment-room coordinate system, resulting in a virtual simulator. The simulator reproduced the motion of its components, such as rotation and translation of the gantry, nozzle, and couch, at real scale. Collisions, if any, were examined in both static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when computing volume locations. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D shape of the phantom agreed with the CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and the clinical efficiency of proton therapy.
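A minimal sketch of voxel-overlap collision checking of the kind the simulator performs: each component is rasterized onto a shared room-coordinate grid, and a collision is flagged when any voxel is occupied by both. Grid parameters and the point-sampling of component surfaces are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def voxelize(points, origin, voxel_size, grid_shape):
    """Mark the voxels of a regular grid occupied by sampled surface points of a component."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[tuple(idx[inside].T)] = True
    return grid

def check_collision(nozzle_points, patient_points, origin, voxel_size, grid_shape):
    """Return whether any voxel is shared by both components, plus the collision voxel centres."""
    nozzle = voxelize(nozzle_points, origin, voxel_size, grid_shape)
    patient = voxelize(patient_points, origin, voxel_size, grid_shape)
    overlap = nozzle & patient
    collision_points = np.argwhere(overlap) * voxel_size + origin   # room coordinates
    return overlap.any(), collision_points
```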
3d Model Generation from Uav: Historical Mosque (Masjid Lama Nilai)
NASA Astrophysics Data System (ADS)
Nasir, N. H. Mohd; Tahar, K. N.
2017-08-01
Preserving cultural heritage and historic sites is an important issue. These sites are subject to erosion and vandalism and, as long-lived artifacts, they have gone through many phases of construction, damage and repair. It is important to keep an accurate record of these sites as they currently are, using 3-D model-building technology, so that preservationists can track changes, foresee structural problems, and allow a wider audience to "virtually" see and tour these sites. Due to the complexity of these sites, building 3-D models is time consuming and difficult, usually involving much manual effort. This study discusses new methods that can reduce the time needed to build a model by using an Unmanned Aerial Vehicle. The study aims to develop a 3D model of a historical mosque using UAV photogrammetry. To achieve this, the data set of Masjid Lama Nilai, Negeri Sembilan was captured using an Unmanned Aerial Vehicle. In addition, an accuracy assessment between the actual and measured values is made. A comparison between the rendered 3D model and the textured 3D model is also carried out in this study.
ERIC Educational Resources Information Center
deNoyelles, Aimee; Seo, Kay Kyeong-Ju
2012-01-01
A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…
A second life for eHealth: prospects for the use of 3-D virtual worlds in clinical psychology.
Gorini, Alessandra; Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe
2008-08-05
The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed.
The benefits of 3D modelling and animation in medical teaching.
Vernon, Tim; Peckham, Daniel
2002-12-01
Three-dimensional models created using materials such as wax, bronze and ivory, have been used in the teaching of medicine for many centuries. Today, computer technology allows medical illustrators to create virtual three-dimensional medical models. This paper considers the benefits of using still and animated output from computer-generated models in the teaching of medicine, and examines how three-dimensional models are made.
ERIC Educational Resources Information Center
Bouta, Hara; Retalis, Symeon; Paraskeva, Fotini
2012-01-01
This study examines the effect of using an online 3D virtual environment in teaching Mathematics in Primary Education. In particular, it explores the extent to which student engagement--behavioral, affective and cognitive--is fostered by such tools in order to enhance collaborative learning. For the study we used a purpose-created 3D virtual…
Robotics and Virtual Reality for Cultural Heritage Digitization and Fruition
NASA Astrophysics Data System (ADS)
Calisi, D.; Cottefoglie, F.; D'Agostini, L.; Giannone, F.; Nenci, F.; Salonia, P.; Zaratti, M.; Ziparo, V. A.
2017-05-01
In this paper we present our novel approach for acquiring and managing digital models of archaeological sites, and the visualization techniques used to showcase them. In particular, we demonstrate two technologies: our robotic system for the digitization of archaeological sites (DigiRo), the result of over three years of effort by a group of cultural heritage experts, computer scientists and roboticists, and our cloud-based archaeological information system (ARIS). Finally, we describe the viewers we developed to inspect and navigate the 3D models: a viewer for the web (ROVINA Web Viewer) and an immersive viewer for Virtual Reality (ROVINA VR Viewer).
Stereoscopic 3D graphics generation
NASA Astrophysics Data System (ADS)
Li, Zhi; Liu, Jianping; Zan, Y.
1997-05-01
Stereoscopic display technology is one of the key techniques in areas such as simulation, multimedia, entertainment and virtual reality, and stereoscopic 3D graphics generation is an important part of a stereoscopic 3D display system. In this paper, we first describe the principle of stereoscopic display and summarize several methods for generating stereoscopic 3D graphics. Secondly, to overcome the problems arising from user-defined model methods (such as inconvenience and long modification cycles), we put forward a method based on vector graphics file definitions. This allows us to design more directly, modify the model simply and easily, generate graphics more conveniently, and make full use of the graphics accelerator card. Finally, we discuss the problem of how to speed up the generation.
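As a small illustration of stereoscopic pair generation, the sketch below derives left- and right-eye view matrices from a single monoscopic view matrix using a parallel-axis eye offset. The interocular distance and matrix conventions are assumptions, and the paper's vector-graphics-file method is not reproduced here.

```python
import numpy as np

def stereo_view_matrices(view, interocular=0.065):
    """Derive left/right eye 4x4 view matrices from a monoscopic view matrix.

    Parallel-axis (offset) stereo: each eye is displaced by half the interocular
    distance along the camera x axis. A production renderer would typically also
    use asymmetric (off-axis) projection frusta.
    """
    half = interocular / 2.0
    shift_left = np.eye(4)
    shift_left[0, 3] = +half      # shifting the scene right is equivalent to moving the eye left
    shift_right = np.eye(4)
    shift_right[0, 3] = -half
    return shift_left @ view, shift_right @ view
```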
Modeling and visualizing borehole information on virtual globes using KML
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing
2014-01-01
Advances in virtual globes and Keyhole Markup Language (KML) are providing Earth scientists with universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, a level-of-detail-based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, was developed to automatically convert borehole data into KML documents. A case study using Borehole2KML to create borehole models in Shanghai shows that the method is suitable for visualizing, integrating and disseminating borehole information on the Internet. The method we have developed has potential for use in delivering geological information as a public service.
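A minimal sketch of the Borehole2KML idea, assuming a simple list of borehole collars: each borehole becomes a KML point placemark that a virtual globe can load. Strata tube models, contact dots and the LOD-based multi-scale representation are omitted, and the function name and tuple layout are illustrative.

```python
def boreholes_to_kml(boreholes):
    """Write borehole collar locations as KML placemarks.

    'boreholes' is an iterable of (name, longitude, latitude, elevation_m) tuples.
    """
    placemarks = []
    for name, lon, lat, elev in boreholes:
        placemarks.append(
            f"  <Placemark>\n"
            f"    <name>{name}</name>\n"
            f"    <Point><altitudeMode>absolute</altitudeMode>"
            f"<coordinates>{lon},{lat},{elev}</coordinates></Point>\n"
            f"  </Placemark>")
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
            + "\n".join(placemarks) + "\n</Document>\n</kml>")

# Example: a single borehole collar in Shanghai-like coordinates (illustrative values).
print(boreholes_to_kml([("BH-01", 121.47, 31.23, 4.0)]))
```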