Sample records for virtual reality 3d

  1. A standardized set of 3-D objects for virtual reality research and applications.

    PubMed

    Peeters, David

    2018-06-01

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.

  2. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    ERIC Educational Resources Information Center

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  3. Agreement and reliability of pelvic floor measurements during contraction using three-dimensional pelvic floor ultrasound and virtual reality.

    PubMed

    Speksnijder, L; Rousian, M; Steegers, E A P; Van Der Spek, P J; Koning, A H J; Steensma, A B

    2012-07-01

    Virtual reality is a novel method of visualizing ultrasound data with the perception of depth and offers possibilities for measuring non-planar structures. The levator ani hiatus has both convex and concave aspects. The aim of this study was to compare levator ani hiatus volume measurements obtained with conventional three-dimensional (3D) ultrasound and with a virtual reality measurement technique, and to establish their reliability and agreement. One hundred symptomatic patients visiting a tertiary pelvic floor clinic, with a normal intact levator ani muscle diagnosed on translabial ultrasound, were selected. Datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm at the level of minimal hiatal dimensions during contraction. The levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatus volume (in cm³) in conventional 3D ultrasound. Levator ani hiatus volumes were then measured semi-automatically in virtual reality (in cm³) using a segmentation algorithm. An intra- and interobserver analysis of reliability and agreement was performed in 20 randomly chosen patients. The mean difference between levator ani hiatus volume measurements performed using conventional 3D ultrasound and virtual reality was 0.10 (95% CI, -0.15 to 0.35) cm³. The intraclass correlation coefficient (ICC) comparing conventional 3D ultrasound with virtual reality measurements was > 0.96. Intra- and interobserver ICCs were > 0.94 for conventional 3D ultrasound measurements and > 0.97 for virtual reality measurements, indicating good reliability for both. Levator ani hiatus volume measurements performed using virtual reality were reliable, and the results were similar to those obtained with conventional 3D ultrasonography. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
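
    The slice-based volume estimate and the between-method comparison described above can be sketched as follows; the function names and the example values are illustrative, not the study's data.

```python
# Rendered slice thickness at the level of minimal hiatal dimensions (per the study)
SLICE_THICKNESS_CM = 1.5

def hiatal_volume_conventional(levator_area_cm2: float) -> float:
    """Conventional 3D ultrasound: levator area (cm^2) x 1.5 cm slice -> volume (cm^3)."""
    return levator_area_cm2 * SLICE_THICKNESS_CM

def mean_difference(conventional, virtual_reality):
    """Bland-Altman-style mean difference between paired measurements of two methods."""
    diffs = [c - v for c, v in zip(conventional, virtual_reality)]
    return sum(diffs) / len(diffs)

# Illustrative usage with hypothetical measurements
print(hiatal_volume_conventional(10.0))          # a 10 cm^2 area -> 15.0 cm^3
print(mean_difference([15.0, 16.0], [14.8, 15.9]))
```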

  4. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    ERIC Educational Resources Information Center

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  5. 3D Virtual Reality Check: Learner Engagement and Constructivist Theory

    ERIC Educational Resources Information Center

    Bair, Richard A.

    2013-01-01

    The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement afforded by 3D tools and to specify the learning gains that educators, and the institutions funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…

  6. [Application of 3D virtual reality technology with multi-modality fusion in resection of glioma located in central sulcus region].

    PubMed

    Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F

    2018-05-08

    Objective: To explore the clinical and teaching value of virtual reality technology in preoperative planning and intraoperative guidance for glioma located in the central sulcus region. Method: Ten patients with glioma in the central sulcus region were scheduled for surgical treatment. Neuro-imaging data, including CT, CTA, DSA, MRI, and fMRI, were input into the 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained from the virtual reality images. These images were applied to operative approach design, operation process simulation, intraoperative decision support, and the training of specialist physicians. Results: Intraoperative findings in the 10 patients were highly consistent with the preoperative virtual reality simulation. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operation planning and the accuracy of the operation. The technology not only showed advantages for neurological function protection and lesion resection during surgery, but also improved the efficiency and effectiveness of specialist training by turning abstract comprehension into virtual reality. Conclusion: Virtual reality technology based on image fusion and 3D reconstruction is helpful in glioma resection for formulating the operation plan, improving operation safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.

  7. Transduction between worlds: using virtual and mixed reality for earth and planetary science

    NASA Astrophysics Data System (ADS)

    Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.

    2017-12-01

    Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale, users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances to connect data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. These innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect digital spaces of spatial analysis, simulation and geovisualization, with geographic spaces of data collection, fieldwork, interpretation and communication. In a way, we are talking about transduction between real and virtual worlds. Taking a mixed reality approach to this, we can link real and virtual worlds. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.

  8. Magical Stories: Blending Virtual Reality and Artificial Intelligence.

    ERIC Educational Resources Information Center

    McLellan, Hilary

    Artificial intelligence (AI) techniques and virtual reality (VR) make possible powerful interactive stories, and this paper focuses on examples of virtual characters in three dimensional (3-D) worlds. Waldern, a virtual reality game designer, has theorized about and implemented software design of virtual teammates and opponents that incorporate AI…

  9. Virtual Reality: The Future of Animated Virtual Instructor, the Technology and Its Emergence to a Productive E-Learning Environment.

    ERIC Educational Resources Information Center

    Jiman, Juhanita

    This paper discusses the use of Virtual Reality (VR) in e-learning environments where an intelligent three-dimensional (3D) virtual person plays the role of an instructor. With the existence of this virtual instructor, it is hoped that the teaching and learning in the e-environment will be more effective and productive. This virtual 3D animated…

  10. Virtual reality in the treatment of persecutory delusions: randomised controlled experimental study testing how to reduce delusional conviction.

    PubMed

    Freeman, Daniel; Bradley, Jonathan; Antley, Angus; Bourke, Emilie; DeWeever, Natalie; Evans, Nicole; Černis, Emma; Sheaves, Bryony; Waite, Felicity; Dunn, Graham; Slater, Mel; Clark, David M

    2016-07-01

    Persecutory delusions may be unfounded threat beliefs maintained by safety-seeking behaviours that prevent disconfirmatory evidence being successfully processed. Use of virtual reality could facilitate new learning. To test the hypothesis that enabling patients to test the threat predictions of persecutory delusions in virtual reality social environments with the dropping of safety-seeking behaviours (virtual reality cognitive therapy) would lead to greater delusion reduction than exposure alone (virtual reality exposure). Conviction in delusions and distress in a real-world situation were assessed in 30 patients with persecutory delusions. Patients were then randomised to virtual reality cognitive therapy or virtual reality exposure, both with 30 min in graded virtual reality social environments. Delusion conviction and real-world distress were then reassessed. In comparison with exposure, virtual reality cognitive therapy led to large reductions in delusional conviction (reduction 22.0%, P = 0.024, Cohen's d = 1.3) and real-world distress (reduction 19.6%, P = 0.020, Cohen's d = 0.8). Cognitive therapy using virtual reality could prove highly effective in treating delusions. © The Royal College of Psychiatrists 2016.
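
    The effect sizes reported above (Cohen's d = 1.3 and 0.8) are standardized mean differences between the two trial arms. A minimal sketch of the computation, with hypothetical group data rather than the trial's:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Illustrative usage with hypothetical outcome scores per group
print(cohens_d([1, 2, 3], [3, 4, 5]))  # -> -2.0
```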

  11. Virtual reality in the treatment of persecutory delusions: randomised controlled experimental study testing how to reduce delusional conviction

    PubMed Central

    Freeman, Daniel; Bradley, Jonathan; Antley, Angus; Bourke, Emilie; DeWeever, Natalie; Evans, Nicole; Černis, Emma; Sheaves, Bryony; Waite, Felicity; Dunn, Graham; Slater, Mel; Clark, David M.

    2016-01-01

    Background: Persecutory delusions may be unfounded threat beliefs maintained by safety-seeking behaviours that prevent disconfirmatory evidence being successfully processed. Use of virtual reality could facilitate new learning. Aims: To test the hypothesis that enabling patients to test the threat predictions of persecutory delusions in virtual reality social environments with the dropping of safety-seeking behaviours (virtual reality cognitive therapy) would lead to greater delusion reduction than exposure alone (virtual reality exposure). Method: Conviction in delusions and distress in a real-world situation were assessed in 30 patients with persecutory delusions. Patients were then randomised to virtual reality cognitive therapy or virtual reality exposure, both with 30 min in graded virtual reality social environments. Delusion conviction and real-world distress were then reassessed. Results: In comparison with exposure, virtual reality cognitive therapy led to large reductions in delusional conviction (reduction 22.0%, P = 0.024, Cohen's d = 1.3) and real-world distress (reduction 19.6%, P = 0.020, Cohen's d = 0.8). Conclusion: Cognitive therapy using virtual reality could prove highly effective in treating delusions. PMID:27151071

  12. Valorisation of Cultural Heritage Through Virtual Visit and Augmented Reality: the Case of the Abbey of Epau (france)

    NASA Astrophysics Data System (ADS)

    Simonetto, E.; Froment, C.; Labergerie, E.; Ferré, G.; Séchet, B.; Chédorge, H.; Cali, J.; Polidori, L.

    2013-07-01

    Terrestrial laser scanning (TLS), 3-D modeling, and Web visualization are the three key steps needed to store cultural heritage and grant free, wide access to it, as highlighted by many recent examples. The goal of this study is to set up 3-D Web resources for "virtually" visiting the exterior of the Abbaye de l'Epau, an old French abbey with both a rich history and delicate architecture. Virtuality is considered in two ways: fluid navigation around the abbey in a virtual reality environment, and a game activity using augmented reality. First, data acquisition consists of a GPS and tacheometry survey, terrestrial laser scanning, and photograph acquisition. After data pre-processing, the meshed and textured 3-D model is generated using the commercial 3D Reshaper software. The virtual reality visit and augmented reality animation are then created using Unity software. This work shows the value of such tools in bringing out regional cultural heritage and making it attractive to the public.

  13. Full Immersive Virtual Environment Cave[TM] in Chemistry Education

    ERIC Educational Resources Information Center

    Limniou, M.; Roberts, D.; Papadopoulos, N.

    2008-01-01

    By comparing two-dimensional (2D) chemical animations designed for computer's desktop with three-dimensional (3D) chemical animations designed for the full immersive virtual reality environment CAVE[TM] we studied how virtual reality environments could raise student's interest and motivation for learning. By using the 3ds max[TM], we can visualize…

  14. A randomized, double-blind evaluation of D-cycloserine or alprazolam combined with virtual reality exposure therapy for posttraumatic stress disorder in Iraq and Afghanistan War veterans.

    PubMed

    Rothbaum, Barbara Olasov; Price, Matthew; Jovanovic, Tanja; Norrholm, Seth D; Gerardi, Maryrose; Dunlop, Boadie; Davis, Michael; Bradley, Bekh; Duncan, Erica J; Rizzo, Albert; Ressler, Kerry J

    2014-06-01

    The authors examined the effectiveness of virtual reality exposure augmented with D-cycloserine or alprazolam, compared with placebo, in reducing posttraumatic stress disorder (PTSD) due to military trauma. After an introductory session, five sessions of virtual reality exposure were augmented with D-cycloserine (50 mg) or alprazolam (0.25 mg) in a double-blind, placebo-controlled randomized clinical trial for 156 Iraq and Afghanistan war veterans with PTSD. PTSD symptoms significantly improved from pre- to posttreatment across all conditions and were maintained at 3, 6, and 12 months. There were no overall differences in symptoms between D-cycloserine and placebo at any time. Alprazolam and placebo differed significantly on the Clinician-Administered PTSD Scale score at posttreatment and PTSD diagnosis at 3 months posttreatment; the alprazolam group showed a higher rate of PTSD (82.8%) than the placebo group (47.8%). Between-session extinction learning was a treatment-specific enhancer of outcome for the D-cycloserine group only. At posttreatment, the D-cycloserine group had the lowest cortisol reactivity and smallest startle response during virtual reality scenes. A six-session virtual reality treatment was associated with reduction in PTSD diagnoses and symptoms in Iraq and Afghanistan veterans, although there was no control condition for the virtual reality exposure. There was no advantage of D-cycloserine for PTSD symptoms in primary analyses. In secondary analyses, alprazolam impaired recovery and D-cycloserine enhanced virtual reality outcome in patients who demonstrated within-session learning. D-cycloserine augmentation reduced cortisol and startle reactivity more than did alprazolam or placebo, findings that are consistent with those in the animal literature.

  15. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language (VRML) file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft-tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where they exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion make the 3D virtual head an accurate, realistic, and widely applicable tool of great benefit to virtual face modeling.
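
    One common way to quantify surface-fusion error of the kind analyzed above is the nearest-neighbor distance from each point of one surface to the other. A brute-force sketch under that assumption (function names and the 1 mm tolerance are illustrative; a real pipeline would use a KD-tree over full meshes):

```python
import math

def nearest_distances(ct_points, photo_points):
    """For each CT soft-tissue surface point, distance to the closest
    point of the stereophotogrammetric cloud (brute force, O(n*m))."""
    return [min(math.dist(p, q) for q in photo_points) for p in ct_points]

def fraction_within(errors, tol_mm=1.0):
    """Share of surface points whose fusion error is below the tolerance."""
    return sum(e < tol_mm for e in errors) / len(errors)

# Illustrative usage with tiny hypothetical point clouds (coordinates in mm)
errs = nearest_distances([(0, 0, 0)], [(1, 0, 0), (0, 2, 0)])
print(errs)                    # -> [1.0]
print(fraction_within([0.5, 1.5]))  # -> 0.5
```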

  16. Three-Dimensional Sensor Common Operating Picture (3-D Sensor COP)

    DTIC Science & Technology

    2017-01-01

    created. Additionally, a 3-D model of the sensor itself can be created. Using these 3-D models, along with emerging virtual and augmented reality tools…

  17. Use of the stereoscopic virtual reality display system for the detection and characterization of intracranial aneurysms: A comparison with conventional computed tomography workstation and 3D rotational angiography.

    PubMed

    Liu, Xiujuan; Tao, Haiquan; Xiao, Xigang; Guo, Binbin; Xu, Shangcai; Sun, Na; Li, Maotong; Xie, Li; Wu, Changjun

    2018-07-01

    This study aimed to compare the diagnostic performance of the stereoscopic virtual reality display system with the conventional computed tomography (CT) workstation and three-dimensional rotational angiography (3DRA) for intracranial aneurysm detection and characterization, with a focus on small aneurysms and those near the bone. First, 42 patients with suspected intracranial aneurysms underwent both 256-row CT angiography (CTA) and 3DRA. Volume rendering (VR) images were captured using the conventional CT workstation. Next, VR images were transferred to the stereoscopic virtual reality display system. Two radiologists independently assessed the results obtained using the conventional CT workstation and the stereoscopic virtual reality display system. The 3DRA results were considered the reference standard. Based on 3DRA images, 38 aneurysms were confirmed in 42 patients. Two cases were misdiagnosed and 1 was missed when the traditional CT workstation was used. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the conventional CT workstation were 94.7%, 85.7%, 97.3%, 75%, and 99.3%, respectively, on a per-aneurysm basis. The stereoscopic virtual reality display system missed a case. The sensitivity, specificity, PPV, NPV, and accuracy of the stereoscopic virtual reality display system were 100%, 85.7%, 97.4%, 100%, and 97.8%, respectively. No difference was observed in the accuracy of the traditional CT workstation, stereoscopic virtual reality display system, and 3DRA in detecting aneurysms. The stereoscopic virtual reality display system has some advantages in detecting small aneurysms and those near the bone. The stereoscopic vision obtained through the virtual reality system was found to be a useful tool in intracranial aneurysm diagnosis and pre-operative 3D imaging. Copyright © 2018 Elsevier B.V. All rights reserved.
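
    The per-aneurysm figures quoted above all derive from a standard 2×2 confusion matrix. A minimal sketch of the definitions (the counts in the usage example are hypothetical, not the study's):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-performance metrics from confusion-matrix counts:
    true positives, false positives, false negatives, true negatives."""
    return {
        "sensitivity": tp / (tp + fn),          # detected among actual aneurysms
        "specificity": tn / (tn + fp),          # cleared among non-aneurysms
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative usage with hypothetical counts
print(diagnostic_metrics(36, 1, 2, 6))
```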

  18. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  19. Virtual Reality Website of Indonesia National Monument and Its Environment

    NASA Astrophysics Data System (ADS)

    Wardijono, B. A.; Hendajani, F.; Sudiro, S. A.

    2017-02-01

    The National Monument (Monumen Nasional) is located in Jakarta. It is a symbol of Jakarta and a source of pride for the people of Jakarta and of Indonesia, and it also houses a museum on the history of the country. To provide information to the general public, in this research we created and developed 3D graphic models of the National Monument and its surrounding environment. Virtual reality technology was used to display the visualization of the National Monument and the surrounding environment in 3D graphics form. The latest programming technology makes it possible to display 3D objects via an internet browser. This research used Unity3D and WebGL to build virtual reality models that can be implemented and shown on a website. The result of this research is a 3-dimensional website of the National Monument and the objects in its surrounding environment that can be displayed through a web browser. The full set of virtual reality objects was divided into a number of scenes so that it can be displayed with real-time visualization.

  20. Re-Dimensional Thinking in Earth Science: From 3-D Virtual Reality Panoramas to 2-D Contour Maps

    ERIC Educational Resources Information Center

    Park, John; Carter, Glenda; Butler, Susan; Slykhuis, David; Reid-Griffin, Angelia

    2008-01-01

    This study examines the relationship of gender and spatial perception on student interactivity with contour maps and non-immersive virtual reality. Eighteen eighth-grade students elected to participate in a six-week activity-based course called "3-D GeoMapping." The course included nine days of activities related to topographic mapping.…

  1. Using Virtual Reality Computer Models to Support Student Understanding of Astronomical Concepts

    ERIC Educational Resources Information Center

    Barnett, Michael; Yamagata-Lynch, Lisa; Keating, Tom; Barab, Sasha A.; Hay, Kenneth E.

    2005-01-01

    The purpose of this study was to examine how 3-dimensional (3-D) models of the Solar System supported student development of conceptual understandings of various astronomical phenomena that required a change in frame of reference. In the course described in this study, students worked in teams to design and construct 3-D virtual reality computer…

  2. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but prior to this the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind it, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer, giving the user the feeling of operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking is the cursor on a computer screen moving in correspondence with the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly the training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvers in accurate simulations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.
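
    Orientation tracking of the kind described above is commonly done by integrating a rate sensor's angular velocity into an orientation quaternion once per frame. A minimal sketch under that assumption (the function names and the simple constant-rate integrator are illustrative, not NASA's implementation):

```python
import math

def quat_multiply(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(q, omega, dt):
    """Advance orientation q by angular velocity omega (rad/s, body frame)
    held constant over dt seconds."""
    wx, wy, wz = omega
    rate = math.sqrt(wx*wx + wy*wy + wz*wz)
    if rate == 0.0:
        return q
    half = rate * dt / 2.0                      # half rotation angle
    s = math.sin(half) / rate                   # scale for the rotation axis
    dq = (math.cos(half), wx * s, wy * s, wz * s)
    return quat_multiply(q, dq)

# Illustrative usage: rotate at pi rad/s about z for 0.5 s -> 90 degrees
q = integrate_gyro((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, math.pi), 0.5)
print(q)
```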

  3. See-through 3D technology for augmented reality

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Lee, Seungjae; Li, Gang; Jang, Changwon; Hong, Jong-Young

    2017-06-01

    Augmented reality is attracting a lot of attention as one of the most spotlighted next-generation technologies. To move toward the realization of ideal augmented reality, we need to integrate 3D virtual information into the real world. This integration should not be noticed by users, blurring the boundary between the virtual and real worlds. Thus, the ultimate device for augmented reality should reconstruct and superimpose 3D virtual information on the real world so that the two are not distinguishable, which is referred to as see-through 3D technology. Here, we introduce our previous research combining see-through displays and 3D technologies using emerging optical combiners: holographic optical elements and index-matched optical elements. Holographic optical elements are volume gratings with angular and wavelength selectivity. Index-matched optical elements are partially reflective elements that use a compensation element for index matching. Using these optical combiners, we implemented see-through 3D displays based on typical methodologies including integral imaging, digital holographic displays, multi-layer displays, and retinal projection. Some of these methods are expected to be optimized and customized for head-mounted or wearable displays. We conclude with a demonstration and analysis of fundamental research on head-mounted see-through 3D displays.

  4. From Vesalius to Virtual Reality: How Embodied Cognition Facilitates the Visualization of Anatomy

    ERIC Educational Resources Information Center

    Jang, Susan

    2010-01-01

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and…

  5. ICCE/ICCAI 2000 Full & Short Papers (Virtual Reality in Education).

    ERIC Educational Resources Information Center

    2000

    This document contains the full text of the following full and short papers on virtual reality in education from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "A CAL System for Appreciation of 3D Shapes by Surface Development (C3D-SD)" (Stephen C. F. Chan, Andy…

  6. Virtual Reality Calibration for Telerobotic Servicing

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1994-01-01

    A virtual reality calibration technique of matching a virtual environment of simulated graphics models in 3-D geometry and perspective with actual camera views of the remote site task environment has been developed to enable high-fidelity preview/predictive displays with calibrated graphics overlay on live video.

  7. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    PubMed

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group trained with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded while tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes, compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts needed to reach proficiency was significantly lower. The study group showed significantly faster learning in three of the four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted between the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time-efficient and did not negatively affect later video box performance in 2D.

  8. Demonstration of three gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D visualization technology, which includes large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method has three stages: pre-processing, 3D modeling, and integration. First, abundant archaeological information is classified according to its historical and geographical context. Second, a 3D model library is built up with digital image processing and 3D modeling technology. Third, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.

  9. A 3-D Virtual Reality Model of the Sun and the Moon for E-Learning at Elementary Schools

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Lin, Ching-Ling; Wang, Sheng-Min

    2010-01-01

    The relative positions of the sun, moon, and earth, their movements, and their relationships are abstract and difficult to understand astronomical concepts in elementary school science. This study proposes a three-dimensional (3-D) virtual reality (VR) model named the "Sun and Moon System." This e-learning resource was designed by…

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez Anez, Francisco

    This paper presents two development projects (STARMATE and VIRMAN) focused on supporting maintenance training. Both projects aim at specifying, designing, developing, and demonstrating prototypes allowing computer-guided maintenance of complex mechanical elements using Augmented and Virtual Reality techniques. VIRMAN is a Spanish development project. Its objective is to create a computer tool for building maintenance training courses and delivering training based on 3D virtual reality models of complex components. The training delivery includes 3D displays of recorded maintenance procedures with all complementary information needed to understand the intervention. Users are requested to perform the maintenance intervention, trying to follow the procedure, and can be evaluated on the level of knowledge achieved. Instructors can check the evaluation records left during the training sessions. VIRMAN is simple software that runs on a regular computer and can be used over the Internet. STARMATE is a step forward in the area of virtual reality. STARMATE is a European Commission project within the 'Information Societies Technologies' framework. A consortium of five companies and one research institute shares their expertise in this new technology. STARMATE provides two main functionalities: (1) user assistance for performing assembly/disassembly and following maintenance procedures, and (2) workforce training. The project relies on Augmented Reality techniques, a growing area of Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene generated by a computer, augmenting reality with additional information. The user interface consists of see-through goggles, headphones, a microphone and an optical tracking system. All these devices are integrated in a helmet connected to two regular computers.
The user's hands are free for performing the maintenance intervention, and he can navigate in the virtual world thanks to a voice recognition system and a virtual pointing device. The maintenance work is guided with audio instructions, and 2D and 3D information is displayed directly in the user's goggles. A position-tracking system allows 3D virtual models to be displayed in the positions of their real counterparts independently of the user's location. The user can create his own virtual environment, placing the required information wherever he wants. The STARMATE system is applicable to a large variety of real work situations. (author)

  11. Surviving sepsis--a 3D integrative educational simulator.

    PubMed

    Ježek, Filip; Tribula, Martin; Kulhánek, Tomáš; Mateják, Marek; Privitzer, Pavol; Šilar, Jan; Kofránek, Jiří; Lhotská, Lenka

    2015-08-01

    Computer technology offers greater educational possibilities, notably simulation and virtual reality. This paper presents a technology which serves to integrate multiple modalities, namely 3D virtual reality, a node-based simulator, the Physiomodel explorer and explanatory physiological simulators, employing the Modelica language and the Unity3D platform. This emerging tool chain should allow the authors to concentrate on educational content instead of application development. The technology is demonstrated through a Surviving Sepsis educational scenario targeting the Microsoft Windows Store platform.

  12. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position-detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images [1-4]. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is to build a display system in which, when users see the real world through the mobile viewer, the system presents virtual 3D images floating in the air that observers can touch and interact with, for example as if kids were playing with clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest method in that it uses a single camera rather than a stereo camera, and report the results of the viewer system.
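    The abstract's geometric analysis rests on standard single-camera projection. As a minimal illustrative sketch (not the authors' actual derivation; the focal length, principal point, and marker coordinates below are assumptions for illustration), a pinhole model maps a marker's 3D position in camera coordinates to its pixel location:

```python
# Pinhole-camera projection sketch: where an infrared LED marker at a known
# 3D position (camera coordinates, Z > 0) appears in the camera image.
# f (focal length in pixels) and (cx, cy) (principal point) are assumed values.

def project_point(X, Y, Z, f=800.0, cx=320.0, cy=240.0):
    """Project a 3D point to pixel coordinates (u, v) with a pinhole model."""
    u = f * X / Z + cx
    v = f * Y / Z + cy
    return u, v

# A marker 0.1 m to the right of the optical axis, 2 m away:
u, v = project_point(0.1, 0.0, 2.0)
print(u, v)  # 360.0 240.0
```

Inverting this mapping for several markers of known geometry is what lets a single camera (rather than a stereo pair) recover the viewer's pose.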

  13. Agreement and reliability of pelvic floor measurements during rest and on maximum Valsalva maneuver using three-dimensional translabial ultrasound and virtual reality imaging.

    PubMed

    Speksnijder, L; Oom, D M J; Koning, A H J; Biesmeijer, C S; Steegers, E A P; Steensma, A B

    2016-08-01

    Imaging of the levator ani hiatus provides valuable information for the diagnosis and follow-up of patients with pelvic organ prolapse (POP). This study compared measurements of levator ani hiatal volume at rest and on maximum Valsalva maneuver, obtained using conventional three-dimensional (3D) translabial ultrasound and virtual reality imaging. Our objectives were to establish their agreement and reliability, and their relationship with prolapse symptoms and POP quantification (POP-Q) stage. One hundred women with an intact levator ani were selected from our tertiary clinic database. Information on clinical symptoms was obtained using standardized questionnaires. Ultrasound datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm, at the level of minimal hiatal dimensions, at rest and on maximum Valsalva. On conventional 3D ultrasound, the levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatal volume (in cm³). Levator ani hiatal volume (in cm³) was measured semi-automatically by virtual reality imaging using a segmentation algorithm. Twenty patients were chosen randomly to analyze intra- and interobserver agreement. The mean difference between levator hiatal volume measurements on 3D ultrasound and by virtual reality was 1.52 cm³ (95% CI, 1.00-2.04 cm³) at rest and 1.16 cm³ (95% CI, 0.56-1.76 cm³) during maximum Valsalva (P < 0.001). Both intra- and interobserver intraclass correlation coefficients were ≥ 0.96 for conventional 3D ultrasound and > 0.99 for virtual reality. Patients with prolapse symptoms or POP-Q stage ≥ 2 had significantly larger hiatal measurements than those without symptoms or with POP-Q stage < 2. Levator ani hiatal volume at rest and on maximum Valsalva is significantly smaller when measured using virtual reality compared with conventional 3D ultrasound; however, this difference does not seem clinically important. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
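    The conventional-ultrasound volume in this record is a simple product: the levator area measured in a 1.5 cm rendered slab, multiplied by the slab thickness. A worked sketch of that arithmetic (the area value below is an assumed example, not a figure from the study):

```python
# Hiatal volume on conventional 3D ultrasound, per the abstract:
# volume (cm^3) = measured levator area (cm^2) x 1.5 cm slice thickness.

SLICE_THICKNESS_CM = 1.5  # rendered-volume slab thickness used in the study

def hiatal_volume_cm3(levator_area_cm2: float) -> float:
    """Levator ani hiatal volume (cm^3) from the measured area (cm^2)."""
    return levator_area_cm2 * SLICE_THICKNESS_CM

# Example with an assumed area measurement of 14.0 cm^2:
print(hiatal_volume_cm3(14.0))  # 21.0
```

The virtual reality measurement, by contrast, segments the hiatus semi-automatically, which is why the two methods can differ by the roughly 1–1.5 cm³ reported.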

  14. Development of a Virtual Museum Including a 4d Presentation of Building History in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Tschirschwitz, F.; Deggim, S.

    2017-02-01

    In the last two decades the definition of the term "virtual museum" has changed due to rapid technological developments. Using today's available 3D technologies, a virtual museum is no longer just a presentation of collections on the Internet or a virtual tour of an exhibition using panoramic photography. On the one hand, a virtual museum should enhance a museum visitor's experience by providing access to additional materials for review and deepening of knowledge either before or after the real visit. On the other hand, a virtual museum should also be usable as teaching material in the context of museum education. The Laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg has developed a virtual museum (VM) of the museum "Alt-Segeberger Bürgerhaus", a historic town house. The VM offers two options for visitors wishing to explore the museum without travelling to the city of Bad Segeberg, Schleswig-Holstein, Germany: option (a), an interactive computer-based tour for visitors to explore the exhibition and collect information of interest, or option (b), immersion into virtual reality in 3D with the HTC Vive Virtual Reality System.

  15. Virtual Reality Enhanced Instructional Learning

    ERIC Educational Resources Information Center

    Nachimuthu, K.; Vijayakumari, G.

    2009-01-01

    Virtual Reality (VR) is the creation of a virtual 3D world in which one can feel and sense the world as if it were real. It allows engineers to design machines and educationists to design AV [audiovisual] equipment in real time, but in a 3-dimensional hologram, as if the actual material were being made and worked upon. VR allows a least-cost (energy…

  16. Generating Contextual Descriptions of Virtual Reality (VR) Spaces

    NASA Astrophysics Data System (ADS)

    Olson, D. M.; Zaman, C. H.; Sutherland, A.

    2017-12-01

    Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but linking the visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds, based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We discuss the design and execution of a user study in virtual reality that enabled the collection of this dataset, and the development of a hybrid machine learning algorithm linking vision data with environmental affordances expressed in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert-knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.

  17. Augmented reality glass-free three-dimensional display with the stereo camera

    NASA Astrophysics Data System (ADS)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera used to present parallax content from different angles with a lenticular lens array, is proposed. Compared with previous implementations of AR techniques based on two-dimensional (2D) panel displays with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved stereo-camera-based method can realize AR glass-free 3D display, and both the virtual objects and the real scene show realistic and obvious stereo performance.
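    A lenticular lens array over a panel works by routing different pixel columns to different viewing directions, so an N-viewpoint display must interleave N rendered views into one panel image. The sketch below illustrates the simplest possible assignment rule (a vertical lenticular with one subpixel column per view); it is an assumption for illustration, not the paper's actual interleaving scheme, which depends on lens pitch and slant:

```python
# Toy view-interleaving rule for a glasses-free lenticular 3D panel:
# each subpixel column is fed by one of N_VIEWS viewpoint images,
# cycling across the panel width.

N_VIEWS = 32  # number of virtual viewpoints reported in the abstract

def view_index(subpixel_column: int, n_views: int = N_VIEWS) -> int:
    """Which viewpoint image supplies a given subpixel column."""
    return subpixel_column % n_views

print([view_index(c) for c in (0, 1, 31, 32, 33)])  # [0, 1, 31, 0, 1]
```

Real lenticular mappings add a fractional lens pitch and slant angle to the index computation, but the cycling structure is the same.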

  18. Three Primary School Students' Cognition about 3D Rotation in a Virtual Reality Learning Environment

    ERIC Educational Resources Information Center

    Yeh, Andy

    2010-01-01

    This paper reports on three primary school students' explorations of 3D rotation in a virtual reality learning environment (VRLE) named VRMath. When asked to investigate if you would face the same direction when you turn right 45 degrees first then roll up 45 degrees, or when you roll up 45 degrees first then turn right 45 degrees, the students…

  19. Surgical approaches to complex vascular lesions: the use of virtual reality and stereoscopic analysis as a tool for resident and student education.

    PubMed

    Agarwal, Nitin; Schmitt, Paul J; Sukul, Vishad; Prestigiacomo, Charles J

    2012-08-01

    Virtual reality training for complex tasks has been shown to be of benefit in fields involving highly technical and demanding skill sets. The use of a stereoscopic three-dimensional (3D) virtual reality environment to teach a patient-specific analysis of the microsurgical treatment modalities of a complex basilar aneurysm is presented. Three different surgical approaches were evaluated in a virtual environment and then compared to elucidate the best surgical approach. These approaches were assessed with regard to the line-of-sight, skull base anatomy and visualisation of the relevant anatomy at the level of the basilar artery and surrounding structures. Overall, the stereoscopic 3D virtual reality environment with fusion of multimodality imaging affords an excellent teaching tool for residents and medical students to learn surgical approaches to vascular lesions. Future studies will assess the educational benefits of this modality and develop a series of metrics for student assessments.

  20. Virtual reality, augmented reality, and robotics applied to digestive operative procedures: from in vivo animal preclinical studies to clinical use

    NASA Astrophysics Data System (ADS)

    Soler, Luc; Marescaux, Jacques

    2006-04-01

    Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics are among the most revolutionary. Our work aims at setting up new techniques for detection, 3D delineation and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems making tumor resection or treatment easier through the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time long-distance connection between practitioners so they can share the same 3D reconstructed patient and interact on the same patient, virtually before the intervention and for real during the surgical procedure thanks to a telesurgical robot. In preclinical studies, our first results obtained with a micro-CT scanner show that these technologies provide efficient and precise 3D modeling of anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility to improve the therapeutic choice thanks to better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality, which provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and check on the virtual patient clone an optimal procedure without errors, which will be replayed on the real patient by the robot under surgeon control. This medical dream is today about to become reality.

  1. Virtual reality 3D echocardiography in the assessment of tricuspid valve function after surgical closure of ventricular septal defect.

    PubMed

    Bol Raap, Goris; Koning, Anton H J; Scohy, Thierry V; ten Harkel, A Derk-Jan; Meijboom, Folkert J; Kappetein, A Pieter; van der Spek, Peter J; Bogers, Ad J J C

    2007-02-16

    This study was done to investigate the potential additional role of virtual reality, using three-dimensional (3D) echocardiographic holograms, in the postoperative assessment of tricuspid valve function after surgical closure of ventricular septal defect (VSD). Twelve datasets from intraoperative epicardial echocardiographic studies in 5 operations (patient age at operation 3 weeks to 4 years; body weight at operation 3.8 to 17.2 kg) after surgical closure of VSD were included in the study. The datasets were analysed as two-dimensional (2D) images on the screen of the ultrasound system as well as holograms in an I-Space virtual reality (VR) system. The 2D images were assessed for tricuspid valve function. In the I-Space, a 6-degrees-of-freedom controller was used to create the necessary projections and cutting planes in the hologram. The holograms were used for additional assessment of tricuspid valve leaflet mobility. All datasets could be used for 2D as well as holographic analysis, and in all datasets the area of interest could be identified. The 2D analysis showed no tricuspid valve stenosis or regurgitation, and leaflet mobility was considered normal. In the virtual reality of the I-Space, all datasets allowed assessment of the tricuspid leaflet level in a single holographic representation. In 3 holograms the septal leaflet showed restricted mobility that was not appreciated in the 2D echocardiogram. In 4 datasets the posterior leaflet and the tricuspid papillary apparatus were not completely included. This report shows that dynamic holographic imaging of intraoperative postoperative echocardiographic data regarding tricuspid valve function after VSD closure is feasible. Holographic analysis allows for additional analysis of tricuspid valve leaflet mobility. The large size of the probe relative to the small size of the patient may preclude a complete dataset.
At the moment, the requirement of an I-Space VR system limits the applicability of virtual reality 3D echocardiography in clinical practice.

  2. [The virtual reality simulation research of China Mechanical Virtual Human based on the Creator/Vega].

    PubMed

    Wei, Gaofeng; Tang, Gang; Fu, Zengliang; Sun, Qiuming; Tian, Feng

    2010-10-01

    The China Mechanical Virtual Human (CMVH) is a human musculoskeletal biomechanical simulation platform based on the China Visible Human slice images; it has great practical significance. This paper introduces the construction method for the CMVH 3D models. A simulation system solution based on Creator/Vega is then put forward to cope with the complexity and huge volume of the 3D model data. Finally, combined with MFC technology, the CMVH simulation system is developed and a running simulation scene is presented. This paper provides a new way to apply CMVH in virtual reality.

  3. Surgical planning for microsurgical excision of cerebral arterio-venous malformations using virtual reality technology.

    PubMed

    Ng, Ivan; Hwang, Peter Y K; Kumar, Dinesh; Lee, Cheng Kiang; Kockro, Ralf A; Sitoh, Y Y

    2009-05-01

    To evaluate the feasibility of surgical planning using a virtual reality platform workstation in the treatment of cerebral arterio-venous malformations (AVMs), patient-specific data of multiple imaging modalities were co-registered, fused and displayed as a 3D stereoscopic object on the Dextroscope, a virtual reality surgical planning platform. This system allows manipulation of 3D data, letting the user evaluate and appreciate the angio-architecture of the nidus with regard to the position and spatial relationships of critical feeders and draining veins. We evaluated the ability of the Dextroscope to influence surgical planning by providing a better understanding of the angio-architecture, as well as its impact on the surgeon's pre- and intra-operative confidence and ability to tackle these lesions. Twenty-four patients were studied. The mean age was 29.65 years. Following pre-surgical planning on the Dextroscope, 23 patients underwent microsurgical resection, and all had documented complete resection of the AVM. Planning on the virtual reality platform allowed identification of critical feeders and draining vessels in all patients. The appreciation of the complex patient-specific angio-architecture in establishing a surgical plan was found to be invaluable in the conduct of the procedure and significantly enhanced the surgeon's confidence. Surgical planning of AVM resection with a virtual reality system allowed detailed and comprehensive analysis of 3D multi-modality imaging data and, in our experience, proved very helpful in establishing a good surgical strategy, enhancing intra-operative spatial orientation and increasing the surgeon's confidence.

  4. From stereoscopic recording to virtual reality headsets: Designing a new way to learn surgery.

    PubMed

    Ros, M; Trives, J-V; Lonjon, N

    2017-03-01

    To improve surgical practice, there are several different approaches to simulation. Thanks to wearable technologies, recording 3D movies is now easy, and the development of virtual reality headsets suggests a different way of watching these videos: using dedicated software to increase interactivity in a 3D immersive experience. The objective was to record 3D movies from the main surgeon's perspective, to watch the files using virtual reality headsets, and to validate their pedagogic interest. Surgical procedures were recorded using a system combining two side-by-side cameras placed on a helmet. We added two LEDs just below the cameras to enhance luminosity. Two files were obtained in mp4 format and edited using dedicated software to create 3D movies. The files obtained were then played using a virtual reality headset. Surgeons who tried the immersive experience completed a questionnaire to evaluate the interest of this procedure for surgical learning. Twenty surgical procedures were recorded. The movies capture a scene extending 180° horizontally and 90° vertically. The immersive experience created by the device conveys a genuine feeling of being in the operating room and seeing the procedure first-hand through the eyes of the main surgeon. All surgeons indicated that they believe in the pedagogical value of this method. We succeeded in recording the main surgeon's point of view in 3D and watching it on a virtual reality headset. This new approach enhances the understanding of surgery, and most of the surgeons appreciated its pedagogic value. This method could be an effective learning tool in the future. Copyright © 2016. Published by Elsevier Masson SAS.

  5. Building intuitive 3D interfaces for virtual reality systems

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. The work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created applying various understood principles of interface design. A user study was then performed in which it was compared with an earlier interface on a series of medical visualization tasks.

  6. Innovation Education Enabled through a Collaborative Virtual Reality Learning Environment

    ERIC Educational Resources Information Center

    Thorsteinsson, Gisli; Page, Tom; Lehtonen, Miika; Ha, Joong Gyu

    2006-01-01

    This article provides a descriptive account of the development of an approach to the support of design and technology education with 3D Virtual Reality (VR) technologies on an open and distance learning basis. This work promotes an understanding of the implications and possibilities of advanced virtual learning technologies in education for…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markidis, S.; Rizwan, U.

    The use of a virtual nuclear control room can be an effective and powerful tool for training personnel working in nuclear power plants. Operators can experience and simulate the functioning of the plant, even in critical situations, without being in a real power plant or running any risk. 3D models can be exported to virtual reality formats and then displayed in the virtual reality environment, providing an immersive 3D experience. However, two major limitations of this approach are that the 3D models exhibit static textures and are not fully interactive, and therefore cannot be used effectively in training personnel. In this paper we first describe a possible solution for embedding the output of a computer application in a 3D virtual scene, coupling real-world applications and VR systems. The VR system reported here grabs the output of an application running on an X server, creates a texture with the output, and then displays it on a screen or a wall in the virtual reality environment. We then propose a simple model for providing interaction between the user in the VR system and the running simulator. This approach is based on the use of an internet-based application that can be commanded from a laptop or tablet PC added to the virtual environment. (authors)

  8. Minimally invasive superficial temporal artery to middle cerebral artery bypass through a minicraniotomy: benefit of three-dimensional virtual reality planning using magnetic resonance angiography.

    PubMed

    Fischer, Gerrit; Stadie, Axel; Schwandt, Eike; Gawehn, Joachim; Boor, Stephan; Marx, Juergen; Oertel, Joachim

    2009-05-01

    The aim of the authors in this study was to introduce a minimally invasive superficial temporal artery to middle cerebral artery (STA-MCA) bypass surgery by the preselection of appropriate donor and recipient branches in a 3D virtual reality setting based on 3-T MR angiography data. An STA-MCA anastomosis was performed in each of 5 patients. Before surgery, 3-T MR imaging was performed with 3D magnetization-prepared rapid acquisition gradient echo sequences, and a high-resolution CT 3D dataset was obtained. Image fusion and the construction of a 3D virtual reality model of each patient were completed. In the 3D virtual reality setting, the skin surface, skull surface, and extra- and intracranial arteries as well as the cortical brain surface could be displayed in detail. The surgical approach was successfully visualized in virtual reality. The anatomical relationship of structures of interest could be evaluated based on different values of translucency in all cases. The closest point of the appropriate donor branch of the STA and the most suitable recipient M(3) or M(4) segment could be calculated with high accuracy preoperatively and determined as the center point of the following minicraniotomy. Localization of the craniotomy and the skin incision on top of the STA branch was calculated with the system, and these data were transferred onto the patient's skin before surgery. In all cases the preselected arteries could be found intraoperatively in exact agreement with the preoperative planning data. Successful extracranial-intracranial bypass surgery was achieved without stereotactic neuronavigation via a preselected minimally invasive approach in all cases. Subsequent enlargement of the craniotomy was not necessary. Perioperative complications were not observed. All bypasses remained patent on follow-up. With the application of a 3D virtual reality planning system, the extent of skin incision and tissue trauma as well as the size of the bone flap was minimal. 
The closest point of the appropriate donor branch of the STA and the most suitable recipient M(3) or M(4) segment could be preoperatively determined with high accuracy so that the STA-MCA bypass could be safely and effectively performed through an optimally located minicraniotomy with a mean diameter of 22 mm without the need for stereotactic guidance.

  9. A Learner-Centered Approach for Training Science Teachers through Virtual Reality and 3D Visualization Technologies: Practical Experience for Sharing

    ERIC Educational Resources Information Center

    Yeung, Yau-Yuen

    2004-01-01

    This paper presentation will report on how some science educators at the Science Department of The Hong Kong Institute of Education have successfully employed an array of innovative learning media such as three-dimensional (3D) and virtual reality (VR) technologies to create seven sets of resource kits, most of which are being placed on the…

  10. Virtual reality system for planning minimally invasive neurosurgery. Technical note.

    PubMed

    Stadie, Axel Thomas; Kockro, Ralf Alfons; Reisch, Robert; Tropine, Andrei; Boor, Stephan; Stoeter, Peter; Perneczky, Axel

    2008-02-01

    The authors report on their experience with a 3D virtual reality system for planning minimally invasive neurosurgical procedures. Between October 2002 and April 2006, the authors used the Dextroscope (Volume Interactions, Ltd.) to plan neurosurgical procedures in 106 patients, including 100 with intracranial and 6 with spinal lesions. The planning was performed 1 to 3 days preoperatively, and in 12 cases, 3D prints of the planning procedure were taken into the operating room. A questionnaire was completed by the neurosurgeon after the planning procedure. After a short period of acclimatization, the system proved easy to operate and is currently used routinely for preoperative planning of difficult cases at the authors' institution. It was felt that working with a virtual reality multimodal model of the patient significantly improved surgical planning. The pathoanatomy in individual patients could easily be understood in great detail, enabling the authors to determine the surgical trajectory precisely and in the most minimally invasive way. The authors found the preoperative 3D model to be in high concordance with intraoperative conditions; the resulting intraoperative "déjà-vu" feeling enhanced surgical confidence. In all procedures planned with the Dextroscope, the chosen surgical strategy proved to be the correct choice. Three-dimensional virtual reality models of a patient allow quick and easy understanding of complex intracranial lesions.

  11. A New Approach to Improve Cognition, Muscle Strength, and Postural Balance in Community-Dwelling Elderly with a 3-D Virtual Reality Kayak Program.

    PubMed

    Park, Junhyuck; Yim, JongEun

    2016-01-01

    Aging is usually accompanied by deterioration of physical abilities, such as muscular strength, sensory sensitivity, and functional capacity. Recently, intervention methods using virtual reality have been introduced, providing an enjoyable therapy for the elderly. The aim of this study was to investigate whether a 3-D virtual reality kayak program could improve the cognitive function, muscle strength, and balance of community-dwelling elderly. Importantly, kayaking involves most of the upper-body musculature and requires balance control. Seventy-two participants were randomly allocated into the kayak program group (n = 36) and the control group (n = 36). The two groups were well matched with respect to general characteristics at baseline. The participants in both groups performed a conventional exercise program for 30 min, after which the kayak program group performed the 3-D virtual reality kayak program for 20 min, twice a week for 6 weeks. Cognitive function was measured using the Montreal Cognitive Assessment. Muscle strength was measured using the arm curl and handgrip strength tests. Standing and sitting balance was measured using the Good Balance system. The post-test was performed in the same manner as the pre-test; the overall outcomes, such as cognitive function (p < 0.05), muscle strength (p < 0.05), and balance (standing and sitting, p < 0.05), were significantly improved in the kayak program group compared to the control group. We propose that the 3-D virtual reality kayak program is a promising intervention method for improving the cognitive function, muscle strength, and balance of the elderly.

  12. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a highly accurate virtual reality map of the Nation. 3D maps have many uses, with new uses being discovered all the time.

  13. Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts

    NASA Astrophysics Data System (ADS)

    Hong, Zhou; Wenhua, Lu

    2017-01-01

    Augmented reality technology is introduced into the maintenance field to enrich real-world scenarios with information, by integrating virtual maintenance-assistance information with the real-world scene. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. An architecture for an augmented reality virtual maintenance guiding system is proposed, based on the definition of augmented reality and an analysis of the characteristics of augmented reality virtual maintenance. The key techniques involved, such as standardization and organization of maintenance data, 3D registration, modeling of maintenance guidance information, and virtual maintenance man-machine interaction, are elaborated, and solutions are given.

  14. Web3D Technologies in Learning, Education and Training: Motivations, Issues, Opportunities

    ERIC Educational Resources Information Center

    Chittaro, Luca; Ranon, Roberto

    2007-01-01

    Web3D open standards allow the delivery of interactive 3D virtual learning environments through the Internet, reaching potentially large numbers of learners worldwide, at any time. This paper introduces the educational use of virtual reality based on Web3D technologies. After briefly presenting the main Web3D technologies, we summarize the…

  15. Virtual reality and 3D visualizations in heart surgery education.

    PubMed

    Friedl, Reinhard; Preisack, Melitta B; Klas, Wolfgang; Rose, Thomas; Stracke, Sylvia; Quast, Klaus J; Hannekum, Andreas; Gödje, Oliver

    2002-01-01

    Computer assisted teaching plays an increasing role in surgical education. The presented paper describes the development of virtual reality (VR) and 3D visualizations for educational purposes concerning aortocoronary bypass grafting, and their prototypical implementation into a database-driven and internet-based educational system in heart surgery. A multimedia storyboard was written and digital video was encoded. Understanding of these videos was not always satisfactory; therefore, additional 3D and VR visualizations were modelled as VRML, QuickTime, QuickTime Virtual Reality and MPEG-1 applications. An authoring process in terms of integration and orchestration of different multimedia components into educational units has been started. A virtual model of the heart has been designed. It is highly interactive, and the user is able to rotate it, move it, zoom in for details or even fly through it. It can be explored during the cardiac cycle, and a transparency mode demonstrates the coronary arteries, movement of the heart valves, and simultaneous blood flow. Myocardial ischemia and the effect of an IMA graft on myocardial perfusion are simulated. Coronary artery stenoses and bypass grafts can be interactively added. 3D models of anastomotic techniques and closed thrombendarterectomy have been developed. Different visualizations have been prototypically implemented into a teaching application about operative techniques. Interactive virtual reality and 3D teaching applications can be used and distributed via the World Wide Web and have the power to describe surgical anatomy and the principles of surgical techniques, where temporal and spatial events play an important role, in a way superior to traditional teaching methods.

  16. A Virtual Campus Based on Human Factor Engineering

    ERIC Educational Resources Information Center

    Yang, Yuting; Kang, Houliang

    2014-01-01

    Three Dimensional or 3D virtual reality has become increasingly popular in many areas, especially in building a digital campus. This paper introduces a virtual campus, which is based on a 3D model of The Tourism and Culture College of Yunnan University (TCYU). Production of the virtual campus was aided by Human Factor and Ergonomics (HF&E), an…

  17. Mobile Virtual Reality : A Solution for Big Data Visualization

    NASA Astrophysics Data System (ADS)

    Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.

    2015-12-01

    Pursuits in the geological sciences and other branches of quantitative science often require data visualization frameworks that are in continual need of improvement and new ideas. Virtual reality is a visualization medium with a large audience, originally designed for gaming purposes. Virtual reality can also be delivered in CAVE-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused on the larger consumer market; the Oculus Rift is the first of this kind of mobile device. The Unity engine makes it possible to convert data files into a mesh of isosurfaces rendered in 3D. A user is immersed inside the virtual reality and is able to move within and around the data using arrow keys and other steering devices, similar to those employed with an Xbox. With the introduction of products like the Oculus Rift and HoloLens, combined with ever-increasing mobile computing strength, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As more new products like the Surface Pro 4 and other high-power yet very mobile computers are introduced to the market, the RAM and graphics-card capacity necessary to run these models is more widely available, opening doors to this new reality. The computing requirements needed to run these models are a mere 8 GB of RAM and 2 GHz of CPU speed, which many mobile computers now exceed. Using Unity 3D software to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed by wearing the Oculus Rift device. This new method for analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones and jewelry. Using hologram technology to capture in high resolution the 3D shape, color, and imperfections of minerals and stones, detailed review and analysis of a stone can be done remotely, without ever seeing the real thing. This strategy can be a game-changer for shoppers, who no longer need to go to the store.
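
    The data-to-mesh step described above (convert a data set to OBJ or FBX, then traverse it in Unity) can be sketched in miniature. The following is a hypothetical, minimal height-field exporter, not the authors' actual pipeline; `grid_to_obj` and its parameters are illustrative.

```python
def grid_to_obj(grid, scale=1.0):
    """Turn a 2D list of scalar values into a height-field mesh in
    Wavefront OBJ text, a format Unity can import directly."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        for c in range(cols):
            # One vertex per grid sample; the value becomes the height (y).
            out.append(f"v {c * scale} {grid[r][c]} {r * scale}")
    for r in range(rows - 1):
        for c in range(cols - 1):
            a = r * cols + c + 1  # OBJ vertex indices are 1-based
            out.append(f"f {a} {a + 1} {a + cols + 1} {a + cols}")
    return "\n".join(out) + "\n"

# A 2x2 scalar grid yields 4 vertices and a single quad face.
print(grid_to_obj([[0, 1], [1, 2]]))
```

    The same idea scales up to the isosurface meshes mentioned in the record; only the vertex and face generation differ.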

  18. Augmented Reality versus Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu

    2018-02-01

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured as task completion time, on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion times in AR than in VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times in each condition (AR or VR). We further found no differences in reported comfort.
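
    The reported slowdowns are simple percent increases in mean completion time. A small sketch of the arithmetic, with hypothetical mean times (the record gives only the percentages):

```python
def percent_slowdown(t_vr: float, t_ar: float) -> float:
    """Percent increase in task completion time in VR relative to AR."""
    return (t_vr - t_ar) / t_ar * 100.0

# Hypothetical mean times in seconds, chosen to reproduce the reported figures.
print(round(percent_slowdown(24.5, 20.0), 1))   # -> 22.5 (3D input device)
print(round(percent_slowdown(23.46, 20.0), 1))  # -> 17.3 (mouse)
```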

  19. Photographer: Digital Telepresence: Dr. Muriel Ross's Virtual Reality Application for Neuroscience

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Photographer: Digital Telepresence: Dr. Muriel Ross's Virtual Reality Application for Neuroscience Research, Biocomputation. To study human disorders of balance and space motion sickness. Shown here is a 3D reconstruction of a nerve ending in the inner ear, nature's wiring of the balance organs.

  20. Optoelectronics technologies for Virtual Reality systems

    NASA Astrophysics Data System (ADS)

    Piszczek, Marek; Maciejewski, Marcin; Pomianek, Mateusz; Szustakowski, Mieczysław

    2017-08-01

    Solutions in the field of virtual reality are very strongly associated with optoelectronic technologies. This applies both to the design process and to the operation of VR applications. Technologies such as 360° cameras and 3D scanners significantly improve the design work. Moreover, HMD displays with a high field of view, optoelectronic motion capture systems and 3D cameras guarantee an extraordinary experience in immersive VR applications. This article reviews selected technologies from the perspective of their use in the broadly defined process of creating and implementing solutions for virtual reality. It also shows how new approaches can be created, modified and adapted, illustrated by the team's own work (a SteamVR tracker). Most of the presented examples are effectively used by the authors to create different VR applications. The use of optoelectronic technology in virtual reality is presented in terms of the design and operation of the system as well as with reference to specific applications. Designers and users of VR systems should take a close look at new optoelectronic solutions, as they can significantly contribute to increased work efficiency and offer completely new opportunities for virtual world reception.

  1. Transforming Clinical Imaging Data for Virtual Reality Learning Objects

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Rosset, Antoine

    2008-01-01

    Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…

  2. PC-Based Virtual Reality for CAD Model Viewing

    ERIC Educational Resources Information Center

    Seth, Abhishek; Smith, Shana S.-F.

    2004-01-01

    Virtual reality (VR), as an emerging visualization technology, has introduced an unprecedented communication method for collaborative design. VR refers to an immersive, interactive, multisensory, viewer-centered, 3D computer-generated environment and the combination of technologies required to build such an environment. This article introduces the…

  3. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  4. A Collaborative Virtual Environment for Situated Language Learning Using VEC3D

    ERIC Educational Resources Information Center

    Shih, Ya-Chun; Yang, Mau-Tsuen

    2008-01-01

    A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…

  5. EXPLORING ENVIRONMENTAL DATA IN A HIGHLY IMMERSIVE VIRTUAL REALITY ENVIRONMENT

    EPA Science Inventory

    Geography inherently fills a 3D space and yet we struggle with displaying geography using, primarily, 2D display devices. Virtual environments offer a more realistically-dimensioned display space and this is being realized in the expanding area of research on 3D Geographic Infor...

  6. Pursuit of X-ray Vision for Augmented Reality

    DTIC Science & Technology

    2012-01-01

    applications. Virtual Reality 15(2–3), 175–184 (2011) 29. Livingston, M.A., Swan II, J.E., Gabbard , J.L., Höllerer, T.H., Hix, D., Julier, S.J., Baillot, Y...Brown, D., Baillot, Y., Gabbard , J.L., Hix, D.: A perceptual matching technique for depth judgments in optical, see-through augmented reality. In: IEEE

  7. Anatomical education and surgical simulation based on the Chinese Visible Human: a three-dimensional virtual model of the larynx region.

    PubMed

    Liu, Kaijun; Fang, Binji; Wu, Yi; Li, Ying; Jin, Jun; Tan, Liwen; Zhang, Shaoxiang

    2013-09-01

    Anatomical knowledge of the larynx region is critical for understanding laryngeal disease and performing required interventions. Virtual reality is a useful method for surgical education and simulation. Here, we assembled segmented cross-section slices of the larynx region from the Chinese Visible Human dataset. The laryngeal structures were precisely segmented manually as 2D images, then reconstructed and displayed as 3D images in the virtual reality Dextrobeam system. Using visualization and interaction with the virtual reality modeling language model, a digital laryngeal anatomy instruction was constructed using HTML and JavaScript languages. The volume larynx models can thus display an arbitrary section of the model and provide a virtual dissection function. This networked teaching system of the digital laryngeal anatomy can be read remotely, displayed locally, and manipulated interactively.

  8. Serious games for screening pre-dementia conditions: from virtuality to reality? A pilot project.

    PubMed

    Zucchella, Chiara; Sinforiani, Elena; Tassorelli, Cristina; Cavallini, Elena; Tost-Pardell, Daniela; Grau, Sergi; Pazzi, Stefania; Puricelli, Stefano; Bernini, Sara; Bottiroli, Sara; Vecchi, Tomaso; Sandrini, Giorgio; Nappi, Giuseppe

    2014-01-01

    Conventional cognitive assessment is based on a pencil-and-paper neuropsychological evaluation, which is time consuming, expensive and requires the involvement of several professionals. Information and communication technology could be exploited to allow the development of tools that are easy to use, reduce the amount of data processing, and provide controllable test conditions. Serious games (SGs) have the potential to be new and effective tools in the management and treatment of cognitive impairments in the elderly. Moreover, by adopting SGs in 3D virtual reality settings, cognitive functions might be evaluated using tasks that simulate daily activities, increasing the "ecological validity" of the assessment. In this commentary we report our experience in the creation of the Smart Aging platform, a 3D SG- and virtual-environment-based platform for the early identification and characterization of mild cognitive impairment.

  9. New Desktop Virtual Reality Technology in Technical Education

    ERIC Educational Resources Information Center

    Ausburn, Lynna J.; Ausburn, Floyd B.

    2008-01-01

    Virtual reality (VR) that immerses users in a 3D environment through use of headwear, body suits, and data gloves has demonstrated effectiveness in technical and professional education. Immersive VR is highly engaging and appealing to technically skilled young Net Generation learners. However, technical difficulty and very high costs have kept…

  10. Applied virtual reality at the Research Triangle Institute

    NASA Technical Reports Server (NTRS)

    Montoya, R. Jorge

    1994-01-01

    Virtual Reality (VR) is a way for humans to use computers in visualizing, manipulating and interacting with large geometric databases. This paper describes a VR infrastructure and its application to marketing, modeling, architectural walk-through, and training problems. The VR integration techniques used in these applications are based on a uniform approach which promotes portability and reusability of developed modules. For each problem, a 3D object database is created using data captured by hand or electronically. The objects' realism is enhanced through either procedural or photo textures. The virtual environment is created and populated with the database using software tools which also support interaction with and immersion in the environment. These capabilities are augmented by other sensory channels such as voice recognition, 3D sound, and tracking. Four applications are presented: a virtual furniture showroom, virtual reality models of the North Carolina Global TransPark, a walk through the Dresden Frauenkirche, and a maintenance training simulator for the National Guard.

  11. Brave New (Interactive) Worlds: A Review of the Design Affordances and Constraints of Two 3D Virtual Worlds as Interactive Learning Environments

    ERIC Educational Resources Information Center

    Dickey, Michele D.

    2005-01-01

    Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desk-top interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe…

  12. 3D virtual environment of Taman Mini Indonesia Indah in a web

    NASA Astrophysics Data System (ADS)

    Wardijono, B. A.; Wardhani, I. P.; Chandra, Y. I.; Pamungkas, B. U. G.

    2018-05-01

    Taman Mini Indonesia Indah (TMII) is the largest culture-based recreational park in Indonesia. The park covers 250 acres and contains traditional houses from the provinces of Indonesia. The official TMII website describes the traditional houses, but the information available to the public is limited. To provide the public with more detailed information about TMII, this research aims to create and develop virtual traditional houses as 3D graphics models and show them via a website. Virtual Reality (VR) technology was used to display the visualization of TMII and the surrounding environment. This research used Blender software to create the 3D models and Unity3D software to make virtual reality models that can be shown on the web. This research successfully created 33 virtual traditional houses of the provinces of Indonesia. The textures of the traditional houses were taken from the originals to make the culture houses realistic. The result of this research is the TMII website, including virtual culture houses that can be displayed through a web browser. The website consists of virtual environment scenes that internet users can walk through and navigate.

  13. RealityConvert: a tool for preparing 3D models of biochemical structures for augmented and virtual reality.

    PubMed

    Borrel, Alexandre; Fourches, Denis

    2017-12-01

    There is a growing interest for the broad use of Augmented Reality (AR) and Virtual Reality (VR) in the fields of bioinformatics and cheminformatics to visualize complex biological and chemical structures. AR and VR technologies allow for stunning and immersive experiences, offering untapped opportunities for both research and education purposes. However, preparing 3D models ready to use for AR and VR is time-consuming and requires a technical expertise that severely limits the development of new contents of potential interest for structural biologists, medicinal chemists, molecular modellers and teachers. Herein we present the RealityConvert software tool and associated website, which allow users to easily convert molecular objects to high quality 3D models directly compatible for AR and VR applications. For chemical structures, in addition to the 3D model generation, RealityConvert also generates image trackers, useful to universally call and anchor that particular 3D model when used in AR applications. The ultimate goal of RealityConvert is to facilitate and boost the development and accessibility of AR and VR contents for bioinformatics and cheminformatics applications. http://www.realityconvert.com. dfourch@ncsu.edu. Supplementary data are available at Bioinformatics online.
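
    As a rough sketch of the kind of conversion RealityConvert automates (this is not the tool's actual API; `to_obj` and the water geometry are illustrative), a chemical structure can be flattened into a Wavefront OBJ model that AR/VR engines can import:

```python
def to_obj(atoms):
    """atoms: list of (element, x, y, z) tuples. Returns OBJ text with one
    vertex per atom; a real converter would instance a sphere mesh per atom
    and add bonds, materials, and textures."""
    lines = ["# generated from atom list"]
    for element, x, y, z in atoms:
        lines.append(f"# atom {element}")
        lines.append(f"v {x:.3f} {y:.3f} {z:.3f}")
    return "\n".join(lines) + "\n"

# Hypothetical water-molecule coordinates in angstroms.
water = [("O", 0.0, 0.0, 0.0), ("H", 0.96, 0.0, 0.0), ("H", -0.24, 0.93, 0.0)]
print(to_obj(water))
```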

  14. Determining sensitivity/specificity of virtual reality-based neuropsychological tool for detecting residual abnormalities following sport-related concussion.

    PubMed

    Teel, Elizabeth; Gay, Michael; Johnson, Brian; Slobounov, Semyon

    2016-05-01

    Computer-based neuropsychological (NP) evaluation is an effective clinical tool used to assess cognitive function and complements the clinical diagnosis of a concussion. However, some researchers and clinicians argue that its lack of ecological validity places limitations on externalizing results to a sensory-rich athletic environment. Virtual reality-based NP assessment offers clinical advantages, using an immersive environment and evaluating domains not typically assessed by traditional NP assessments. The sensitivity and specificity with which components of a virtual reality-based NP assessment battery detect lingering cognitive abnormalities were examined by cohort affiliation (concussed vs. controls). Data were retrospectively gathered on 128 controls (no concussion) and 24 concussed college-age athletes on measures of spatial navigation, whole-body reaction, attention, and balance in a virtual environment. Concussed athletes were tested within 10 days (M = 8.33, SD = 1.06) of concussion and were clinically asymptomatic at the time of testing. The a priori alpha level was set at 0.05 for all tests. Spatial navigation (sensitivity 95.8%/specificity 91.4%, d = 1.89), whole-body reaction time (sensitivity 95.2%/specificity 89.1%, d = 1.50) and the combined virtual reality modules (sensitivity 95.8%/specificity 96.1%, d = 3.59) produced high sensitivity/specificity values when determining performance-based variability between groups. Use of a virtual reality-based NP platform can detect lingering cognitive abnormalities resulting from concussion in clinically asymptomatic participants. Virtual reality NP platforms may complement the traditional concussion assessment battery by providing novel information.
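
    Sensitivity and specificity follow directly from confusion-matrix counts. A minimal sketch, assuming hypothetical counts that reproduce the combined-module percentages for the 24 concussed and 128 control participants:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Assumed counts: 23 of 24 concussed flagged, 123 of 128 controls cleared.
sens, spec = sens_spec(tp=23, fn=1, tn=123, fp=5)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # -> 95.8%, 96.1%
```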

  15. Integration Head Mounted Display Device and Hand Motion Gesture Device for Virtual Reality Laboratory

    NASA Astrophysics Data System (ADS)

    Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.

    2018-01-01

    The Virtual Reality Laboratory (VR Lab) is an innovation over conventional learning media that shows the whole learning process in a laboratory. Users need many tools and materials to carry out practical work in it, so through this innovation users can experience a new learning atmosphere. Technologies are now more sophisticated than before, so carrying them into education can make it more effective and efficient. Supporting technologies, such as a head-mounted display device and a hand-motion gesture device, are needed to build the VR Lab, and their integration is the subject of this research. The head-mounted display device is used for viewing the 3D environment of the virtual reality laboratory. The hand-motion gesture device captures the user's real hand, which is then visualized in the virtual reality laboratory. Using the newest technologies in the learning process can make it more interesting and easier to understand.

  16. Visual field examination method using virtual reality glasses compared with the Humphrey perimeter.

    PubMed

    Tsapakis, Stylianos; Papaconstantinou, Dimitrios; Diagourtas, Andreas; Droutsas, Konstantinos; Andreanos, Konstantinos; Moschos, Marilita M; Brouzas, Dimitrios

    2017-01-01

    To present a visual field examination method using virtual reality glasses and evaluate the reliability of the method by comparing the results with those of the Humphrey perimeter. Virtual reality glasses, a smartphone with a 6-inch display, and software that implements a fast-threshold 3 dB step staircase algorithm for the central 24° of the visual field (52 points) were used to test 20 eyes of 10 patients, who were tested in random and consecutive order as they appeared in our glaucoma department. The results were compared with those obtained from the same patients using the Humphrey perimeter. A high correlation coefficient (r = 0.808, P < 0.0001) was found between the virtual reality visual field test and the Humphrey perimeter visual field. Visual field examination results using virtual reality glasses correlate highly with those of the Humphrey perimeter, making the method potentially suitable for clinical use.
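
    The fixed-step staircase idea can be illustrated with a minimal sketch: step the stimulus level in 3 dB increments, reverse direction whenever the response changes, and stop after a set number of reversals. This is a generic staircase, not the authors' exact algorithm; the parameters and the simulated patient are illustrative.

```python
def staircase(sees, start=25, step=3, max_reversals=2):
    """Minimal up/down staircase with a fixed dB step. `sees(level)` is a
    callback returning True if the stimulus at `level` dB was detected
    (higher dB = more attenuated, i.e. dimmer). Seen -> dimmer (+step),
    missed -> brighter (-step). Returns the level after the reversals."""
    level, reversals = start, 0
    last = sees(level)
    while reversals < max_reversals:
        level = level + step if last else level - step
        cur = sees(level)
        if cur != last:
            reversals += 1
        last = cur
    return level

# Simulated patient who detects stimuli attenuated up to 17 dB.
print(staircase(lambda db: db <= 17))  # -> 19, near the simulated threshold
```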

  17. Virtual Reality Model of the Three-Dimensional Anatomy of the Cavernous Sinus Based on a Cadaveric Image and Dissection.

    PubMed

    Qian, Zeng-Hui; Feng, Xu; Li, Yang; Tang, Ke

    2018-01-01

    Studying the three-dimensional (3D) anatomy of the cavernous sinus is essential for treating lesions in this region with skull base surgeries. Cadaver dissection is a conventional method that has insurmountable flaws with regard to understanding spatial anatomy. The authors' research aimed to build an image model of the cavernous sinus region in a virtual reality system to precisely, individually and objectively elucidate the complete and local stereo-anatomy. Computed tomography and magnetic resonance imaging scans were performed on 5 adult cadaver heads. Latex mixed with contrast agent was injected into the arterial system and then into the venous system. Computed tomography scans were performed again following the 2 injections. Magnetic resonance imaging scans were performed again after the cranial nerves were exposed. Image data were input into a virtual reality system to establish a model of the cavernous sinus. Observation results of the image models were compared with those of the cadaver heads. Visualization of the cavernous sinus region models built using the virtual reality system was good for all the cadavers. High resolutions were achieved for the images of different tissues. The observed results were consistent with those of the cadaver head. The spatial architecture and modality of the cavernous sinus were clearly displayed in the 3D model by rotating the model and conveniently changing its transparency. A 3D virtual reality model of the cavernous sinus region is helpful for globally and objectively understanding anatomy. The observation procedure was accurate, convenient, noninvasive, and time and specimen saving.

  18. Knowledge and Valorization of Historical Sites Through 3d Documentation and Modeling

    NASA Astrophysics Data System (ADS)

    Farella, E.; Menna, F.; Nocerino, E.; Morabito, D.; Remondino, F.; Campi, M.

    2016-06-01

    The paper presents the first results of an interdisciplinary project on the 3D documentation, dissemination, valorization and digital access of archaeological sites. Besides the 3D documentation aim, the project has two goals: (i) to easily explore and share via the web the references and results of the interdisciplinary work, including the interpretative process and the final reconstruction of the remains; and (ii) to promote and valorize archaeological areas using reality-based 3D data and Virtual Reality devices. The method has been verified on the ruins of the archaeological site of Pausilypon, a maritime villa of the Roman period (Naples, Italy). Using Unity3D, the virtual tour of the heritage site was integrated and enriched with the surveyed 3D data, text documents, CAAD reconstruction hypotheses, drawings, photos, etc. In this way, starting from the actual appearance of the ruins (panoramic images), passing through the 3D digital surveying models and several other pieces of historical information, the user is able to access virtual contents and reconstructed scenarios, all in a single virtual, interactive and immersive environment. These contents and scenarios allow the user to derive documentation and geometrical information, understand the site, perform analyses, see interpretative processes, communicate historical information and valorize the heritage location.

  19. Application of advanced virtual reality and 3D computer assisted technologies in tele-3D-computer assisted surgery in rhinology.

    PubMed

    Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj

    2008-03-01

    The real-time requirement means that the simulation should be able to follow the actions of a user who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. A real-time virtual reality system will then update the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of a computer in real time requires the development of appropriate hardware and software to connect the medical instrumentarium with the computer, and to operate the computer through the connected instrumentarium and sophisticated multimedia interfaces.

  20. Authoring Adaptive 3D Virtual Learning Environments

    ERIC Educational Resources Information Center

    Ewais, Ahmed; De Troyer, Olga

    2014-01-01

    The use of 3D and Virtual Reality is gaining interest in the context of academic discussions on E-learning technologies. However, the use of 3D for learning environments also has drawbacks. One way to overcome these drawbacks is by having an adaptive learning environment, i.e., an environment that dynamically adapts to the learner and the…

  1. Immersive Education, an Annotated Webliography

    ERIC Educational Resources Information Center

    Pricer, Wayne F.

    2011-01-01

    In this second installment of a two-part feature on immersive education a webliography will provide resources discussing the use of various types of computer simulations including: (a) augmented reality, (b) virtual reality programs, (c) gaming resources for teaching with technology, (d) virtual reality lab resources, (e) virtual reality standards…

  2. Recent advances in head-mounted light field displays for virtual and augmented reality (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hua, Hong

    2017-02-01

    Head-mounted light field displays render a true 3D scene by sampling either the projections of the 3D scene at different depths or the directions of the light rays apparently emitted by the 3D scene and viewed from different eye positions. They are capable of rendering correct or nearly correct focus cues and addressing the very well-known vergence-accommodation mismatch problem in conventional virtual and augmented reality displays. In this talk, I will focus on reviewing recent advancements of head-mounted light field displays for VR and AR applications. I will demonstrate examples of HMD systems developed in my group.
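
    The vergence-accommodation mismatch mentioned here is commonly quantified in diopters, as the difference between the display's fixed focal distance and the virtual object's rendered distance. A small illustration (not from the presentation itself; the distances are hypothetical):

```python
def va_conflict(display_focal_m: float, virtual_object_m: float) -> float:
    """Vergence-accommodation conflict in diopters: the eye accommodates to
    the display's fixed focal plane while converging on the virtual object."""
    return abs(1.0 / display_focal_m - 1.0 / virtual_object_m)

# Conventional HMD with a fixed 2 m focal plane, object rendered at 0.5 m:
print(round(va_conflict(2.0, 0.5), 2))  # -> 1.5 diopters of conflict
```

    A light field display drives this difference toward zero by rendering correct (or nearly correct) focus cues at the object's depth.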

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes, Birchard P; Michel, Kelly D; Few, Douglas A

    From stereophonic, positional sound to high-definition imagery that is crisp and clean, high-fidelity computer graphics enhance our view, insight, and intuition regarding our environments and conditions. Contemporary 3-D modeling tools offer an open architecture framework that enables integration with other technologically innovative arenas. One innovation of great interest is Augmented Reality: the merging of virtual, digital environments with physical, real-world environments, creating a mixed reality where relevant data and information augment the real experience in real time by spatial or semantic context. Pairing 3-D virtual immersive models with a dynamic platform such as semi-autonomous robotics or personnel odometry systems to create a mixed reality offers a new and innovative design-information verification and inspection capability, evaluation accuracy, and information-gathering capability for nuclear facilities. Our paper discusses the integration of two innovative technologies, 3-D visualizations and inertial positioning systems, and the resulting augmented reality offered to the human inspector. The discussion includes an exploration of human and non-human (surrogate) inspections of a nuclear facility, integrated safeguards knowledge within a synchronized virtual model operated, or worn, by a human inspector, and the anticipated benefits to safeguards evaluations of facility operations.

  4. A rapid algorithm for realistic human reaching and its use in a virtual reality system

    NASA Technical Reports Server (NTRS)

    Aldridge, Ann; Pandya, Abhilash; Goldsby, Michael; Maida, James

    1994-01-01

    The Graphics Analysis Facility (GRAF) at JSC has developed a rapid algorithm for computing realistic human reaching. The algorithm was applied to GRAF's anthropometrically correct human model and used in a 3D computer graphics system and a virtual reality system. The nature of the algorithm and its uses are discussed.

  5. Linking Audio and Visual Information while Navigating in a Virtual Reality Kiosk Display

    ERIC Educational Resources Information Center

    Sullivan, Briana; Ware, Colin; Plumlee, Matthew

    2006-01-01

    3D interactive virtual reality museum exhibits should be easy to use, entertaining, and informative. If the interface is intuitive, it will allow the user more time to learn the educational content of the exhibit. This research deals with interface issues concerning activating audio descriptions of images in such exhibits while the user is…

  6. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
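    The Convolvotron's core operation, filtering each source with a per-ear impulse response so the brain hears a direction, can be illustrated with a static toy example. The real device uses large, time-varying filters that compensate for motion and mixes four sources; the impulse responses below are made-up values, not measured HRTF data.

```python
import numpy as np

def render_binaural(src, hrir_left, hrir_right):
    """Filter one mono source with a per-ear impulse response: the convolution
    the Convolvotron performs in hardware (toy static version)."""
    return np.convolve(src, hrir_left), np.convolve(src, hrir_right)

# Toy impulse responses: the right ear hears the click 3 samples later and at
# half amplitude, as if the source sat to the listener's left.
hrir_l = np.array([1.0, 0.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.0, 0.5])

click = np.zeros(8)
click[0] = 1.0
left, right = render_binaural(click, hrir_l, hrir_r)
```

    For several simultaneous sources, each would get its own (continuously updated) filter pair and the per-ear outputs would be summed before playback over headphones.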

  7. Magic cards: a new augmented-reality approach.

    PubMed

    Demuynck, Olivier; Menendez, José Manuel

    2013-01-01

    Augmented reality (AR) commonly uses markers for detection and tracking. Such multimedia applications associate each marker with a virtual 3D model stored in the memory of the camera-equipped device running the application. Application users are limited in their interactions, which require knowing how to design and program 3D objects. This generally prevents them from developing their own entertainment AR applications. The Magic Cards application solves this problem by offering an easy way to create and manage an unlimited number of virtual objects that are encoded on special markers.

  8. Innovative application of virtual display technique in virtual museum

    NASA Astrophysics Data System (ADS)

    Zhang, Jiankang

    2017-09-01

    A virtual museum displays and simulates the functions of a real museum on the Internet in the form of three-dimensional (3D) virtual reality, using interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum that interacts effectively with its offline counterpart means making full use of 3D panorama, virtual reality, and augmented reality techniques, and innovatively applying dynamic environment modeling, real-time 3D graphics generation, system integration, and other key virtual reality techniques in the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography or virtual reality, is based on static images of reality. The virtual reality technique is a computer simulation system that can create, and let users experience, an interactive 3D dynamic visual world. Augmented reality, also known as mixed reality, simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience directly in reality. Together these technologies make the virtual museum possible. It will not only bring a better experience and greater convenience to the public, but also help improve the influence and cultural functions of the real museum.

  9. Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steve

    2011-03-01

    Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it was a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology and medicine as well as other tasks that require hand-eye coordination. One of the most unique characteristics of HUVR is that a user can place their hands inside of the virtual environment without occluding the 3D image. Built using open-source software and consumer level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable and scalable.

  10. True 3D digital holographic tomography for virtual reality applications

    NASA Astrophysics Data System (ADS)

    Downham, A.; Abeywickrema, U.; Banerjee, P. P.

    2017-09-01

    Previously, a single CCD camera has been used to record holograms of an object while the object is rotated about a single axis to reconstruct a pseudo-3D image, which does not show detailed depth information from all perspectives. To generate a true 3D image, the object has to be rotated through multiple angles and along multiple axes. In this work, to reconstruct a true 3D image including depth information, a die is rotated along two orthogonal axes, and holograms are recorded using a Mach-Zehnder setup, which are subsequently numerically reconstructed. This allows for the generation of multiple images containing phase (i.e., depth) information. These images, when combined, create a true 3D image with depth information which can be exported to a Microsoft® HoloLens for true 3D virtual reality.
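    Numerically reconstructing a recorded hologram at a chosen depth, the step this abstract refers to, is commonly done with the angular spectrum propagation method. The sketch below is illustrative only (not the authors' code), and the grid size, pixel pitch, and wavelength are our own assumptions.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (angular spectrum method),
    the kind of numerical step used to recover depth from recorded holograms."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: propagating forward then backward recovers the original field.
n, dx, wl = 64, 10e-6, 633e-9                    # grid size, pixel pitch, HeNe-like wavelength
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
g = np.exp(-(X**2 + Y**2) / (50e-6) ** 2).astype(complex)
g_back = angular_spectrum(angular_spectrum(g, wl, dx, 1e-3), wl, dx, -1e-3)
```

    Reconstructing the same hologram with several values of z yields the stack of depth slices that, over multiple object rotations, can be fused into the true 3D image described above.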

  11. The WINCKELMANN300 Project: Dissemination of Culture with Virtual Reality at the Capitoline Museum in Rome

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Malatesta, S. G.; Lella, F.; Fanini, B.; Sala, F.; Dodero, E.; Petacco, L.

    2018-05-01

    Nowadays, the best way to disseminate culture is to create virtual and augmented reality scenarios that supply museum visitors with a powerful, interactive tool for learning sometimes difficult concepts in an easy, entertaining way. 3D models derived from reality-based techniques are now used to preserve, document, and restore historical artefacts. These digital contents are also a powerful instrument for interactively communicating their significance to non-specialists, making concepts that are sometimes complicated or unclear easier to understand. Virtual and augmented reality are certainly valid tools for interacting with 3D models and a fundamental help in making culture more accessible to the wider public. These technologies can help museum curators adapt the cultural proposal and the information about the artefacts to different categories of visitors. They allow visitors to travel through space and time and serve a great educational function, permitting information and concepts that could prove complicated to be explained in an easy and attractive way. The aim of this paper is to create a virtual scenario and an augmented reality app that recreate specific spaces in the Capitoline Museum in Rome as they were during Winckelmann's time, placing specific statues in their original 18th-century positions.

  12. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only express real-world objects naturally, realistically, and vividly, but also expand the campus in time and space, combining the school environment with its information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land parcels, and other objects, and then realizes dynamic interaction by programming the object models from 3ds Max in VRML. The research focuses on virtual campus scene modeling and VRML scene design, and on optimization strategies for the various real-time processing techniques in the scene design process, which guarantee texture-map image quality while improving the running speed of texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD, and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  13. Molecular Rift: Virtual Reality for Drug Designers.

    PubMed

    Norrby, Magnus; Grebner, Christoph; Eriksson, Joakim; Boström, Jonas

    2015-11-23

    Recent advances in interaction design have created new ways to use computers. One example is the ability to create enhanced 3D environments that simulate physical presence in the real world--a virtual reality. This is relevant to drug discovery since molecular models are frequently used to obtain deeper understandings of, say, ligand-protein complexes. We have developed a tool (Molecular Rift), which creates a virtual reality environment steered with hand movements. Oculus Rift, a head-mounted display, is used to create the virtual settings. The program is controlled by gesture-recognition, using the gaming sensor MS Kinect v2, eliminating the need for standard input devices. The Open Babel toolkit was integrated to provide access to powerful cheminformatics functions. Molecular Rift was developed with a focus on usability, including iterative test-group evaluations. We conclude with reflections on virtual reality's future capabilities in chemistry and education. Molecular Rift is open source and can be downloaded from GitHub.

  14. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  15. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on Web camera (Webcam) input-interface. The interface can replace with the mouse to control direction intention of a user by the method of frame difference. We divide a frame into nine grids from Webcam and make use of the background registration to compute the moving object. In order to…
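    The nine-grid frame-difference scheme this abstract describes can be sketched as follows; the function name, threshold, and frame sizes are our own assumptions, and real webcam input would of course replace the synthetic frames.

```python
import numpy as np

def motion_cell(prev_frame, cur_frame, threshold=25, grid=3):
    """Split the frame into a 3x3 grid, count changed pixels per cell via frame
    differencing, and return the most active cell (row, col) as the user's
    direction intention."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int)) > threshold
    h, w = diff.shape
    h, w = h - h % grid, w - w % grid            # trim to a multiple of the grid
    counts = diff[:h, :w].reshape(grid, h // grid, grid, w // grid).sum(axis=(1, 3))
    return tuple(np.unravel_index(np.argmax(counts), counts.shape))

# Synthetic frames: a bright "hand" enters the bottom-left region of the image.
prev = np.zeros((90, 90), dtype=np.uint8)
cur = prev.copy()
cur[65:90, 0:25] = 255
cell = motion_cell(prev, cur)                    # -> (2, 0): bottom-left grid cell
```

    Mapping each grid cell to a movement command (e.g. bottom-left means "turn left and back", assuming such a mapping) then replaces the mouse for steering in the 3D scene.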

  16. Using Virtual Reality For Outreach Purposes in Planetology

    NASA Astrophysics Data System (ADS)

    Civet, François; Le Mouélic, Stéphane; Le Menn, Erwan; Beaunay, Stéphanie

    2016-10-01

    2016 was a year marked by a technological breakthrough: the availability, for the first time, of technologically mature virtual reality devices to the general public. Virtual reality consists in visually immersing a user in a 3D environment reproduced from real and/or imaginary data, with the possibility to move around and eventually interact with the different elements. In planetology, most of the places will remain inaccessible to the public for a while, but a fleet of dedicated spacecraft, such as orbiters, landers, and rovers, makes it possible to reconstruct these environments virtually, using image processing, cartography, and photogrammetry. Virtual reality can then bridge the gap, virtually "sending" any user to the place to enjoy the exploration. We are investigating several types of devices to render orbital or ground-based data of planetological interest, mostly from Mars. The simplest system is a "cardboard" headset in which the user's cellphone serves as the screen. A more comfortable experience is obtained with more complex systems such as the HTC Vive or Oculus Rift headsets, which include a tracking system important for minimizing motion sickness. The third environment we have developed is based on the CAVE concept, where four 3D video projectors project onto three 2x3 m walls plus the ground. These systems can be used for scientific data analysis, but also prove perfectly suited for outreach and education purposes.

  17. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments

    NASA Astrophysics Data System (ADS)

    Portalés, Cristina; Lerma, José Luis; Navarro, Santiago

    2010-01-01

    Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigations. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated in real (physical) urban worlds. The augmented environment that is presented herein requires for visualization a see-through video head mounted display (HMD), whereas user's movement navigation is achieved in the real world with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper will deal with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some software and complex issues, which are discussed in the paper.

  18. The electronic-commerce-oriented virtual merchandise model

    NASA Astrophysics Data System (ADS)

    Fang, Xiaocui; Lu, Dongming

    2004-03-01

    Electronic commerce has become the trend in commercial activity. Provided with a virtual reality interface, electronic commerce gains better expressive capacity and means of interaction. But in most applications of virtual reality technology in e-commerce, the 3D model is only an appearance description of the merchandise, carrying almost no commerce or interaction information. This results in a disjunction between the virtual model and the commerce information. We therefore present the Electronic-Commerce-oriented Virtual Merchandise Model (ECVMM), which combines a model with the commerce information, interaction information, and figure information of virtual merchandise. With its abundant information, ECVMM provides better support for obtaining and communicating information in electronic commerce.

  19. Meditation experts try Virtual Reality Mindfulness: A pilot study evaluation of the feasibility and acceptability of Virtual Reality to facilitate mindfulness practice in people attending a Mindfulness conference.

    PubMed

    Navarro-Haro, María V; López-Del-Hoyo, Yolanda; Campos, Daniel; Linehan, Marsha M; Hoffman, Hunter G; García-Palacios, Azucena; Modrego-Alarcón, Marta; Borao, Luis; García-Campayo, Javier

    2017-01-01

    Regular mindfulness practice benefits people both mentally and physically, but many populations who could benefit do not practice mindfulness. Virtual Reality (VR) is a new technology that helps capture participants' attention and gives users the illusion of "being there" in the 3D computer generated environment, facilitating sense of presence. By limiting distractions from the real world, increasing sense of presence, and giving people an interesting place to go to practice mindfulness, Virtual Reality may facilitate mindfulness practice. Traditional Dialectical Behavioral Therapy (DBT®) mindfulness skills training was specifically designed for clinical treatment of people who have trouble focusing attention; however, severe patients often show difficulties or a lack of motivation to practice mindfulness during the training. The present pilot study explored whether a sample of mindfulness experts would find useful and recommend a new VR DBT® mindfulness skills training technique and whether they would show any benefit. Forty-four participants attending a mindfulness conference put on an Oculus Rift DK2 Virtual Reality helmet and floated down a calm 3D computer generated virtual river while listening to digitized DBT® mindfulness skills training instructions. On subjective questionnaires completed before and after the VR DBT® mindfulness skills training session, participants reported increases/improvements in state of mindfulness and reductions in negative emotional states. After VR, participants reported significantly less sadness, anger, and anxiety, and reported being significantly more relaxed. Participants reported a moderate to strong illusion of going inside the 3D computer generated world (i.e., moderate to high "presence" in VR) and showed high acceptance of VR as a technique to practice mindfulness. These results show encouraging preliminary evidence of the feasibility and acceptability of using VR to practice mindfulness based on clinical expert feedback. VR is a technology with potential to increase computerized dissemination of DBT® skills training modules. Future research is warranted.

  1. Armagh Observatory - Historic Building Information Modelling for Virtual Learning in Building Conservation

    NASA Astrophysics Data System (ADS)

    Murphy, M.; Chenaux, A.; Keenaghan, G.; Gibson, V.; Butler, J.; Pybus, C.

    2017-08-01

    In this paper the recording and design of a Virtual Reality Immersive Model of Armagh Observatory is presented, which will replicate the historic buildings and landscape, with distant meridian markers and the positions of its principal historic instruments, within a model of the night sky showing the positions of bright stars. The virtual reality model can be used for educational purposes, allowing the instruments within the historic building model to be manipulated in 3D space to demonstrate how position measurements of stars were made in the 18th century. A description is given of current student and researcher activities concerning on-site recording and surveying and the virtual modelling of the buildings and landscape. This is followed by a design for a Virtual Reality Immersive Model of Armagh Observatory using game engines and virtual learning platforms and concepts.

  2. An Intelligent Virtual Human System For Providing Healthcare Information And Support

    DTIC Science & Technology

    2011-01-01

    Over the last 15 years, a virtual revolution has taken place in the use of Virtual Reality for clinical purposes. Shifts in the social and scientific landscape have now set the stage for the next major movement in Clinical Virtual Reality with the "birth" of intelligent virtual humans. Seminal research and development has appeared in the creation of highly interactive…

  3. A computer-based training system combining virtual reality and multimedia

    NASA Technical Reports Server (NTRS)

    Stansfield, Sharon A.

    1993-01-01

    Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.

  4. A 3D virtual reality simulator for training of minimally invasive surgery.

    PubMed

    Mi, Shao-Hua; Hou, Zeng-Guang; Yang, Fan; Xie, Xiao-Liang; Bian, Gui-Bin

    2014-01-01

    For the last decade, remarkable progress has been made in the field of cardiovascular disease treatment. However, these complex medical procedures require a combination of rich experience and technical skill. In this paper, a 3D virtual reality simulator for core skills training in minimally invasive surgery is presented. The system can generate realistic 3D vascular models segmented from patient datasets, including a beating heart, and provides real-time force computation and a force-feedback module for surgical simulation. Instruments such as a catheter or guide wire are represented by a multi-body mass-spring model. In addition, a realistic user interface with multiple windows and real-time 3D views has been developed. Moreover, the simulator provides a human-machine interaction module that gives doctors the sense of touch during surgical training and enables them to control the motion of a virtual catheter/guide wire inside a complex vascular model. Experimental results show that the simulator is suitable for minimally invasive surgery training.
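    The "multi-body mass-spring model" used for the catheter/guide wire can be illustrated with a minimal sketch: point masses joined by springs, integrated explicitly. The chain layout, stiffness, damping, and time step below are our own assumptions, not values from the paper.

```python
import numpy as np

def step_wire(pos, vel, rest_len=1.0, k=50.0, damping=0.9, dt=0.01):
    """One explicit-Euler step for a chain of unit masses joined by springs,
    a minimal version of a multi-body mass-spring guide-wire model."""
    forces = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        dist = np.linalg.norm(d)
        f = k * (dist - rest_len) * d / dist     # Hooke's law along the segment
        forces[i] += f
        forces[i + 1] -= f
    vel = (vel + forces * dt) * damping          # damping keeps explicit Euler stable
    pos = pos + vel * dt
    pos[0] = 0.0                                 # proximal node pinned where the user holds it
    return pos, vel

# A 3-node wire stretched past its rest length relaxes back toward it.
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.5], [0.0, 0.0, 3.0]])
vel = np.zeros_like(pos)
for _ in range(300):
    pos, vel = step_wire(pos, vel)
```

    A full simulator would add bending springs, collision forces from the vessel wall, and would feed the net force at the held node back to the haptic device.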

  5. Dynamic concision for three-dimensional reconstruction of human organ built with virtual reality modelling language (VRML).

    PubMed

    Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun

    2005-07-01

    This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using virtual reality modelling language (VRML) and JavaScript language, with a focus on how to realize the dynamic concision of 3D medical model with script node and sensor node in VRML. The 3D reconstruction and concision of body internal organs can be built with such high quality that they are better than those obtained from the traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for man-computer interaction in real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirement for the medical observation of 3D reconstruction and have a promising prospect in the fields of medical imaging.

  7. Virtual reality welder training

    NASA Astrophysics Data System (ADS)

    White, Steven A.; Reiners, Dirk; Prachyabrued, Mores; Borst, Christoph W.; Chambers, Terrence L.

    2010-01-01

    This document describes the Virtual Reality Simulated MIG Lab (sMIG), a system for Virtual Reality welder training. It is designed to reproduce the experience of metal inert gas (MIG) welding faithfully enough to be used as a teaching tool for beginning welding students. To make the experience as realistic as possible it employs physically accurate and tracked input devices, a real-time welding simulation, real-time sound generation and a 3D display for output. Thanks to being a fully digital system it can go beyond providing just a realistic welding experience by giving interactive and immediate feedback to the student to avoid learning wrong movements from day 1.

  8. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    NASA Astrophysics Data System (ADS)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of accessible, approachable ways to train surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are often unavailable due to the high cost of the equipment. Using modern technologies such as virtual reality and hand-movement tracking, we want to create an innovative method for learning operative technique in a 3D game format, which can make the education process interesting and effective. Creating a 3D virtual simulator will address several conceptual problems at once: the opportunity to improve practical skills without time limits and without risk to the patient; high realism of the operating-room environment and anatomical body structures; the use of game mechanics to ease information perception and accelerate the memorization of methods; and the accessibility of the program.

  9. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike traditional multi-touch systems, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing the multi-touch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system for the exhibition of historical relics and other precious goods.

  10. French Military Applications of Virtual Reality

    DTIC Science & Technology

    2000-11-01

French Military Applications of Virtual Reality. Jean Paul Papin and Pascal Hue, DGA/DCE/ETC4/ETAS, Etablissement Technique d'Angers, BP 36, 49460 Montreuil Juigné, France. INTRODUCTION: France is now applying virtual…

  11. Implementing Virtual Reality Technology as an Effective Web Based Kiosk: Darulaman's Teacher Training College Tour (Ipda Vr Tour)

    ERIC Educational Resources Information Center

    Fadzil, Azman

    2006-01-01

At present, the development of Virtual Reality (VR) technology is expanding due to the importance of, and need for, 3D elements and 360-degree panoramas in conveying a clearer picture to consumers in various fields such as education, military, medicine, entertainment and so on. The web based VR kiosk project in Darulaman's Teacher Training…

  12. A Head in Virtual Reality: Development of A Dynamic Head and Neck Model

    ERIC Educational Resources Information Center

    Nguyen, Ngan; Wilson, Timothy D.

    2009-01-01

    Advances in computer and interface technologies have made it possible to create three-dimensional (3D) computerized models of anatomical structures for visualization, manipulation, and interaction in a virtual 3D environment. In the past few decades, a multitude of digital models have been developed to facilitate complex spatial learning of the…

  13. Exploring 3-D Virtual Reality Technology for Spatial Ability and Chemistry Achievement

    ERIC Educational Resources Information Center

    Merchant, Z.; Goetz, E. T.; Keeney-Kennicutt, W.; Cifuentes, L.; Kwok, O.; Davis, T. J.

    2013-01-01

    We investigated the potential of Second Life® (SL), a three-dimensional (3-D) virtual world, to enhance undergraduate students' learning of a vital chemistry concept. A quasi-experimental pre-posttest control group design was used to conduct the study. A total of 387 participants completed three assignment activities either in SL or using…

  14. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  15. PTSD in Limb Trauma and Recovery

    DTIC Science & Technology

    2011-10-01

Program 4: "Virtual Reality and Motion Analysis to Characterize Disabilities in Lower Limb Injury" (Christopher Rhea, Ph.D., lead investigator). Annual report, 10/16/2011. PI: Susan…

  16. Mackay campus of environmental education and digital cultural construction: the application of 3D virtual reality

    NASA Astrophysics Data System (ADS)

    Chien, Shao-Chi; Chung, Yu-Wei; Lin, Yi-Hsuan; Huang, Jun-Yi; Chang, Jhih-Ting; He, Cai-Ying; Cheng, Yi-Wen

    2012-04-01

This study uses 3D virtual reality technology to create the "Mackay campus environmental education and digital culture 3D navigation system" for local historical sites in the Tamsui (Hoba) area, in hopes of providing tourism information and navigation through historical sites using a 3D navigation system. We used AutoCAD, SketchUp, and SpaceEyes 3D software to construct the virtual reality scenes and recreate the school's historical sites, such as the House of Reverends, the House of Maidens, the Residence of Mackay, and the Education Hall. With this technology we completed the Mackay campus environmental education and digital culture platform. The platform we established can indeed achieve the desired function of providing tourism information and historical site navigation. The interactive multimedia style and the presentation of the information allow users to obtain a direct information response. In addition to showing the external appearance of buildings, the navigation platform also allows users to enter the buildings to view lifelike scenes and textual information related to the historical sites. The historical sites are modeled at their actual size, which gives users a more realistic feel. In terms of the navigation route, the system does not force users along a fixed route, but instead allows them to freely control the route they take to view the historical sites on the platform.

  17. Virtual reality as a tool for improving spatial rotation among deaf and hard-of-hearing children.

    PubMed

    Passig, D; Eden, S

    2001-12-01

The aim of this study was to investigate whether practice in rotating Virtual Reality (VR) three-dimensional (3D) objects would enhance the spatial rotation thinking of deaf and hard-of-hearing children compared to practice in rotating two-dimensional (2D) objects. Two groups were involved in this study: an experimental group of 21 deaf and hard-of-hearing children, who played a VR 3D game, and a control group of 23 deaf and hard-of-hearing children, who played a similar 2D (not VR) game. The results clearly indicate that practicing VR 3D spatial rotations significantly improved the children's performance of spatial rotation, which enhanced their ability to perform better in other intellectual skills as well as in their sign language skills.

  18. Virtual Reality as an Educational and Training Tool for Medicine.

    PubMed

    Izard, Santiago González; Juanes, Juan A; García Peñalvo, Francisco J; Estella, Jesús Mª Gonçalvez; Ledesma, Mª José Sánchez; Ruisoto, Pablo

    2018-02-01

Until very recently, we considered Virtual Reality something close at hand, yet still science fiction. Today, however, Virtual Reality is being integrated into many different areas of our lives, from videogames to industrial use cases, and, of course, it is starting to be used in medicine. There are two broad classifications of Virtual Reality. In the first, we visualize a world created entirely by computer, three-dimensional, and where we can tell that the world we are visualizing is not real, at least for the moment, as rendered images are improving very fast. The second basically consists of a reflection of our reality. This type of Virtual Reality is created using spherical or 360° images and videos, so we lose three-dimensional visualization capacity (until 3D cameras are more developed), but on the other hand we gain in terms of realism of the images. We could also mention a third classification that merges the previous two, where virtual elements created by computer coexist with 360° images and videos. In this article we show two systems that we have developed, each of which can be framed within one of the previous classifications, identifying the technologies used for their implementation as well as the advantages of each one. We also analyze how these systems can improve the current methodologies used for medical training. The implications of these developments as tools for teaching, learning and training are discussed.

  19. Implementing Virtual Reality Technology as an Effective WEB Based KIOSK: Darulaman's Teacher Training College Tour (IPDA VR Tour)

    ERIC Educational Resources Information Center

    Azman, Fadzil

    2004-01-01

At present, the development of Virtual Reality (VR) technology is expanding due to the importance of, and need for, 3D elements and 360-degree panoramas in conveying a clearer picture to consumers in various fields such as education, military, medicine, entertainment and so on. In line with this development, the web based VR kiosk project in…

  20. Web-based interactive 3D visualization as a tool for improved anatomy learning.

    PubMed

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain from its use in reaching their anatomical learning objectives. Several 3D vascular VR models were created using an interactive segmentation tool based on the "virtual contrast injection" method. This method allows users, with relative ease, to convert computer tomography or magnetic resonance images into vivid 3D VR movies using the OsiriX software equipped with the CMIV CTA plug-in. Once created using the segmentation tool, the image series were exported in Quick Time Virtual Reality (QTVR) format and integrated within a web framework of the Educational Virtual Anatomy (EVA) program. A total of nine QTVR movies were produced encompassing most of the major arteries of the body. These movies were supplemented with associated information, color keys, and notes. The results indicate that, in general, students' attitudes towards the EVA-program were positive when compared with anatomy textbooks, but results were not the same with dissections. Additionally, knowledge tests suggest a potentially beneficial effect on learning.

  1. Virtual reality anatomy: is it comparable with traditional methods in the teaching of human forearm musculoskeletal anatomy?

    PubMed

    Codd, Anthony M; Choudhury, Bipasha

    2011-01-01

The use of cadavers to teach anatomy is well established, but limitations with this approach have led to the introduction of alternative teaching methods. One such method is the use of three-dimensional virtual reality computer models. An interactive, three-dimensional computer model of human forearm anterior compartment musculoskeletal anatomy was produced using the open source 3D imaging program "Blender." The aim was to evaluate the use of 3D virtual reality when compared with traditional anatomy teaching methods. Three groups were identified from the University of Manchester second year Human Anatomy Research Skills Module class: a "control" group (no prior knowledge of forearm anatomy), a "traditional methods" group (taught using dissection and textbooks), and a "model" group (taught solely using the e-resource). The groups were assessed on anatomy of the forearm by a ten question practical examination. ANOVA showed the model group mean test score to be significantly higher than the control group (mean 7.25 vs. 1.46, P < 0.001) and not significantly different from the traditional methods group (mean 6.87, P > 0.5). Feedback from all users of the e-resource was positive. Virtual reality anatomy learning can be used to complement traditional teaching methods effectively. Copyright © 2011 American Association of Anatomists.

  2. Visualizing Mars Using Virtual Reality: A State of the Art Mapping Technique Used on Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    Stoker, C.; Zbinden, E.; Blackmon, T.; Nguyen, L.

    1999-01-01

We describe an interactive terrain visualization system which rapidly generates and interactively displays photorealistic three-dimensional (3-D) models produced from stereo images. This product, first demonstrated on Mars Pathfinder, is interactive, 3-D, and can be viewed in an immersive display, which qualifies it for the name Virtual Reality (VR). The use of this technology on Mars Pathfinder was the first use of VR for geologic analysis. A primary benefit of using VR to display geologic information is that it provides an improved perception of depth and spatial layout of the remote site. The VR aspect of the display allows an operator to move freely in the environment, unconstrained by the physical limitations of the perspective from which the data were acquired. Virtual Reality offers a way to archive and retrieve information that is intuitively obvious. Combining VR models with stereo display systems can give the user a sense of presence at the remote location. The capability to interactively perform measurements from within the VR model offers unprecedented ease in performing operations that are normally time consuming and difficult with other techniques. Thus, Virtual Reality can be a powerful cartographic tool. Additional information is contained in the original extended abstract.

  3. a Low-Cost and Lightweight 3d Interactive Real Estate-Purposed Indoor Virtual Reality Application

    NASA Astrophysics Data System (ADS)

    Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.

    2017-11-01

Interactive 3D architectural indoor design has become more popular since it began to benefit from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments and directly modify them. This lets buyers purchase a property off-the-plan more cheaply through virtual models: instead of showing the property through 2D plans or renders, the visualized interior architecture of an unbuilt, on-sale property is demonstrated beforehand, so that investors get an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require specialists to create such environments. In this study, we have created a real-estate-purposed, low-cost, high-quality, fully interactive VR application that provides a realistic interior architecture of the property using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real-estate-purposed VR application, and that it satisfied the expectations of property buyers.

  4. A Virtual Reality Simulator Prototype for Learning and Assessing Phaco-sculpting Skills

    NASA Astrophysics Data System (ADS)

    Choi, Kup-Sze

This paper presents a virtual reality based simulator prototype for learning phacoemulsification in cataract surgery, focusing on the skills required for making a cross-shaped trench in the cataractous lens with an ultrasound probe during the phaco-sculpting procedure. An immersive virtual environment is created with 3D models of the lens and surgical tools. A haptic device is used as the 3D user interface. Phaco-sculpting is simulated by interactively deleting the tetrahedrons constituting the lens model. Collisions between the virtual probe and the lens are identified efficiently by partitioning the space containing the lens hierarchically with an octree. The simulator can be programmed to collect real-time quantitative user data for reviewing and assessing a trainee's performance in an objective manner. A game-based learning environment can be created on top of the simulator by incorporating gaming elements based on the quantifiable performance metrics.
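The octree-based collision culling described above can be sketched as follows: tetrahedra are indexed by their axis-aligned bounding boxes, and a probe query visits only overlapping octants instead of testing all pairs. This is an illustrative sketch under our own assumptions (class layout, box representation), not the simulator's code:

```python
class Octree:
    """Minimal octree over axis-aligned bounding boxes ((min_xyz, max_xyz))."""

    def __init__(self, bounds, depth=0, max_depth=4, max_items=4):
        self.bounds = bounds
        self.items = []          # (aabb, payload) pairs stored at this node
        self.children = None
        self.depth, self.max_depth, self.max_items = depth, max_depth, max_items

    @staticmethod
    def overlap(a, b):
        (amin, amax), (bmin, bmax) = a, b
        return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

    def insert(self, aabb, payload):
        if self.children is not None:
            # Route to every overlapping child; straddling boxes may land in
            # several (duplicates are removed at query time via a set).
            for c in self.children:
                if Octree.overlap(c.bounds, aabb):
                    c.insert(aabb, payload)
            return
        self.items.append((aabb, payload))
        if len(self.items) > self.max_items and self.depth < self.max_depth:
            self._split()

    def _split(self):
        mn, mx = self.bounds
        mid = tuple((mn[i] + mx[i]) / 2 for i in range(3))
        self.children = []
        for octant in range(8):  # one child per octant, selected by bits
            cmin = tuple(mn[i] if ((octant >> i) & 1) == 0 else mid[i] for i in range(3))
            cmax = tuple(mid[i] if ((octant >> i) & 1) == 0 else mx[i] for i in range(3))
            self.children.append(Octree((cmin, cmax), self.depth + 1,
                                        self.max_depth, self.max_items))
        items, self.items = self.items, []
        for aabb, payload in items:
            self.insert(aabb, payload)

    def query(self, aabb):
        """Return payloads whose boxes overlap `aabb` (collision candidates)."""
        hits = set()
        if not Octree.overlap(self.bounds, aabb):
            return hits
        hits.update(p for box, p in self.items if Octree.overlap(box, aabb))
        if self.children:
            for c in self.children:
                hits |= c.query(aabb)
        return hits
```

Only the candidates returned by `query` then need an exact probe-tetrahedron intersection test.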

  5. Virtual Reality and Learning: Where Is the Pedagogy?

    ERIC Educational Resources Information Center

    Fowler, Chris

    2015-01-01

    The aim of this paper was to build upon Dalgarno and Lee's model or framework of learning in three-dimensional (3-D) virtual learning environments (VLEs) and to extend their road map for further research in this area. The enhanced model shares the common goal with Dalgarno and Lee of identifying the learning benefits from using 3-D VLEs. The…

  6. Stereopsis, Visuospatial Ability, and Virtual Reality in Anatomy Learning

    PubMed Central

    Vorstenbosch, Marc; Kooloos, Jan

    2017-01-01

    A new wave of virtual reality headsets has become available. A potential benefit for the study of human anatomy is the reintroduction of stereopsis and absolute size. We report a randomized controlled trial to assess the contribution of stereopsis to anatomy learning, for students of different visuospatial ability. Sixty-three participants engaged in a one-hour session including a study phase and posttest. One group studied 3D models of the anatomy of the deep neck in full stereoptic virtual reality; one group studied those structures in virtual reality without stereoptic depth. The control group experienced an unrelated virtual reality environment. A post hoc questionnaire explored cognitive load and problem solving strategies of the participants. We found no effect of condition on learning. Visuospatial ability however did impact correct answers at F(1) = 5.63 and p = .02. No evidence was found for an impact of cognitive load on performance. Possibly, participants were able to solve the posttest items based on visuospatial information contained in the test items themselves. Additionally, the virtual anatomy may have been complex enough to discourage memory based strategies. It is important to control the amount of visuospatial information present in test items. PMID:28656109

  7. Stereopsis, Visuospatial Ability, and Virtual Reality in Anatomy Learning.

    PubMed

    Luursema, Jan-Maarten; Vorstenbosch, Marc; Kooloos, Jan

    2017-01-01

A new wave of virtual reality headsets has become available. A potential benefit for the study of human anatomy is the reintroduction of stereopsis and absolute size. We report a randomized controlled trial to assess the contribution of stereopsis to anatomy learning, for students of different visuospatial ability. Sixty-three participants engaged in a one-hour session including a study phase and posttest. One group studied 3D models of the anatomy of the deep neck in full stereoptic virtual reality; one group studied those structures in virtual reality without stereoptic depth. The control group experienced an unrelated virtual reality environment. A post hoc questionnaire explored cognitive load and problem solving strategies of the participants. We found no effect of condition on learning. Visuospatial ability however did impact correct answers at F(1) = 5.63 and p = .02. No evidence was found for an impact of cognitive load on performance. Possibly, participants were able to solve the posttest items based on visuospatial information contained in the test items themselves. Additionally, the virtual anatomy may have been complex enough to discourage memory based strategies. It is important to control the amount of visuospatial information present in test items.

  8. Virtual environments simulation in research reactor

    NASA Astrophysics Data System (ADS)

    Muhamad, Shalina Bt. Sheik; Bahrin, Muhammad Hannan Bin

    2017-01-01

Virtual reality based simulations are interactive and engaging, and have useful potential for improving safety training. Virtual reality technology can be used to train workers who are unfamiliar with the physical layout of an area. In this study, a simulation program based on a virtual environment of a research reactor was developed. The platform used for the virtual simulation is the 3DVia software, whose rendering capabilities, movement and collision physics, and interactive navigation features were used. A real research reactor was virtually modelled and simulated, with avatar models adopted to simulate walking. Collision detection algorithms were developed for various parts of the 3D building and the avatars, to restrain the avatars to certain regions of the virtual environment. A user can control an avatar to move around inside the virtual environment. This work can thus assist in the training of personnel and in evaluating the radiological safety of the research reactor facility.

  9. Use of cues in virtual reality depends on visual feedback.

    PubMed

    Fulvio, Jacqueline M; Rokers, Bas

    2017-11-22

    3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.
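The motion parallax cue from head jitter mentioned above follows from simple viewing geometry: a small lateral head translation shifts nearer points through larger visual angles. A minimal sketch of that geometry (illustrative only, not the study's analysis; the function name is ours):

```python
import math

def parallax_shift(lateral_translation, depth):
    """Angular displacement (radians) of a stationary point at `depth` metres
    caused by a lateral head translation of `lateral_translation` metres,
    assuming fixation at infinity: theta = atan(b / z)."""
    return math.atan2(lateral_translation, depth)
```

For example, a 1 cm head jitter displaces a point at 1 m by roughly 0.01 rad (about 0.57 degrees), and points at half that depth by roughly twice as much.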

  10. Virtual reality 3D headset based on DMD light modulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. In our design, light from the DMD is imaged to infinity, and the user's own eye lens forms a real image on the retina.

  11. Virtual reality visualization algorithms for the ALICE high energy physics experiment on the LHC at CERN

    NASA Astrophysics Data System (ADS)

    Myrcha, Julian; Trzciński, Tomasz; Rokita, Przemysław

    2017-08-01

Analyzing massive amounts of data gathered during many high energy physics experiments, including but not limited to the LHC ALICE detector experiment, requires efficient and intuitive methods of visualisation. One of the possible approaches to that problem is stereoscopic 3D data visualisation. In this paper, we propose several methods that provide high quality data visualisation and we explain how those methods can be applied in virtual reality headsets. The outcome of this work is easily applicable to many real-life applications needed in high energy physics and can be seen as a first step towards using fully immersive virtual reality technologies within the framework of the ALICE experiment.

  12. A novel augmented reality system of image projection for image-guided neurosurgery.

    PubMed

    Mahvash, Mehran; Besharati Tabrizi, Leila

    2013-05-01

Augmented reality systems combine virtual images with a real environment. The aim was to design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) an MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) a digital-photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image onto the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented in a first evaluation with promising results. The virtual image could be projected onto the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was achieved using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image onto the patient's head, skull, or brain surface in real time is an augmented reality approach that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery.
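The reported registration accuracy is a mean projection error over fiducials, i.e. the average Euclidean distance between where each marker is projected and where it is measured on the phantom. A minimal sketch of that metric (illustrative coordinates, not the paper's data):

```python
import math

def mean_projection_error(projected, measured):
    """Mean Euclidean distance between projected fiducial positions and their
    measured positions (same units as the inputs, e.g. mm)."""
    assert len(projected) == len(measured) and projected
    return sum(math.dist(p, q) for p, q in zip(projected, measured)) / len(projected)
```

For instance, two fiducials each off by 0.3 mm vertically give a mean projection error of 0.3 mm.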

  13. HTC Vive MeVisLab integration via OpenVR for medical applications

    PubMed Central

    Egger, Jan; Gall, Markus; Wallner, Jürgen; Boechat, Pedro; Hann, Alexander; Li, Xing; Chen, Xiaojun; Schmalstieg, Dieter

    2017-01-01

Virtual Reality, an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, like the medical domain; examples are intervention planning, training and simulation. This is especially of use in medical operations where an aesthetic outcome is important, like facial surgeries. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the usage of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing a direct and uncomplicated usage of the head-mounted display HTC Vive inside the MeVisLab platform. Medical data coming from other MeVisLab modules can be connected directly, per drag-and-drop, to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection. PMID:28323840

  14. HTC Vive MeVisLab integration via OpenVR for medical applications.

    PubMed

    Egger, Jan; Gall, Markus; Wallner, Jürgen; Boechat, Pedro; Hann, Alexander; Li, Xing; Chen, Xiaojun; Schmalstieg, Dieter

    2017-01-01

Virtual Reality, an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, like the medical domain; examples are intervention planning, training and simulation. This is especially of use in medical operations where an aesthetic outcome is important, like facial surgeries. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the usage of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing a direct and uncomplicated usage of the head-mounted display HTC Vive inside the MeVisLab platform. Medical data coming from other MeVisLab modules can be connected directly, per drag-and-drop, to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection.

  15. Immersive Earth Science: Data Visualization in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Skolnik, S.; Ramirez-Linan, R.

    2017-12-01

Utilizing next generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and mission with VR visualizations that display temporally-aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of the geo-located data in VR and subsequent display of changeable basemaps, overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.

  16. Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens.

    PubMed

    Shen, Xin; Javidi, Bahram

    2018-03-01

    We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with enhanced depth range of a 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images with various positions to the micro lens array. Based on resolution priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and adjusts the vergence-accommodation conflict of a traditional augmented reality display.
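The role of the focus-tunable lens in relaying elemental images to multiple planes can be sketched with the thin-lens equation: tuning the focal length moves the conjugate image plane, which is how multiple lenslet image planes extend the depth range. This is an illustrative model with made-up distances, not the authors' optical design:

```python
def image_distance(focal_length, object_distance):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance.
    Units are arbitrary but must be consistent (e.g. millimetres)."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)
```

For example, with the elemental image held at d_o = 100 mm, a lens tuned to f = 50 mm relays it to d_i = 100 mm, while retuning to f = 40 mm pulls the relayed plane in to about 66.7 mm, so sweeping f generates the stack of image planes.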

  17. Combining 3D structure of real video and synthetic objects

    NASA Astrophysics Data System (ADS)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in fields such as advanced animation, virtual reality, and games. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes to augment the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the use of computer graphics alone. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with a 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map. Graphic objects are then combined with the height map. Our proposed approach is realized in the following steps: (1) We derive the 3D structure from test image sequences; this requires the estimation of depth and the construction of a height map. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, step (3) is easily manipulated. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
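As a sketch of the height-map-to-mesh step, the following converts a height-map grid into a texture-mappable triangle mesh. It uses a simple regular-grid triangulation in place of the Delaunay/Bezier modeling the paper describes (the function name and grid layout are our own assumptions):

```python
def heightmap_to_mesh(heights):
    """Turn a regular height-map grid (rows x cols of z values) into a
    triangle mesh: vertices as (x, y, z), two triangles per grid cell."""
    rows, cols = len(heights), len(heights[0])
    verts = [(x, y, heights[y][x]) for y in range(rows) for x in range(cols)]

    def idx(x, y):
        return y * cols + x

    tris = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            a, b = idx(x, y), idx(x + 1, y)
            c, d = idx(x, y + 1), idx(x + 1, y + 1)
            tris.append((a, b, c))   # upper-left triangle of the cell
            tris.append((b, d, c))   # lower-right triangle of the cell
    return verts, tris
```

Each planar triangle can then be texture-mapped from the source video frame, and graphic objects placed at known (x, y) positions inherit their ground height from the map.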

  18. Can a virtual supermarket bring realism into the lab? Comparing shopping behavior using virtual and pictorial store representations to behavior in a physical store.

    PubMed

    van Herpen, Erica; van den Broek, Eva; van Trijp, Hans C M; Yu, Tian

    2016-12-01

    Immersive virtual reality techniques present new opportunities for research into consumer behavior. The current study examines whether the increased realism of a virtual store compared to pictorial (2D) stimuli elicits consumer behavior that is more in line with behavior in a physical store. We examine the number, variety, and type of products selected, amount of money spent, and responses to price promotions and shelf display, in three product categories (fruit & vegetables, milk, and biscuits). We find that virtual reality elicits behavior that is more similar to behavior in the physical store compared to the picture condition for the number of products selected (milk: M_store = 1.19, M_virtual = 1.53, M_pictures = 2.58) and amount of money spent (milk: M_store = 1.27, M_virtual = 1.53, M_pictures = 2.60 Euro), and for the selection of products from different areas of the shelf, both vertically (purchases from top shelves, milk and biscuits: P_store = 21.6%, P_virtual = 33.4%, P_pictures = 50.0%) and horizontally (purchases from the left shelf, biscuits: P_store = 35.5%, P_virtual = 53.3%, P_pictures = 66.7%). This indicates that virtual reality can improve realism in responses to shelf allocation. Virtual reality was not able to diminish other differences between lab and physical store: participants bought more products and spent more money (for biscuits and fruit & vegetables), bought more national brands, and responded more strongly to price promotions in both virtual reality and pictorial representations than in the physical store. Implications for the use of virtual reality in studies of consumer food choice behavior as well as for future improvement of virtual reality techniques are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Comparative evaluation of monocular augmented-reality display for surgical microscopes.

    PubMed

    Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N

    2012-01-01

    Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.

  20. Brave New World

    ERIC Educational Resources Information Center

    Panettieri, Joseph C.

    2007-01-01

    Across the globe, progressive universities are embracing any number of MUVEs (multi-user virtual environments), 3D environments, and "immersive" virtual reality tools. And within the next few months, several universities are expected to test so-called "telepresence" videoconferencing systems from Cisco Systems and other leading…

  1. Improving flexible thinking in deaf and hard of hearing children with virtual reality technology.

    PubMed

    Passig, D; Eden, S

    2000-07-01

    The study investigated whether rotating three-dimensional (3-D) objects using virtual reality (VR) will affect flexible thinking in deaf and hard of hearing children. Deaf and hard of hearing subjects were distributed into experimental and control groups. The experimental group played virtual 3-D Tetris (a game using VR technology) individually, 15 minutes once weekly over 3 months. The control group played conventional two-dimensional (2-D) Tetris over the same period. Children with normal hearing participated as a second control group in order to establish whether deaf and hard of hearing children really are disadvantaged in flexible thinking. Before-and-after testing showed significantly improved flexible thinking in the experimental group; the deaf and hard of hearing control group showed no significant improvement. Also, before the experiment, the deaf and hard of hearing children scored lower in flexible thinking than the children with normal hearing. After the experiment, the difference between the experimental group and the control group of children with normal hearing was smaller.

  2. Active Learning through the Use of Virtual Environments

    ERIC Educational Resources Information Center

    Mayrose, James

    2012-01-01

    Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware, and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and…

  3. Virtual Reality in Neurointervention.

    PubMed

    Ong, Chin Siang; Deib, Gerard; Yesantharao, Pooja; Qiao, Ye; Pakpoor, Jina; Hibino, Narutoshi; Hui, Ferdinand; Garcia, Juan R

    2018-06-01

    Virtual reality (VR) allows users to experience realistic, immersive 3D virtual environments with the depth perception and binocular field of view of real 3D settings. Newer VR technology has now allowed for interaction with 3D objects within these virtual environments through the use of VR controllers. This technical note describes our preliminary experience with VR as an adjunct tool to traditional angiographic imaging in the preprocedural workup of a patient with a complex pseudoaneurysm. Angiographic MRI data was imported and segmented to create 3D meshes of bilateral carotid vasculature. The 3D meshes were then projected into VR space, allowing the operator to inspect the carotid vasculature using a 3D VR headset as well as interact with the pseudoaneurysm (handling, rotation, magnification, and sectioning) using two VR controllers. 3D segmentation of a complex pseudoaneurysm in the distal cervical segment of the right internal carotid artery was successfully performed and projected into VR. Conventional and VR visualization modes were equally effective in identifying and classifying the pathology. VR visualization allowed the operators to manipulate the dataset to achieve a greater understanding of the anatomy of the parent vessel, the angioarchitecture of the pseudoaneurysm, and the surface contours of all visualized structures. This preliminary study demonstrates the feasibility of utilizing VR for preprocedural evaluation in patients with anatomically complex neurovascular disorders. This novel visualization approach may serve as a valuable adjunct tool in deciding patient-specific treatment plans and selection of devices prior to intervention.

  4. [3D Virtual Reality Laparoscopic Simulation in Surgical Education - Results of a Pilot Study].

    PubMed

    Kneist, W; Huber, T; Paschold, M; Lang, H

    2016-06-01

    The use of three-dimensional imaging in laparoscopy is a growing field and has led to 3D systems in laparoscopic simulation. Studies on box trainers have shown mixed results concerning the benefit of 3D imaging, and there are currently no studies analysing 3D imaging in virtual reality laparoscopy (VRL). Five surgical fellows, 10 surgical residents and 29 undergraduate medical students performed abstract and procedural tasks on a VRL simulator using conventional 2D and 3D imaging in randomised order. No significant differences between the two imaging systems were shown for students or medical professionals. Participants who preferred three-dimensional imaging achieved significantly better results in both 2D and 3D imaging. Earlier results on three-dimensional imaging on box trainers were mixed, with some studies finding an advantage of 3D imaging for laparoscopic novices. The present study did not confirm the superiority of 3D imaging on a VRL simulator: there was no significant advantage for 3D imaging compared with conventional 2D imaging. Georg Thieme Verlag KG Stuttgart · New York.

  5. CAVE2: a hybrid reality environment for immersive simulation and information analysis

    NASA Astrophysics Data System (ADS)

    Febretti, Alessandro; Nishimoto, Arthur; Thigpen, Terrance; Talandis, Jonas; Long, Lance; Pirtle, J. D.; Peterka, Tom; Verlo, Alan; Brown, Maxine; Plepys, Dana; Sandin, Dan; Renambot, Luc; Johnson, Andrew; Leigh, Jason

    2013-03-01

    Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, consisting of 72 near-seamless, off-axis-optimized passive-stereo LCD panels that create an approximately 320-degree panoramic environment displaying information at 37 megapixels (in stereoscopic 3D) or 74 megapixels (in 2D), with a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the fully immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the hybrid mode, a mixture of 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has not been achieved before, and it leverages the special abilities of CAVE2 to enable researchers to interact seamlessly with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.

  6. The Influences of the 2D Image-Based Augmented Reality and Virtual Reality on Student Learning

    ERIC Educational Resources Information Center

    Liou, Hsin-Hun; Yang, Stephen J. H.; Chen, Sherry Y.; Tarng, Wernhuar

    2017-01-01

    Virtual reality (VR) learning environments can provide students with concepts of the simulated phenomena, but users are not allowed to interact with real elements. Conversely, augmented reality (AR) learning environments blend real-world environments so AR could enhance the effects of computer simulation and promote students' realistic experience.…

  7. The Application of Modeling and Simulation to the Behavioral Deficit of Autism

    NASA Technical Reports Server (NTRS)

    Anton, John J.

    2010-01-01

    This abstract describes a research effort to apply technological advances in virtual reality simulation and computer-based games to create behavioral modification programs for individuals with Autism Spectrum Disorder (ASD). The research investigates virtual social skills training within a 3D game environment to diminish the impact of ASD social impairments and to increase learning capacity for optimal intellectual capability. Individuals with autism will encounter prototypical social contexts via computer interface and will interact with 3D avatars with predefined roles within a game-like environment. Incremental learning objectives will combine to form a collaborative social environment. A secondary goal of the effort is to begin the research and development of virtual reality exercises aimed at triggering the release of neurotransmitters to promote critical aspects of synaptic maturation at an early age to change the course of the disease.

  8. Using Digital Earth to create online scientific reality tourist guides to tourist attractions in Taiwan, China

    NASA Astrophysics Data System (ADS)

    Ding, Yea-Chung

    2010-11-01

    In recent years, national parks worldwide have introduced online virtual tourism, through which potential visitors can search for tourist information. Most virtual tourism websites are simulations of an existing location, usually composed of panoramic images, a sequence of hyperlinked still or video images, and/or virtual models of the actual location. As opposed to actual tourism, a virtual tour is typically accessed on a personal computer or an interactive kiosk. Using modern Digital Earth techniques such as high-resolution satellite images, precise GPS coordinates and powerful 3D WebGIS, however, it is possible to create more realistic scenic models that present natural terrain and man-made constructions in greater detail. This article explains how to create an online scientific reality tourist guide for the Jinguashi Gold Ecological Park at Jinguashi in northern Taiwan, China. The project uses high-resolution Formosat 2 satellite images and digital aerial images in conjunction with a DTM to create a highly realistic simulation of terrain, with 3DMAX used to add man-made constructions and vegetation. Using this 3D geodatabase model in conjunction with INET 3D WebGIS software, we have found that the Digital Earth concept can greatly improve and expand the presentation of traditional online virtual tours on the web.

  9. A novel augmented reality system for displaying inferior alveolar nerve bundles in maxillofacial surgery

    PubMed Central

    Zhu, Ming; Liu, Fei; Chai, Gang; Pan, Jun J.; Jiang, Taoran; Lin, Li; Xin, Yu; Zhang, Yan; Li, Qingfeng

    2017-01-01

    Augmented reality systems can combine virtual images with a real environment to ensure accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software. The real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the “integrated image” on semi-transparent glass. Via the registration process, the integral image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes. Thus, this system exhibits significant potential in clinical applications. PMID:28198442

  10. A novel augmented reality system for displaying inferior alveolar nerve bundles in maxillofacial surgery.

    PubMed

    Zhu, Ming; Liu, Fei; Chai, Gang; Pan, Jun J; Jiang, Taoran; Lin, Li; Xin, Yu; Zhang, Yan; Li, Qingfeng

    2017-02-15

    Augmented reality systems can combine virtual images with a real environment to ensure accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software. The real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the "integrated image" on semi-transparent glass. Via the registration process, the integral image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes. Thus, this system exhibits significant potential in clinical applications.

  11. Virtual Application of Darul Arif Palace from Serdang Sultanate using Virtual Reality

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Annisa, T.; Rahmat, R. F.; Muchtar, M. A.

    2017-01-01

    The Serdang Sultanate was one of the Malay sultanates in Sumatera Utara, where many Malay aristocracies developed in the 18th century. A social revolution took place in 1946: many sultanates were overthrown, and members of the PKI (Communist Party of Indonesia) carried out mass killings of members of the sultanate families. As a result of this incident, much cultural and historical heritage was destroyed. The integration of heritage preservation with digital technology has become a recent trend; digital technology can not only record and preserve detailed documents and information about heritage, but also effectively add value. In this research, polygonal modelling techniques from 3D modelling technology are used to reconstruct the Darul Arif Palace of the Serdang Sultanate. After the palace is modelled, it is combined with virtual reality technology to allow users to explore the palace and its surroundings. Virtual reality is the simulation of real objects in a virtual world. The result of this research is a virtual reality application that runs on a head-mounted display.

  12. Generation IV Nuclear Energy Systems Construction Cost Reductions through the Use of Virtual Environments - Task 4 Report: Virtual Mockup Maintenance Task Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy Shaw; Anthony Baratta; Vaughn Whisker

    2005-02-28

    Task 4 report of a three-year DOE NERI-sponsored effort evaluating immersive virtual reality (CAVE) technology for design review, construction planning, and maintenance planning and training for next-generation nuclear power plants. The program covers the development of full-scale virtual mockups generated from 3D CAD data and presented in a CAVE visualization facility. This report focuses on using full-scale virtual mockups for nuclear power plant training applications.

  13. Accuracy of fetal sex determination in the first trimester of pregnancy using 3D virtual reality ultrasound.

    PubMed

    Bogers, Hein; Rifouna, Maria S; Koning, Anton H J; Husen-Ebbinge, Margreet; Go, Attie T J I; van der Spek, Peter J; Steegers-Theunissen, Régine P M; Steegers, Eric A P; Exalto, Niek

    2018-05-01

    Early detection of fetal sex is becoming more popular. The aim of this study was to evaluate the accuracy of fetal sex determination in the first trimester using 3D virtual reality. Three-dimensional (3D) US volumes were obtained in 112 pregnancies between 9 and 13 weeks of gestational age. They were projected offline as holograms in the BARCO I-Space, and the genital tubercle angle was subsequently measured. Separately, the 3D US appearance of the genitalia was classified as male or female. Although a significant difference in genital tubercle angles was found between male and female fetuses, it did not yield a reliable prediction of fetal sex. Correct sex prediction based on first-trimester genital appearance was at best 56%. Our results indicate that accurate determination of fetal sex in the first trimester of pregnancy is not possible, even with an advanced 3D US technique. © 2017 Wiley Periodicals, Inc.

  14. [Subjective sensations indicating simulator sickness and fatigue after exposure to virtual reality].

    PubMed

    Malińska, Marzena; Zuzewicz, Krystyna; Bugajska, Joanna; Grabowski, Andrzej

    2014-01-01

    The study assessed the incidence and intensity of subjective symptoms indicating simulator sickness among persons with no inclination to motion sickness who were immersed in virtual reality (VR) by watching an hour-long movie in stereoscopic (three-dimensional, 3D) and non-stereoscopic (two-dimensional, 2D) versions, and after an hour-long training session using virtual reality (sVR). The sample comprised 20 healthy young men with no inclination to motion sickness. The participants' subjective sensations indicating symptoms of simulator sickness were assessed using a questionnaire completed immediately, 20 min and 24 h after the test. Grandjean's scale was used to assess fatigue and mood. Symptoms were observed immediately after the exposure to sVR, and their intensity was higher than after watching the 2D and 3D movies. A significant relationship was found between eye pain and the type of exposure (2D, 3D or sVR) (χ²(2) = 6.225, p ≤ 0.05); a relationship between excessive perspiration and exposure to the 3D movie and sVR was also noted (χ²(1) = 9.173, p ≤ 0.01). Some symptoms were still observed 20 min after exposure to sVR. The comparison of Grandjean's scale results before and after the sVR training showed significant differences in 11 of 14 subscales. Before and after exposure to the 3D movie, the differences were significant only for the "tired-fatigued" subscale (Z = 2.501, p ≤ 0.012), in favor of "fatigued". Based on the subjective discomfort reported after watching 2D and 3D movies, it is impossible to predict symptoms of simulator sickness after training using sVR.

  15. Real-time 3D image reconstruction guidance in liver resection surgery.

    PubMed

    Soler, Luc; Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-04-01

    Minimally invasive surgery represents one of the main evolutions of surgical technique; however, it adds difficulty that can be reduced through computer technology. From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an augmented reality (AR) system that extends the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of the anatomical or pathological structures appearing in the medical image, and registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools that combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were performed, demonstrating the accuracy of the 3D models and their substantial benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures have been performed, illustrating the potential clinical benefit of such assistance for gaining safety, but also the current limits that automatic augmented reality will have to overcome. Virtual patient modeling should be mandatory for certain interventions that have yet to be defined, such as liver surgery. Augmented reality is clearly the next step in surgical instrumentation but currently remains limited by the complexity of organ deformation during surgery. Intraoperative medical imaging, used in a new generation of automated augmented reality, should solve this issue thanks to the development of the hybrid OR.

  16. Research on virtual Guzheng based on Kinect

    NASA Astrophysics Data System (ADS)

    Li, Shuyao; Xu, Kuangyi; Zhang, Heng

    2018-05-01

    There has been considerable research on virtual instruments, but little on classical Chinese instruments, and the techniques used are very limited. This paper uses Unity 3D and a Kinect camera, combined with virtual reality technology and gesture recognition, to design a virtual playing system, with a demonstration function, for the Guzheng, a traditional Chinese musical instrument. The real scene obtained by the Kinect camera is fused with the virtual Guzheng in Unity 3D. The depth data obtained by the Kinect and the Suzuki85 algorithm are used to recognize the relative position of the user's right hand and the virtual Guzheng, and the user's hand gesture is recognized by the Kinect.
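    The depth-based hand localization described above can be sketched in simplified form. The snippet below thresholds a Kinect-style depth frame and computes the centroid of the nearest region; it is our own stand-in for the paper's pipeline (which traces the region's contour, e.g. with the Suzuki85 algorithm, before matching against the virtual strings), and all names and depth bands are assumptions.

    ```python
    def locate_hand(depth, near_mm=400, far_mm=900):
        """Locate the hand in a depth frame given as a 2D list of millimetre values.

        Pixels within [near_mm, far_mm] are assumed to belong to the hand
        (the limb nearest the camera). Returns the centroid (row, col) of
        that region, or None if no pixel falls within the band.
        """
        row_sum = col_sum = count = 0
        for r, line in enumerate(depth):
            for c, d in enumerate(line):
                if near_mm <= d <= far_mm:
                    row_sum += r
                    col_sum += c
                    count += 1
        if count == 0:
            return None
        return row_sum / count, col_sum / count

    # Two near pixels at (0, 1) and (1, 0); far background elsewhere.
    centroid = locate_hand([[1000, 500], [520, 1000]])
    ```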

  17. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    PubMed

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and inexpensive 3-D visualization device based on mixed reality that physicians can use to view preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," created by mixing 3-D virtual models of anatomy, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the real patient grabbed by cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted at the positions of the user's eyes and grab live images of the patient from the user's point of view. The system does not use any external tracker to detect movements of the user or the patient; the movements of the user's head and the alignment of the virtual patient with the real one are handled by machine vision methods applied to pairs of live images. Experimental results concerning frame rate and alignment precision between the virtual and real patient demonstrate that the machine vision methods used for localization are appropriate for this application, and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.

  18. Interactive voxel graphics in virtual reality

    NASA Astrophysics Data System (ADS)

    Brody, Bill; Chappell, Glenn G.; Hartman, Chris

    2002-06-01

    Interactive voxel graphics in virtual reality poses significant research challenges in terms of interface, file I/O, and real-time algorithms. Voxel graphics is not so new, as it is the focus of a good deal of scientific visualization. Interactive voxel creation and manipulation is a more innovative concept. Scientists are understandably reluctant to manipulate data. They collect or model data. A scientific analogy to interactive graphics is the generation of initial conditions for some model. It is used as a method to test those models. We, however, are in the business of creating new data in the form of graphical imagery. In our endeavor, science is a tool and not an end. Nevertheless, there is a whole class of interactions and associated data generation scenarios that are natural to our way of working and that are also appropriate to scientific inquiry. Annotation by sketching or painting to point to and distinguish interesting and important information is very significant for science as well as art. Annotation in 3D is difficult without a good 3D interface. Interactive graphics in virtual reality is an appropriate approach to this problem.
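    The kind of interactive voxel creation and annotation discussed above is often backed by a sparse store, so that sketching strokes in a large virtual volume stays cheap. The class below is a minimal illustration of that data structure, not the authors' system; all names are ours.

    ```python
    class VoxelGrid:
        """Minimal sparse voxel store for interactive painting and annotation.

        Only occupied voxels are kept in memory. Keys are integer (x, y, z)
        triples; values are arbitrary payloads (here, a color name).
        """

        def __init__(self):
            self._cells = {}

        def paint(self, x, y, z, color):
            """Create or recolor the voxel at (x, y, z)."""
            self._cells[(x, y, z)] = color

        def erase(self, x, y, z):
            """Delete the voxel at (x, y, z) if it exists."""
            self._cells.pop((x, y, z), None)

        def color_at(self, x, y, z):
            """Return the voxel's color, or None for empty space."""
            return self._cells.get((x, y, z))

        def __len__(self):
            return len(self._cells)

    # Annotate a point of interest with a short red stroke, then undo part of it.
    grid = VoxelGrid()
    grid.paint(0, 0, 0, "red")
    grid.paint(0, 0, 1, "red")
    grid.erase(0, 0, 0)
    ```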

  19. 3D Flow visualization in virtual reality

    NASA Astrophysics Data System (ADS)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

    By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
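    The quantity behind these isosurfaces, the Q-criterion, is defined as Q = ½(‖Ω‖² − ‖S‖²), where S and Ω are the symmetric (strain-rate) and antisymmetric (rotation-rate) parts of the velocity-gradient tensor; Q > 0 marks rotation-dominated, i.e. vortical, regions. The sketch below is a minimal single-point illustration of that formula, not the visualization pipeline from the abstract.

    ```python
    def q_criterion(grad_u):
        """Q-criterion from a 3x3 velocity-gradient tensor (list of lists).

        Splits grad_u into its symmetric (strain-rate S) and antisymmetric
        (rotation-rate Omega) parts and returns
            Q = 0.5 * (||Omega||^2 - ||S||^2),
        using the Frobenius norm. Isosurfaces of Q > 0 are the usual choice
        for visualizing vortices in a wake.
        """
        q = 0.0
        for i in range(3):
            for j in range(3):
                s = 0.5 * (grad_u[i][j] + grad_u[j][i])  # strain-rate component
                w = 0.5 * (grad_u[i][j] - grad_u[j][i])  # rotation-rate component
                q += 0.5 * (w * w - s * s)
        return q

    # Pure solid-body rotation about z gives a positive Q (vortical region),
    # while pure shear or pure strain gives Q <= 0.
    rotation = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    ```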

  20. The dynamics of student learning within a high school virtual reality design class

    NASA Astrophysics Data System (ADS)

    Morales, Teresa M.

    This mixed-methods study investigated the knowledge and skill development of high school students in a project-based VR design class, in which 3-D projects were developed within a student-centered, student-directed environment. The investigation focused on student content learning and problem solving. Additionally, the social dynamics of the class and the role of peer mentoring were examined to determine how these factors influenced student behavior and learning. Finally, parents' and teachers' perceptions of the influence of the class were examined. The participants included freshman through senior students, parents, teachers and the high school principal. Student interviews and classroom observations were used to collect data from students, while teachers and parents completed surveys. The results of this study suggest that this virtual reality (VR) learning environment promoted the development of meaningful cognitive experiences, creativity, leadership, global socialization, problem solving and a deeper understanding of academic content. The theoretical implications for 3-D virtual reality technology are exceedingly promising and warrant additional research and development of VR as an instructional tool for practical use.

  1. Computer-Based Technologies in Dentistry: Types and Applications

    PubMed Central

    Albuha Al-Mussawi, Raja’a M.; Farid, Farzaneh

    2016-01-01

    During dental education, dental students learn how to examine patients, make diagnoses, plan treatment and perform dental procedures correctly and efficiently. However, progress in computer-based technologies, including virtual reality (VR) simulators, augmented reality (AR) and computer-aided design/computer-aided manufacturing (CAD/CAM) systems, has resulted in new modalities for the instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to simultaneously visualize the surgical site and superimpose informative 3D images of invisible regions on it to serve as a guide. The use of CAD/CAM systems for designing and manufacturing dental appliances and prostheses is well established. This article reviews computer-based technologies, their application in dentistry, and their potential and limitations in promoting dental education, training and practice. Practitioners will be able to choose from a broader spectrum of options in their field of practice by becoming familiar with these new modalities of training and practice. PMID:28392819

  2. Computer-Based Technologies in Dentistry: Types and Applications.

    PubMed

    Albuha Al-Mussawi, Raja'a M; Farid, Farzaneh

    2016-06-01

    During dental education, dental students learn how to examine patients, make diagnoses, plan treatment, and perform dental procedures efficiently. However, progress in computer-based technologies, including virtual reality (VR) simulators, augmented reality (AR), and computer-aided design/computer-aided manufacturing (CAD/CAM) systems, has resulted in new modalities for the instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective, and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to visualize the surgical site while informative 3D images of invisible regions are superimposed on it to serve as a guide. The use of CAD/CAM systems for designing and manufacturing dental appliances and prostheses is well established. This article reviews computer-based technologies, their applications in dentistry, and their potentials and limitations in promoting dental education, training, and practice. By becoming familiar with these new modalities of training and practice, practitioners will be able to choose from a broader spectrum of options in their field.

  3. The Development of a Virtual Dinosaur Museum

    ERIC Educational Resources Information Center

    Tarng, Wernhuar; Liou, Hsin-Hun

    2007-01-01

    The objective of this article is to study the network and virtual reality technologies for developing a virtual dinosaur museum, which provides a Web-learning environment for students of all ages and the general public to know more about dinosaurs. We first investigate the method for building the 3D dynamic models of dinosaurs, and then describe…

  4. Virtual reality hardware and graphic display options for brain-machine interfaces

    PubMed Central

    Marathe, Amar R.; Carey, Holle L.; Taylor, Dawn M.

    2009-01-01

    Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing. PMID:18006069

  5. The virtual mirror: a new interaction paradigm for augmented reality environments.

    PubMed

    Bichlmeier, Christoph; Heining, Sandro Michael; Feuerstein, Marco; Navab, Nassir

    2009-09-01

    Medical augmented reality (AR) has been widely discussed within the medical imaging and computer-aided surgery communities. Different systems for exemplary medical applications have been proposed, and some of them have produced promising results. One major issue still hindering AR technology from being regularly used in medical applications is the interaction between the physician and the superimposed 3-D virtual data. Classical interaction paradigms, for instance keyboard and mouse, are not adequate for interacting with visualized medical 3-D imaging data in an AR environment. This paper introduces the concept of a tangible, controllable Virtual Mirror for medical AR applications. This concept intuitively augments the direct view of the surgeon with all desired views on volumetric medical imaging data registered with the operation site, without moving around the operating table or displacing the patient. We selected two medical procedures to demonstrate and evaluate the potential of the Virtual Mirror for the surgical workflow. Results confirm the intuitiveness of this new paradigm and its perceptive advantages for AR-based computer-aided interventions.

  6. New database for improving virtual system “body-dress”

    NASA Astrophysics Data System (ADS)

    Yan, J. Q.; Zhang, S. C.; Kuzmichev, V. E.; Adolphe, D. C.

    2017-10-01

    The aim of this exploration is to develop a new database of solid algorithms and relations between dress fit, fabric mechanical properties, and pattern block construction for improving the realism of the virtual system “body-dress”. In virtual simulation, the system “body-clothing” sometimes shows results that differ distinctly from reality, especially when important changes in pattern block and fabrics are involved. In this research, to enhance the simulation process, diverse fit parameters were proposed: bottom height of the dress, angle of the front center contours, and air volume and its distribution between dress and dummy. Measurements were made and optimized with a ruler, a camera, a 3D body scanner, image processing software, and 3D modeling software. Meanwhile, pattern block indexes were measured and fabric properties were tested with the KES system. Finally, the correlations and linear regression equations between the indexes of fabric properties, pattern blocks, and fit parameters were investigated. In this manner, the new database could be integrated into programming modules of virtual design for more realistic results.
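
    The final step the abstract describes, fitting correlations and linear regression equations between fabric-property indexes and fit parameters, can be sketched as follows. The variable names, units, and values below are illustrative assumptions, not data from the study:

```python
import numpy as np

# Hypothetical KES fabric-property index (e.g. bending rigidity) paired with
# a hypothetical fit parameter (e.g. air volume between dress and dummy).
# Illustrative values only, not the paper's measurements.
bending_rigidity = np.array([0.02, 0.05, 0.08, 0.11, 0.15])  # gf*cm^2/cm
air_volume = np.array([1.9, 2.4, 2.8, 3.3, 3.9])             # dm^3

# Linear regression: air_volume ~ a * bending_rigidity + b
a, b = np.polyfit(bending_rigidity, air_volume, 1)

# Pearson correlation coefficient between the two indexes
r = np.corrcoef(bending_rigidity, air_volume)[0, 1]

print(f"slope={a:.2f}, intercept={b:.2f}, r={r:.3f}")
```

    A database of such fitted equations could then be queried by a virtual try-on module to predict fit parameters from measured fabric properties.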

  7. [A new concept in digestive surgery: the computer assisted surgical procedure, from virtual reality to telemanipulation].

    PubMed

    Marescaux, J; Clément, J M; Nord, M; Russier, Y; Tassetti, V; Mutter, D; Cotin, S; Ayache, N

    1997-11-01

    Surgical simulation increasingly appears to be an essential aspect of tomorrow's surgery. The development of a hepatic surgery simulator is an advanced concept calling for a new writing system that will transform the medical world: virtual reality. Virtual reality extends the perception of our five senses by representing more than the real state of things through computer science and robotics. It consists of three concepts: immersion, navigation and interaction. Three reasons have led us to develop this simulator: the first is to provide the surgeon with a comprehensive visualisation of the organ. The second is to allow planning and surgical simulation that can be compared with the detailed flight plan of a commercial jet pilot. The third lies in the fact that virtual reality is an integral part of the concept of the computer-assisted surgical procedure. The project consists of a sophisticated simulator that has to meet five requirements: visual fidelity, interactivity, physical properties, physiological properties, and sensory input and output. In this report we describe how to obtain a realistic 3D model of the liver from two-dimensional (2D) medical images for anatomical and surgical training. The introduction of a tumor and the consequent planning and virtual resection are also described, as are force feedback and real-time interaction.

  8. Markerless client-server augmented reality system with natural features

    NASA Astrophysics Data System (ADS)

    Ning, Shuangning; Sang, Xinzhu; Chen, Duo

    2017-10-01

    A markerless client-server augmented reality system is presented. In this research, the more widespread and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The head-mounted display provides the viewer with an image in front of their eyes, and its front-facing camera captures video signals into the workstation. The generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty is realizing augmented reality with natural features instead of a marker, which addresses the limitations of markers: they are restricted to black and white, are inapplicable under varying environmental conditions, and in particular fail when the marker is partially occluded. Further, 3D stereoscopic perception of a virtual animation model is achieved. A high-speed, stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.

  9. M3D (Media 3D): a new programming language for web-based virtual reality in E-Learning and Edutainment

    NASA Astrophysics Data System (ADS)

    Chakaveh, Sepideh; Skaley, Detlef; Laine, Patricia; Haeger, Ralf; Maad, Soha

    2003-01-01

    Today, interactive multimedia educational systems are well established, as they have proven to be useful instruments for enhancing one's learning capabilities. Hitherto, the main difficulty with almost all E-Learning systems lay in the rich-media implementation techniques: each system had to be created individually, since reapplying the media, be it only a part or the whole content, was not directly possible, as everything had to be applied mechanically, i.e. by hand. This made E-Learning systems exceedingly expensive to generate, in terms of both time and money. Media-3D (M3D) is a new platform-independent programming language, developed at the Fraunhofer Institute for Media Communication, to enable visualisation and simulation of E-Learning multimedia content. M3D is an XML-based language capable of distinguishing 3D models from 3D scenes, as well as handling provisions for animations within the programme. Here we give a technical account of the M3D programming language and briefly describe two specific application scenarios where M3D is applied to create virtual reality E-Learning content for training technical personnel.

  10. 3-D Sound for Virtual Reality and Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  11. The use of PC based VR in clinical medicine: the VREPAR projects.

    PubMed

    Riva, G; Bacchetta, M; Baruffi, M; Borgomainerio, E; Defrance, C; Gatti, F; Galimberti, C; Fontaneto, S; Marchi, S; Molinari, E; Nugues, P; Rinaldi, S; Rovetta, A; Ferretti, G S; Tonci, A; Wann, J; Vincelli, F

    1999-01-01

    Virtual reality (VR) is an emerging technology that alters the way individuals interact with computers: a 3D computer-generated environment in which a person can move about and interact as if he or she were actually inside it. Due to the high computational power required to create virtual environments, these are usually developed on expensive high-end workstations. However, the significant advances in PC hardware made over the last three years are making PC-based VR a possible solution for clinical assessment and therapy. VREPAR - Virtual Reality Environments for Psychoneurophysiological Assessment and Rehabilitation - comprises two European Community funded projects (Telematics for health - HC 1053/HC 1055 - http://www.psicologia.net) that are trying to develop a modular PC-based virtual reality system for the medical market. The paper describes the rationale of the developed modules and the preliminary results obtained.

  12. Immersive Virtual Reality Therapy with Myoelectric Control for Treatment-resistant Phantom Limb Pain: Case Report.

    PubMed

    Chau, Brian; Phelan, Ivan; Ta, Phillip; Humbert, Sarah; Hata, Justin; Tran, Duc

    2017-01-01

    Objective: Phantom limb pain is a condition frequently experienced after amputation. One treatment for phantom limb pain is traditional mirror therapy, yet some patients do not respond to this intervention, and immersive virtual reality mirror therapy offers some potential advantages. We report the case of a patient with severe phantom limb pain following an upper limb amputation and successful treatment with therapy in a custom virtual reality environment. Methods: An interactive 3-D kitchen environment was developed based on the principles of mirror therapy to allow for control of virtual hands while wearing a motion-tracked, head-mounted virtual reality display. The patient used myoelectric control of a virtual hand as well as motion-tracking control in this setting for five therapy sessions. Pain scale measurements and subjective feedback were elicited at each session. Results: Analysis of the measured pain scales showed statistically significant decreases per session [Visual Analog Scale, Short Form McGill Pain Questionnaire, and Wong-Baker FACES pain scores decreased by 55 percent (p=0.0143), 60 percent (p=0.023), and 90 percent (p=0.0024), respectively]. Significant subjective pain relief persisting between sessions was also reported, as well as marked immersion within the virtual environments. At six-week follow-up, the patient noted a continued decrease in phantom limb pain symptoms. Conclusions: Currently available immersive virtual reality technology with myoelectric and motion-tracking control may represent a possible therapy option for treatment-resistant phantom limb pain.

  13. Immersive Virtual Reality Therapy with Myoelectric Control for Treatment-resistant Phantom Limb Pain: Case Report

    PubMed Central

    Phelan, Ivan; Ta, Phillip; Humbert, Sarah; Hata, Justin; Tran, Duc

    2017-01-01

    Objective: Phantom limb pain is a condition frequently experienced after amputation. One treatment for phantom limb pain is traditional mirror therapy, yet some patients do not respond to this intervention, and immersive virtual reality mirror therapy offers some potential advantages. We report the case of a patient with severe phantom limb pain following an upper limb amputation and successful treatment with therapy in a custom virtual reality environment. Methods: An interactive 3-D kitchen environment was developed based on the principles of mirror therapy to allow for control of virtual hands while wearing a motion-tracked, head-mounted virtual reality display. The patient used myoelectric control of a virtual hand as well as motion-tracking control in this setting for five therapy sessions. Pain scale measurements and subjective feedback were elicited at each session. Results: Analysis of the measured pain scales showed statistically significant decreases per session [Visual Analog Scale, Short Form McGill Pain Questionnaire, and Wong-Baker FACES pain scores decreased by 55 percent (p=0.0143), 60 percent (p=0.023), and 90 percent (p=0.0024), respectively]. Significant subjective pain relief persisting between sessions was also reported, as well as marked immersion within the virtual environments. At six-week follow-up, the patient noted a continued decrease in phantom limb pain symptoms. Conclusions: Currently available immersive virtual reality technology with myoelectric and motion-tracking control may represent a possible therapy option for treatment-resistant phantom limb pain. PMID:29616149

  14. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    NASA Astrophysics Data System (ADS)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large- screen display technologies (such as data walls and surround- screen projection systems) the lack of human-centered unencumbered user interfaces and 3D interaction technologies. Such shortcomings present severe limitations to the application of virtual reality (VR) technology to time- critical applications as well as employment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high grade of flexibility with respect to the system requirements (display and I/O devices) as well as to the ability to seamlessly and intuitively switch between different interaction modalities and interaction are sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to Virtual Environments focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.

  15. Extensible 3D (X3D) Earth Technical Requirements Workshop Summary Report

    DTIC Science & Technology

    2007-08-01

    world in detail already, but rarely interconnect one to another • Most interesting part of “virtual reality” (VR) is reality – which means physics... Two Web-Enabled Modeling and Simulation (WebSim) symposia have demonstrated that large partnerships can work 9. Server-side 3D graphics • Our

  16. Journey to the centre of the cell: Virtual reality immersion into scientific data.

    PubMed

    Johnston, Angus P R; Rae, James; Ariotti, Nicholas; Bailey, Benjamin; Lilja, Andrew; Webb, Robyn; Ferguson, Charles; Maher, Sheryl; Davis, Thomas P; Webb, Richard I; McGhee, John; Parton, Robert G

    2018-02-01

    Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in 2 dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a "real" cell. Early testing of this immersive environment indicates a significant improvement in students' understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Generating classes of 3D virtual mandibles for AR-based medical simulation.

    PubMed

    Hippalgaonkar, Neha R; Sider, Alexa D; Hamza-Lup, Felix G; Santhanam, Anand P; Jaganathan, Bala; Imielinska, Celina; Rolland, Jannick P

    2008-01-01

    Simulation and modeling represent promising tools for several application domains, from engineering to forensic science and medicine. Advances in 3D imaging technology bring paradigms such as augmented reality (AR) and mixed reality into promising simulation tools for the training industry. Motivated by the requirement to superimpose anatomically correct 3D models on a human patient simulator (HPS) and visualize them in an AR environment, the purpose of this research effort was to develop and validate a method for scaling a source human mandible to a target human mandible within a 2 mm root mean square (RMS) error. Results show that, given the distance between the same two landmarks on two different mandibles, a relative scaling factor may be computed. Using this scaling factor, results show that a 3D virtual mandible model can be made morphometrically equivalent to a real target-specific mandible within a 1.30 mm RMS error. The virtual mandible may further be used as a reference target for registering other anatomic models, such as the lungs, on the HPS. Such registration is made possible by the physical constraints between the mandible and the spinal column in the horizontal normal rest position.
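
    The landmark-based scaling described above (a relative scale factor computed from the distance between the same two landmarks on the source and target mandibles, then applied uniformly to the source model) can be sketched as below. The landmark coordinates and vertices are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical landmark positions (mm) on a source and a target mandible.
src_lm1, src_lm2 = np.array([0.0, 0.0, 0.0]), np.array([100.0, 0.0, 0.0])
tgt_lm1, tgt_lm2 = np.array([0.0, 0.0, 0.0]), np.array([112.0, 0.0, 0.0])

# Relative scaling factor: ratio of the same inter-landmark distance.
s = np.linalg.norm(tgt_lm2 - tgt_lm1) / np.linalg.norm(src_lm2 - src_lm1)

# Uniform scaling of the source mesh vertices about the first landmark.
src_vertices = np.array([[10.0, 5.0, 2.0], [50.0, -3.0, 8.0]])
scaled = src_lm1 + s * (src_vertices - src_lm1)

# The RMS error between scaled and corresponding target vertices would then
# be checked against the 2 mm tolerance the authors report.
def rms(a, b):
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))
```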

  18. Virtual reality in surgery and medicine.

    PubMed

    Chinnock, C

    1994-01-01

    This report documents the state of development of enhanced and virtual reality-based systems in medicine. Virtual reality systems seek to simulate a surgical procedure in a computer-generated world in order to improve training. Enhanced reality systems seek to augment or enhance reality by providing improved imaging alternatives for specific patient data. Virtual reality represents a paradigm shift in the way we teach and evaluate the skills of medical personnel. Driving the development of virtual reality-based simulators is laparoscopic abdominal surgery, where there is a perceived need for better training techniques; within a year, systems will be fielded for second-year residency students. Further refinements over perhaps the next five years should allow surgeons to evaluate and practice new techniques in a simulator before using them on patients. Technical developments are rapidly improving the realism of these machines to an amazing degree, as well as bringing the price down to affordable levels. In the next five years, many new anatomical models, procedures, and skills are likely to become available on simulators. Enhanced reality systems are generally being developed to improve visualization of specific patient data. Three-dimensional (3-D) stereovision systems for endoscopic applications, head-mounted displays, and stereotactic image navigation systems are being fielded now, with neurosurgery and laparoscopic surgery being major driving influences. Over perhaps the next five years, enhanced and virtual reality systems are likely to merge. This will permit patient-specific images to be used on virtual reality simulators or computer-generated landscapes to be input into surgical visualization instruments. Percolating all around these activities are developments in robotics and telesurgery. An advanced information infrastructure eventually will permit remote physicians to share video, audio, medical records, and imaging data with local physicians in real time. Surgical robots are likely to be deployed for specific tasks in the operating room (OR) and to support telesurgery applications. Technical developments in robotics and motion control are key components of many virtual reality systems. Since almost all of the virtual reality and enhanced reality systems will be digitally based, they are also capable of being put "on-line" for tele-training, consulting, and even surgery. Advancements in virtual and enhanced reality systems will be driven in part by consumer applications of this technology. Many of the companies that will supply systems for medical applications are also working on commercial products. A big consumer hit can benefit the entire industry by increasing volumes and bringing down costs. (ABSTRACT TRUNCATED AT 400 WORDS)

  19. Advanced 3-dimensional planning in neurosurgery.

    PubMed

    Ferroli, Paolo; Tringali, Giovanni; Acerbi, Francesco; Schiariti, Marco; Broggi, Morgan; Aquino, Domenico; Broggi, Giovanni

    2013-01-01

    During the past decades, medical applications of virtual reality technology have developed rapidly, ranging from a research curiosity to a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements such as a feedback system increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described. Different systems of medical image volume rendering have been used and analyzed for advanced 3-D planning: one is a commercial "ready-to-go" system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source-based software (3D Slicer, FSL, and FreeSurfer). Different neurosurgeons at our institution experienced how advanced 3-D planning before surgery facilitated and increased their understanding of the complex anatomic and pathological relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the processing power of modern computers. Although it has been found useful for understanding complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.

  20. Image-Based Virtual Tours and 3d Modeling of Past and Current Ages for the Enhancement of Archaeological Parks: the Visualversilia 3d Project

    NASA Astrophysics Data System (ADS)

    Castagnetti, C.; Giannini, M.; Rivola, R.

    2017-05-01

    The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing, and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive, more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project consist in developing: 1. a virtual tour of the site in its current configuration, based on spherical images enhanced by texts, graphics, and audio guides, enabling both an immersive and a remote tourist experience; 2. a 3D reconstruction of the evidence and buildings in their current condition, for documentation and conservation purposes, based on a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods, based on historical investigation and the analysis of the acquired data.

  1. Application of 3d Model of Cultural Relics in Virtual Restoration

    NASA Astrophysics Data System (ADS)

    Zhao, S.; Hou, M.; Hu, Y.; Zhao, Q.

    2018-04-01

    In the traditional process of splicing cultural relics, experts must manually fit the existing fragments together in order to identify their correct spatial locations, and the repeated contact between fragments can easily cause secondary damage to the relics. In this paper, the application process of 3D models of cultural relics in virtual restoration is put forward, and the relevant processes and ideas are verified with the example of the Terracotta Warriors data. Through the combination of traditional restoration methods and computer virtual reality technology, virtual restoration using high-precision 3D models of cultural relics can provide a scientific reference for restoration, avoiding the secondary damage caused by improper handling. The efficiency and safety of the preservation and restoration of cultural relics are thereby improved.

  2. The Use of Internet Resources and Browser-Based Virtual Worlds in Teaching Grammar

    ERIC Educational Resources Information Center

    Kruk, Mariusz

    2014-01-01

    Online virtual worlds are becoming important tools in foreign/second language instruction in view of the fact that they enhance learner motivation, promote autonomy and social presence in a 3D environment. Virtual worlds are a type of reality in which students can meet and communicate with other learners in the target language using text, voice or…

  3. The Photogrammetric Survey Methodologies Applied to Low Cost 3d Virtual Exploration in Multidisciplinary Field

    NASA Astrophysics Data System (ADS)

    Palestini, C.; Basso, A.

    2017-11-01

    In recent years, increased international investment in hardware and software technology to support programs that adopt algorithms for photomodeling or for managing laser scanner data has significantly reduced the cost of operations in support of Augmented Reality and Virtual Reality, designed to generate real-time explorable digital environments integrated with virtual stereoscopic headsets. The research analyzes transversal methodologies related to the acquisition of these technologies in order to place the current VR tools within a specific workflow, in light of the issues related to the intensive use of such devices, and outlines a quick overview of the possible "virtual migration" phenomenon, assuming a possible integration with new high-speed internet systems capable of triggering a massive colonization of cyberspace that would paradoxically also affect everyday life and, more generally, human spatial perception. The contribution aims at analyzing the application systems used for low-cost 3D photogrammetry by means of a precise pipeline, clarifying how a 3D model is generated, automatically retopologized, textured by color painting or photo-cloning techniques, and optimized for parametric insertion in virtual exploration platforms. The workflow analysis follows some case studies related to photomodeling, digital retopology, and the "virtual 3D transfer" of some small archaeological artifacts and of an architectural compartment corresponding to the pronaos of Aurum, a building designed in the 1940s by Michelucci. All operations were conducted on cheap or free-licensed software that today offers almost the same performance as its paid counterparts, progressively improving in data processing speed and management.

  4. A Case-Based Study with Radiologists Performing Diagnosis Tasks in Virtual Reality.

    PubMed

    Venson, José Eduardo; Albiero Berni, Jean Carlo; Edmilson da Silva Maia, Carlos; Marques da Silva, Ana Maria; Cordeiro d'Ornellas, Marcos; Maciel, Anderson

    2017-01-01

In radiology diagnosis, medical images are most often visualized slice by slice. At the same time, visualization based on 3D volumetric rendering of the data is considered useful and has broadened its field of application. In this work, we present a case-based study with 16 medical specialists to assess the diagnostic effectiveness of a virtual reality interface for fracture identification over 3D volumetric reconstructions. We developed a VR volume viewer compatible with both the Oculus Rift and handheld-based head-mounted displays (HMDs). We then performed user experiments to validate the approach in a diagnostic environment. In addition, we assessed the subjects' perception of the 3D reconstruction quality, the ease of interaction and ergonomics, and the users' opinions on how VR applications can be useful in healthcare. Among other results, we found a high level of effectiveness of the VR interface in identifying superficial fractures on head CTs.

  5. Special Section: New Ways to Detect Colon Cancer 3-D virtual screening now being used

    MedlinePlus

    ... two together," recalls Arie Kaufman, chairman of the computer science department at New York's Stony Brook University. Dr. Kaufman is one of the world's leading researchers in the high-tech medical fields of biomedical visualization, computer graphics, virtual reality, and multimedia. The year was ...

  6. Virtual reality: past, present and future.

    PubMed

    Gobbetti, E; Scateni, R

    1998-01-01

This report provides a short survey of the field of virtual reality, highlighting application domains, technological requirements, and currently available solutions. The report is organized as follows: section 1 presents the background and motivation of virtual environment research and identifies typical application domains, section 2 discusses the characteristics a virtual reality system must have in order to exploit the perceptual and spatial skills of users, section 3 surveys current input/output devices for virtual reality, section 4 surveys current software approaches to support the creation of virtual reality systems, and section 5 summarizes the report.

  7. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

Motion tracking is becoming an essential part of entertainment, medicine, sports, education, and industry with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks, and MIDI sliders, but those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end commercial human motion capture systems are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  8. Real-time 3D human capture system for mixed-reality art and entertainment.

    PubMed

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine surrounding cameras. Looking through a head-mounted display with a front-facing camera pointed at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve image quality and speed up the whole system, which runs at around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaboration system, we also describe an application of the system in art and entertainment, named Magic Land: a mixed reality environment where captured human avatars and computer-generated 3D virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The results of a user study not only emphasize the benefits, but also address some issues of these technologies.
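The shape-from-silhouette idea this record relies on can be sketched in a few lines: a voxel survives only if its projection falls inside every camera's silhouette mask. This is a minimal illustrative sketch with hypothetical inputs (boolean silhouette images and 3x4 projection matrices), not the authors' real-time implementation:

```python
import numpy as np

def carve_voxels(silhouettes, projections, grid):
    """Keep only voxels whose projection lands inside every camera's silhouette."""
    keep = np.ones(len(grid), dtype=bool)
    for sil, P in zip(silhouettes, projections):
        # Project homogeneous voxel centers with the 3x4 camera matrix.
        h = np.hstack([grid, np.ones((len(grid), 1))]) @ P.T
        u = (h[:, 0] / h[:, 2]).round().astype(int)
        v = (h[:, 1] / h[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        keep &= hit  # a voxel outside any silhouette is carved away
    return grid[keep]
```

With two orthogonal cameras, the intersection of the silhouettes' back-projections yields the visual hull that such systems render in real time.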

  9. ConfocalVR: Immersive Visualization Applied to Confocal Microscopy.

    PubMed

    Stefani, Caroline; Lacy-Hulbert, Adam; Skillman, Thomas

    2018-06-24

ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of 2D images throughout the specimen. Current software applications reconstruct the 3D image and render it as a 2D projection onto a computer screen, where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if you stop the rotation, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade virtual reality systems to fully immerse the user in the 3D cellular image. In this virtual environment the user can: 1) adjust image viewing parameters without leaving the virtual space, 2) reach out and grab the image, quickly rotating and scaling it to focus on key features, and 3) interact with other users in a shared virtual space, enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it first hand, so we encourage readers to get access to a VR system, download this software, and evaluate it for yourself. The ConfocalVR software is available for download at http://www.confocalvr.com, and is free for nonprofits. Copyright © 2018. Published by Elsevier Ltd.
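The 2D-projection workflow that ConfocalVR replaces is easy to sketch: confocal z-slices are stacked into a volume, and desktop tools typically flatten that volume back to a 2D view, for example by maximum-intensity projection. A minimal sketch with hypothetical function names, assuming the slices arrive as NumPy arrays:

```python
import numpy as np

def stack_to_volume(slices):
    """Stack confocal z-slices (equally sized 2D arrays) into a 3D volume."""
    return np.stack(slices, axis=0)

def max_intensity_projection(volume, axis=0):
    """The flat 2D rendering that immersive VR viewing replaces:
    keep the brightest voxel along the chosen axis."""
    return volume.max(axis=axis)
```

The VR application renders the full `volume` instead of collapsing it, which is what preserves the depth information the abstract describes.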

  10. Enhancing Pre-Service Teachers' Awareness to Pupils' Test-Anxiety with 3D Immersive Simulation

    ERIC Educational Resources Information Center

    Passig, David; Moshe, Ronit

    2008-01-01

    This study investigated whether participating in a 3D immersive virtual reality world simulating the experience of test-anxiety would affect preservice teachers' awareness to the phenomenon. Ninety subjects participated in this study, and were divided into three groups. The experimental group experienced a 3D immersive simulation which made…

  11. Virtual reality interactive media for universitas sumatera utara - a campus introduction and simulation

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Anthonius; Muchtar, M. A.; Hizriadi, A.; Syahputra, M. F.

    2018-03-01

Universitas Sumatera Utara is a public university with over 100 buildings covering a total area of more than 133,141 square meters. Delivering information on the location of the institutional buildings is challenging, since the university grounds reach 93.4 ha. The information is usually delivered orally, in video presentations, or in two-dimensional form such as maps, posters, and brochures; each of these three techniques has its advantages and disadvantages. Virtual reality, which now touches every domain of knowledge, offers an alternative. In this paper we study and implement virtual reality as a new approach to distributing this information and covering the deficiencies of the existing methods. Virtual reality technology combined with 3D modeling aims to introduce the locations of USU institutional buildings in an interactive and innovative way. With this application, the campus introduction is expected to be more convenient, so that USU students will be able to find the exact location of the building they are heading for.

  12. Introduction to Virtual Reality in Education

    ERIC Educational Resources Information Center

    Dede, Chris

    2009-01-01

    As an emerging technology for learning, virtual reality (VR) dates back four decades, to early work by Ivan Sutherland in the late 1960s. At long last, interactive media are emerging that offer the promise of VR in everyday settings. Quasi-VR already is commonplace in 2-1/2-D virtual environments like Second Life and in massively multiplayer…

  13. Visuospatial Attention in Children with Autism Spectrum Disorder: A Comparison between 2-D and 3-D Environments

    ERIC Educational Resources Information Center

    Ip, Horace H. S.; Lai, Candy Hoi-Yan; Wong, Simpson W. L.; Tsui, Jenny K. Y.; Li, Richard Chen; Lau, Kate Shuk-Ying; Chan, Dorothy F. Y.

    2017-01-01

    Previous research has illustrated the unique benefits of three-dimensional (3-D) Virtual Reality (VR) technology in Autism Spectrum Disorder (ASD) children. This study examined the use of 3-D VR technology as an assessment tool in ASD children, and further compared its use to two-dimensional (2-D) tasks. Additionally, we aimed to examine…

  14. Simulating 3D deformation using connected polygons

    NASA Astrophysics Data System (ADS)

    Tarigan, J. T.; Jaya, I.; Hardi, S. M.; Zamzami, E. M.

    2018-03-01

In modern 3D applications, interaction between the user and the virtual world is an important factor in increasing realism. This interaction can be visualized in many forms, one of which is object deformation. There are many ways to simulate object deformation in a virtual 3D world, each with a different level of realism and performance. Our objective is to present a new method for simulating object deformation using graph-connected polygons, in which each object contains multiple levels of polygons at different levels of volume. The proposed solution focuses on performance while maintaining an acceptable level of realism. In this paper, we present the design and implementation of our solution and show that it is usable in performance-sensitive 3D applications such as games and virtual reality.

  15. Implementation of augmented reality to models sultan deli

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Lumbantobing, N. P.; Siregar, B.; Rahmat, R. F.; Andayani, U.

    2018-03-01

Augmented reality is a technology that can provide visualization in the form of a 3D virtual model. Using augmented reality technology, image-based modeling can be applied to reconstruct a photograph of the Sultan of Deli at Istana Maimun as a three-dimensional model. This matters because the Sultan of Deli, one of the important figures in the history of the development of the city of Medan, is little known to the public: the surviving images of the Sultanate of Deli are unclear and very old. To achieve this goal, augmented reality applications process images into 3D models through several toolkits. The output of this method is visitors' photos of Maimun Palace with a 3D model of the Sultan of Deli, with marker detection at a distance of 20-60 cm, making it easy for the public to recognize the Sultan of Deli who once ruled at Maimun Palace.

  16. Hearing in True 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

In 1984, researchers from Ames Research Center came together to develop advanced human interfaces for NASA's teleoperations that would come to be known as "virtual reality." The basis of the work theorized that if the sensory interfaces met a certain threshold and sufficiently supported each other, then the operator would feel present in the remote/synthetic environment, rather than present in their physical location. Twenty years later, this prolific research continues to pay dividends to society in the form of cutting-edge virtual reality products, such as an interactive audio simulation system.

  17. Visualization of reservoir simulation data with an immersive virtual reality system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, B.K.

    1996-10-01

    This paper discusses an investigation into the use of an immersive virtual reality (VR) system to visualize reservoir simulation output data. The hardware and software configurations of the test-immersive VR system are described and compared to a nonimmersive VR system and to an existing workstation screen-based visualization system. The structure of 3D reservoir simulation data and the actions to be performed on the data within the VR system are discussed. The subjective results of the investigation are then presented, followed by a discussion of possible future work.

  18. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for compressing the tremendous amount of data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
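The ray-based representation described above treats a scene as a set of light rays. In the common two-plane parameterization, a ray is indexed by its intersections (u, v) and (s, t) with two parallel planes, so a discretized light field is simply a 4D array of radiance samples. A minimal sketch with hypothetical names (nearest-neighbour lookup only, no interpolation):

```python
import numpy as np

def sample_ray(L, u, v, s, t):
    """Nearest-neighbour lookup of the radiance carried by one light ray
    in a two-plane-parameterized 4D light field L[u, v, s, t]."""
    idx = tuple(int(round(c)) for c in (u, v, s, t))
    return L[idx]
```

Rendering a novel view then amounts to evaluating `sample_ray` for every ray through the virtual camera, which is the core of the video-based rendering systems the record mentions.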

  19. Digitalized preservation and presentation of historical building - taking traditional temples and dougong as examples

    NASA Astrophysics Data System (ADS)

    Yang, W. B.; Yen, Y. N.; Cheng, H. M.

    2015-08-01

The integration of heritage preservation and digital technology is an important international trend in the 21st century. Digital technology not only records and preserves detailed documents and information about heritage completely, but also adds value effectively. In this study, 3D laser scanning is used to create digital archives of the interior and exterior of the building, integrating 3D scanner technology, mobile scanning collaboration, and multi-system reverse modeling. The 3D model, built at real scale through reverse modeling and combined with multimedia presentations, supports virtual reality (VR) simulation, while interactive teaching and augmented reality presentations extend and continuously update traditional architecture information. With these technological upgrades and the added value of digitalization, the cultural value of the asset can be experienced through 3D virtual reality, moving information presentation from traditional reading toward user operation with sensory experience, and continuing to explore the possibilities of cultural asset preservation and diversified learning through digital technology.

  20. Myths and Realities

    ERIC Educational Resources Information Center

    Atkinson, Tom

    2008-01-01

    Second Life[TM], or simply SL, was developed at Linden Lab, a San Francisco-based corporation defined by its creators as "an online society within a 3-D virtual world entirely built and owned by its residents, where they can explore, build, socialize and participate in their own economy." With over 14 million residents in the SL virtual community,…

  1. Virtual exertions: evoking the sense of exerting forces in virtual reality using gestures and muscle activity.

    PubMed

    Chen, Karen B; Ponto, Kevin; Tredinnick, Ross D; Radwin, Robert G

    2015-06-01

    This study was a proof of concept for virtual exertions, a novel method that involves the use of body tracking and electromyography for grasping and moving projections of objects in virtual reality (VR). The user views objects in his or her hands during rehearsed co-contractions of the same agonist-antagonist muscles normally used for the desired activities to suggest exerting forces. Unlike physical objects, virtual objects are images and lack mass. There is currently no practical physically demanding way to interact with virtual objects to simulate strenuous activities. Eleven participants grasped and lifted similar physical and virtual objects of various weights in an immersive 3-D Cave Automatic Virtual Environment. Muscle activity, localized muscle fatigue, ratings of perceived exertions, and NASA Task Load Index were measured. Additionally, the relationship between levels of immersion (2-D vs. 3-D) was studied. Although the overall magnitude of biceps activity and workload were greater in VR, muscle activity trends and fatigue patterns for varying weights within VR and physical conditions were the same. Perceived exertions for varying weights were not significantly different between VR and physical conditions. Perceived exertion levels and muscle activity patterns corresponded to the assigned virtual loads, which supported the hypothesis that the method evoked the perception of physical exertions and showed that the method was promising. Ultimately this approach may offer opportunities for research and training individuals to perform strenuous activities under potentially safer conditions that mimic situations while seeing their own body and hands relative to the scene. © 2014, Human Factors and Ergonomics Society.

  2. Applying Augmented Reality in practical classes for engineering students

    NASA Astrophysics Data System (ADS)

    Bazarov, S. E.; Kholodilin, I. Yu; Nesterov, A. S.; Sokhina, A. V.

    2017-10-01

In this article, an Augmented Reality application for teaching engineering students of electrical and technological specialties is introduced. To increase students' motivation for learning and their independence, new Augmented Reality practical guidelines were developed to accompany practical classes. During development, the authors used software such as Unity 3D and Vuforia. The Augmented Reality content consists of 3D models, images, and animations that are superimposed on real objects, helping students to study specific tasks. A user with a smartphone, a tablet PC, or Augmented Reality glasses can visualize on-screen virtual objects added to a real environment. Having analyzed the current situation in higher education (learners' interest in studying, their satisfaction with the educational process, and the impact of the Augmented Reality application on students), a questionnaire was developed and offered to students; the study involved 24 learners.

  3. Real-time 3D image reconstruction guidance in liver resection surgery

    PubMed Central

    Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-01-01

    Background Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. Methods From a patient’s medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon’s intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic providing intraoperative augmented reality view. Results From January 2009 to June 2013, 769 clinical cases have been modeled by the Visible Patient service. Moreover, three clinical validations have been realized demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures have been realized illustrating the potential clinical benefit of such assistance to gain safety, but also current limits that automatic augmented reality will overcome. Conclusions Virtual patient modeling should be mandatory for certain interventions that have now to be defined, such as liver surgery. Augmented reality is clearly the next step of the new surgical instrumentation but remains currently limited due to the complexity of organ deformations during surgery. 
Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue, thanks to the development of the hybrid OR. PMID:24812598

  4. The force pyramid: a spatial analysis of force application during virtual reality brain tumor resection.

    PubMed

    Azarnoush, Hamed; Siar, Samaneh; Sawaya, Robin; Zhrani, Gmaan Al; Winkler-Schwartz, Alexander; Alotaibi, Fahad Eid; Bugdadi, Abdulgadir; Bajunaid, Khalid; Marwa, Ibrahim; Sabbagh, Abdulrahman Jafar; Del Maestro, Rolando F

    2017-07-01

    OBJECTIVE Virtual reality simulators allow development of novel methods to analyze neurosurgical performance. The concept of a force pyramid is introduced as a Tier 3 metric with the ability to provide visual and spatial analysis of 3D force application by any instrument used during simulated tumor resection. This study was designed to answer 3 questions: 1) Do study groups have distinct force pyramids? 2) Do handedness and ergonomics influence force pyramid structure? 3) Are force pyramids dependent on the visual and haptic characteristics of simulated tumors? METHODS Using a virtual reality simulator, NeuroVR (formerly NeuroTouch), ultrasonic aspirator force application was continually assessed during resection of simulated brain tumors by neurosurgeons, residents, and medical students. The participants performed simulated resections of 18 simulated brain tumors with different visual and haptic characteristics. The raw data, namely, coordinates of the instrument tip as well as contact force values, were collected by the simulator. To provide a visual and qualitative spatial analysis of forces, the authors created a graph, called a force pyramid, representing force sum along the z-coordinate for different xy coordinates of the tool tip. RESULTS Sixteen neurosurgeons, 15 residents, and 84 medical students participated in the study. Neurosurgeon, resident and medical student groups displayed easily distinguishable 3D "force pyramid fingerprints." Neurosurgeons had the lowest force pyramids, indicating application of the lowest forces, followed by resident and medical student groups. Handedness, ergonomics, and visual and haptic tumor characteristics resulted in distinct well-defined 3D force pyramid patterns. CONCLUSIONS Force pyramid fingerprints provide 3D spatial assessment displays of instrument force application during simulated tumor resection. 
Neurosurgeon force utilization and ergonomic data form a basis for understanding and modulating resident force application and improving patient safety during tumor resection.
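The force pyramid itself, the sum of applied force along z for each (x, y) tool-tip coordinate, amounts to a weighted 2D histogram over the tool-tip positions. A minimal sketch assuming NumPy and hypothetical sample arrays; this illustrates the metric, not the NeuroVR implementation:

```python
import numpy as np

def force_pyramid(xyz, force, bins=8):
    """Sum instrument-tip contact force over all samples falling in each
    (x, y) cell, collapsing the z dimension as the abstract describes."""
    x, y = xyz[:, 0], xyz[:, 1]
    hist, x_edges, y_edges = np.histogram2d(x, y, bins=bins, weights=force)
    return hist  # a 2D surface: total force applied at each xy location
```

Plotting `hist` as a surface gives the pyramid-shaped "fingerprint" whose height distinguishes the neurosurgeon, resident, and student groups.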

  5. CaveCAD: a tool for architectural design in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Schulze, Jürgen P.; Hughes, Cathleen E.; Zhang, Lelin; Edelstein, Eve; Macagno, Eduardo

    2014-02-01

Existing 3D modeling tools were designed to run on desktop computers with monitor, keyboard, and mouse. To make 3D modeling possible with mouse and keyboard, many 3D interactions, such as point placement or translations of geometry, had to be mapped to the 2D parameter space of the mouse, possibly supported by mouse buttons or keyboard keys. We hypothesize that, had the designers of these existing systems been able to assume immersive virtual reality systems as their target platforms, they would have designed 3D interactions much more intuitively. In collaboration with professional architects, we created a simple but complete 3D modeling tool for virtual environments from the ground up, using direct 3D interaction wherever possible and adequate. In this publication, we present our approaches to interactions for typical 3D modeling functions, such as geometry creation, modification of existing geometry, and assignment of surface materials. We also discuss preliminary user experiences with this system.

  6. Feasibility Study for Ballet E-Learning: Automatic Composition System for Ballet "Enchainement" with Online 3D Motion Data Archive

    ERIC Educational Resources Information Center

    Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako

    2009-01-01

    This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…

  7. Multimodal Image-Based Virtual Reality Presurgical Simulation and Evaluation for Trigeminal Neuralgia and Hemifacial Spasm.

    PubMed

    Yao, Shujing; Zhang, Jiashu; Zhao, Yining; Hou, Yuanzheng; Xu, Xinghua; Zhang, Zhizhong; Kikinis, Ron; Chen, Xiaolei

    2018-05-01

To address the feasibility and predictive value of multimodal image-based virtual reality in detecting and assessing features of neurovascular conflict (NVC), particularly the identification of offending vessels and the degree of compression exerted on the nerve root, in patients who underwent microvascular decompression for nonlesional trigeminal neuralgia and hemifacial spasm (HFS). This prospective study includes 42 consecutive patients who underwent microvascular decompression for classic primary trigeminal neuralgia or HFS. All patients underwent preoperative 1.5-T magnetic resonance imaging (MRI) with T2-weighted three-dimensional (3D) sampling perfection with application-optimized contrasts by using different flip angle evolutions, 3D time-of-flight magnetic resonance angiography, and 3D T1-weighted gadolinium-enhanced sequences in combination, whereas 2 patients underwent additional experimental preoperative 7.0-T MRI scans with the same imaging protocol. The multimodal MRIs were then coregistered with the open-source software 3D Slicer, followed by 3D image reconstruction to generate virtual reality (VR) images for the detection of possible NVC in the cerebellopontine angle. Evaluations were performed by 2 reviewers and compared with the intraoperative findings. For detection of NVC, multimodal image-based VR sensitivity was 97.6% (40/41) and specificity was 100% (1/1). Compared with the intraoperative findings, the κ coefficients for predicting the offending vessel and the degree of compression were >0.75 (P < 0.001). The 7.0-T scans gave a clearer view of vessels in the cerebellopontine angle, which may have a significant impact on the detection of small-caliber offending vessels with relatively slow flow in cases of HFS.
Multimodal image-based VR using 3D sampling perfection with application-optimized contrasts by using different flip angle evolutions in combination with 3D time-of-flight magnetic resonance angiography sequences proved to be reliable in detecting NVC and in predicting the degree of root compression. The VR image-based simulation correlated well with the real surgical view. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Augmented reality 3D display based on integral imaging

    NASA Astrophysics Data System (ADS)

    Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua

    2017-02-01

Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first system, a micro II display unit reconstructs a micro 3D image, which is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped using a half-mirror to realize AR 3D display. The second system uses a micro-lens-array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating that functions as a micro-lens array for Bragg-matched light and as transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens-array HOE, providing a double-sided 3D display.

  9. Virtual embryology: a 3D library reconstructed from human embryo sections and animation of development process.

    PubMed

    Komori, M; Miura, T; Shiota, K; Minato, K; Takahashi, T

    1995-01-01

The volumetric shape of a human embryo and its development are hard to comprehend when viewed as 2D schemes in a textbook or as microscopic sectional images. In this paper, a CAI and research support system for human embryology using multimedia presentation techniques is described. In this system, 3D data are acquired from a series of sliced specimens. The 3D structure can be viewed interactively by rotating, extracting, and truncating the whole body or an organ. Moreover, the development process of embryos can be animated by applying a morphing technique to specimens at several stages. The system is intended to be used interactively, like a virtual reality system; hence, it is called Virtual Embryology.
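The stage-to-stage animation described above can be illustrated, in its simplest cross-dissolve form, as linear interpolation between two spatially aligned 3D stages. This is a hedged sketch with hypothetical names; the actual morphing presumably also warps geometry between stages:

```python
import numpy as np

def morph(stage_a, stage_b, t):
    """Cross-dissolve between two aligned 3D volumes; t runs from 0 to 1."""
    return (1.0 - t) * stage_a + t * stage_b

def animate(stage_a, stage_b, n_frames):
    """Yield n_frames volumes interpolating development from stage A to B."""
    for t in np.linspace(0.0, 1.0, n_frames):
        yield morph(stage_a, stage_b, t)
```

Playing the yielded volumes in sequence gives a continuous animation of development between the two captured stages.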

  10. ARC-1995-AC95-0368-3

    NASA Image and Video Library

    1995-10-27

Dr. Muriel Ross's virtual reality application for neuroscience research (biocomputation), used to study human disorders of balance and space motion sickness. Shown here is a 3D reconstruction of a nerve ending in the inner ear: nature's wiring of the balance organs.

  11. Using voice input and audio feedback to enhance the reality of a virtual experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miner, N.E.

    1994-04-01

    Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.

  12. Language-driven anticipatory eye movements in virtual reality.

    PubMed

    Eichert, Nicole; Peeters, David; Hagoort, Peter

    2018-06-01

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

  13. The STRIVE-ONR Project: Stress Resistance in Virtual Environments

    DTIC Science & Technology

    2015-07-29

    from the Virtual Iraq/Afghanistan Virtual Reality Exposure Therapy (VRET) system at the University of Southern California Institute for Creative...better sense of health outcomes; that is, "how the social environment exerts a cumulative impact on the physical and mental well being of individuals...levels with functional decline in elderly men and women. Geriatrics & gerontology international, 9 3, 282-289. Goldman, N., Turra, C. M., Glei, D

  14. Virtual reality 3D headset based on DMD light modulators

    NASA Astrophysics Data System (ADS)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMD). Current methods for presenting information for virtual reality rely on either polarization-based modulators, such as liquid crystal on silicon (LCoS) devices, or miniature LCD or LED displays, often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, but they reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micromirrors, delivering 720p-resolution displays in a small form factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics, and consumer gaming. In our design concept, light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the retina, resulting in a virtual retinal display.
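
    The "imaged to infinity" geometry in this record can be checked against the thin-lens equation (a textbook sketch with assumed focal lengths, not the authors' optical design): placing the display exactly at the focal plane of the eyepiece collimates the light, so the relaxed eye focuses it onto the retina.

```python
def image_distance_mm(focal_mm: float, object_mm: float) -> float:
    """Thin-lens equation 1/v = 1/f - 1/u (distances positive on their
    conventional sides). Returns the image distance v in mm."""
    if abs(object_mm - focal_mm) < 1e-9:
        return float("inf")  # object at the focal plane -> collimated output
    return 1.0 / (1.0 / focal_mm - 1.0 / object_mm)

# Display at the focal plane of a 25 mm eyepiece: image at infinity.
print(image_distance_mm(25.0, 25.0))  # inf
# Display moved back to 50 mm: real image forms 50 mm past the lens.
print(image_distance_mm(25.0, 50.0))  # 50.0
```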

  15. Integrating 4-d light-sheet imaging with interactive virtual reality to recapitulate developmental cardiac mechanics and physiology

    NASA Astrophysics Data System (ADS)

    Ding, Yichen; Yu, Jing; Abiri, Arash; Abiri, Parinaz; Lee, Juhyun; Chang, Chih-Chiang; Baek, Kyung In; Sevag Packard, René R.; Hsiai, Tzung K.

    2018-02-01

    There currently is a limited ability to interactively study developmental cardiac mechanics and physiology. We therefore combined light-sheet fluorescence microscopy (LSFM) with virtual reality (VR) to provide a hybrid platform for 3-dimensional (3-D) architecture and time-dependent cardiac contractile function characterization. By taking advantage of the rapid acquisition, high axial resolution, low phototoxicity, and high fidelity in 3-D and 4-D (3-D spatial + 1-D time or spectra), this VR-LSFM hybrid methodology enables interactive visualization and quantification otherwise not available by conventional methods such as routine optical microscopes. We hereby demonstrate multi-scale applicability of VR-LSFM to 1) interrogate skin fibroblasts interacting with a hyaluronic acid-based hydrogel, 2) navigate through the endocardial trabecular network during zebrafish development, and 3) localize gene therapy-mediated potassium channel expression in adult murine hearts. We further combined our batch intensity normalized segmentation (BINS) algorithm with deformable image registration (DIR) to interface a VR environment for the analysis of cardiac contraction. Thus, the VR-LSFM hybrid platform demonstrates an efficient and robust framework for creating a user-directed microenvironment in which we uncovered developmental cardiac mechanics and physiology with high spatiotemporal resolution.

  16. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment.

    PubMed

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C; Poizner, Howard; Liu, Thomas T

    2014-01-01

    Hyperscanning, an emerging technique in which data from multiple interacting subjects' brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as "theory of mind." However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use a MRI compatible Playstation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D printed plastic models. Control of both scanners' operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include:•Modification of game controller to be MRI compatible.•Design of VBS2 virtual environment for cooperative interactions.•Syncing two MRI machines for simultaneous recording.

  17. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment

    PubMed Central

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C.; Poizner, Howard; Liu, Thomas T.

    2014-01-01

    Hyperscanning, an emerging technique in which data from multiple interacting subjects’ brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as “theory of mind.” However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use a MRI compatible Playstation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D printed plastic models. Control of both scanners’ operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include:•Modification of game controller to be MRI compatible.•Design of VBS2 virtual environment for cooperative interactions.•Syncing two MRI machines for simultaneous recording. PMID:26150964

  18. Toward Educational Virtual Worlds: Should Identity Federation Be a Concern?

    ERIC Educational Resources Information Center

    Cruz, Gonçalo; Costa, António; Martins, Paulo; Gonçalves, Ramiro; Barroso, João

    2015-01-01

    3D Virtual Worlds are being used for education and training purposes in a cross-disciplinary way. However, their widespread adoption, particularly in formal learning contexts, is far from being a reality due to a broad range of technological challenges. In this reflection paper, our main goal is to argue why and how identity federation should be…

  19. A New Continent of Ideas

    NASA Technical Reports Server (NTRS)

    1990-01-01

    While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand moves exactly as the operator moves his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full-body garment that greatly increases the sphere of performance for virtual reality simulations.

  20. Virtual Reality Exploration and Planning for Precision Colorectal Surgery.

    PubMed

    Guerriero, Ludovica; Quero, Giuseppe; Diana, Michele; Soler, Luc; Agnus, Vincent; Marescaux, Jacques; Corcione, Francesco

    2018-06-01

    Medical software can build a digital clone of the patient with 3-dimensional reconstruction of Digital Imaging and Communication in Medicine images. The virtual clone can be manipulated (rotations, zooms, etc), and the various organs can be selectively displayed or hidden to facilitate a virtual reality preoperative surgical exploration and planning. We present preliminary cases showing the potential interest of virtual reality in colorectal surgery for both cases of diverticular disease and colonic neoplasms. This was a single-center feasibility study. The study was conducted at a tertiary care institution. Two patients underwent a laparoscopic left hemicolectomy for diverticular disease, and 1 patient underwent a laparoscopic right hemicolectomy for cancer. The 3-dimensional virtual models were obtained from preoperative CT scans. The virtual model was used to perform preoperative exploration and planning. Intraoperatively, one of the surgeons was manipulating the virtual reality model, using the touch screen of a tablet, which was interactively displayed to the surgical team. The main outcome was evaluation of the precision of virtual reality in colorectal surgery planning and exploration. In 1 patient undergoing laparoscopic left hemicolectomy, an abnormal origin of the left colic artery beginning as an extremely short common trunk from the inferior mesenteric artery was clearly seen in the virtual reality model. This finding was missed by the radiologist on CT scan. The precise identification of this vascular variant granted a safe and adequate surgery. In the remaining cases, the virtual reality model helped to precisely estimate the vascular anatomy, providing key landmarks for a safer dissection. A larger sample size would be necessary to definitively assess the efficacy of virtual reality in colorectal surgery. Virtual reality can provide an enhanced understanding of crucial anatomical details, both preoperatively and intraoperatively, which could contribute to improved safety in colorectal surgery.

  1. 3D Game-Based Learning System for Improving Learning Achievement in Software Engineering Curriculum

    ERIC Educational Resources Information Center

    Su, Chung-Ho; Cheng, Ching-Hsue

    2013-01-01

    The advancement of game-based learning has encouraged many related studies, such that students could better learn curriculum by 3-dimension virtual reality. To enhance software engineering learning, this paper develops a 3D game-based learning system to assist teaching and assess the students' motivation, satisfaction and learning achievement. A…

  2. [Registration technology for mandibular angle osteotomy based on augmented reality].

    PubMed

    Zhu, Ming; Chai, Gang; Zhang, Yan; Ma, Xiao-Fei; Yu, Zhe-Yuan; Zhu, Yi-Jia

    2010-12-01

    To establish an effective path to register the operative plan to a real model of the mandible made by rapid prototyping (RP) technology. Computerized tomography (CT) was performed on 20 patients to create 3D images, and computer-aided operation planning information was merged with the 3D images. A dental cast was then used to fix the signal that can be recognized by the software. The dental cast was transformed to 3D data with a laser scanner, and a program named Rapidform running on a personal computer matched the dental cast and the mandible image to generate the virtual image. Registration was then achieved by a video monitoring system. Using this technology, the virtual image of the mandible and the cutting planes can both overlay the real model of the mandible made by RP. This study found an effective way to perform registration using a dental cast, and this approach might be a powerful option for the registration of augmented reality. Supported by the Program for Innovation Research Team of Shanghai Municipal Education Commission.

  3. Robotics and Virtual Reality for Cultural Heritage Digitization and Fruition

    NASA Astrophysics Data System (ADS)

    Calisi, D.; Cottefoglie, F.; D'Agostini, L.; Giannone, F.; Nenci, F.; Salonia, P.; Zaratti, M.; Ziparo, V. A.

    2017-05-01

    In this paper we present our novel approach for acquiring and managing digital models of archaeological sites, and the visualization techniques used to showcase them. In particular, we demonstrate two technologies: our robotic system for the digitization of archaeological sites (DigiRo), the result of over three years of effort by a group of cultural heritage experts, computer scientists, and roboticists, and our cloud-based archaeological information system (ARIS). Finally, we describe the viewers we developed to inspect and navigate the 3D models: a viewer for the web (ROVINA Web Viewer) and an immersive viewer for virtual reality (ROVINA VR Viewer).

  4. Augmented reality for breast imaging.

    PubMed

    Rancati, Alberto; Angrigiani, Claudio; Nava, Maurizio B; Catanuto, Giuseppe; Rocco, Nicola; Ventrice, Fernando; Dorr, Julio

    2018-06-01

    Augmented reality (AR) enables the superimposition of virtual reality reconstructions onto clinical images of a real patient, in real time. This allows visualization of internal structures through overlying tissues, thereby providing a virtual transparency vision of surgical anatomy. AR has been applied to neurosurgery, which utilizes a relatively fixed space, frames, and bony references; the application of AR facilitates the relationship between virtual and real data. Augmented breast imaging (ABI) is described. Breast MRI studies for breast implant patients with seroma were performed using a Siemens 3T system with a body coil and a four-channel bilateral phased-array breast coil as the transmitter and receiver, respectively. Gadolinium was injected as a contrast agent (0.1 mmol/kg at 2 mL/s) using a programmable power injector. DICOM-formatted image data from 10 MRI cases of breast implant seroma and 10 MRI cases with T1-2 N0 M0 breast cancer were imported and transformed into augmented reality images. ABI demonstrated stereoscopic depth perception, focal point convergence, 3D cursor use, and joystick fly-through. ABI can improve clinical outcomes, providing an enhanced view of the structures to work on. It should be further studied to determine its utility in clinical practice.

  5. Subjective visual vertical assessment with mobile virtual reality system.

    PubMed

    Ulozienė, Ingrida; Totilienė, Milda; Paulauskas, Andrius; Blažauskas, Tomas; Marozas, Vaidotas; Kaski, Diego; Ulozas, Virgilijus

    2017-01-01

    The subjective visual vertical (SVV) is a measure of a subject's perceived verticality and a sensitive test of vestibular dysfunction. Despite this, and consequent upon technical and logistical limitations, the SVV has not entered mainstream clinical practice. The aim of the study was to develop a mobile virtual reality-based system for the SVV test, evaluate the suitability of different controllers, and assess the system's usability in practical settings. In this study, we describe a novel virtual reality-based system that has been developed to test the SVV using integrated software and hardware, and report normative values across a healthy population. Participants wore a mobile virtual reality headset in order to observe a 3D stimulus presented across separate conditions: static, dynamic, and an immersive real-world ("boat in the sea") SVV test. The virtual reality environment was controlled by the tester using Bluetooth-connected controllers. Participants controlled the movement of a vertical arrow using either a gesture-control armband or a general-purpose gamepad to indicate perceived verticality. We aimed to compare the two object-control methods, determine normal values and compare them with literature data, evaluate the developed system with the System Usability Scale (SUS) questionnaire, and assess possible virtually induced dizziness with a subjective visual analog scale. There were no statistically significant differences in SVV values between the static, dynamic, and virtual reality stimulus conditions obtained using the two different controllers, and the results are compared to those previously reported in the literature using alternative methodologies. The SUS scores for the system were high, with a median of 82.5 for the Myo controller and of 95.0 for the gamepad controller, representing a statistically significant difference between the two controllers (P<0.01). The median virtual reality-induced dizziness for both devices was 0.7. The mobile virtual reality-based system for the subjective visual vertical test is accurate and applicable in the clinical environment. The gamepad-based virtual object control method was preferred by users. The tests were well tolerated, with low dizziness scores in the majority of patients. Copyright © 2018 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Sp. z o.o. All rights reserved.

  6. The Use of Virtual Reality in the Production of Cue-Specific Craving for Cigarettes: A Meta-Analysis.

    PubMed

    Pericot-Valverde, Irene; Germeroth, Lisa J; Tiffany, Stephen T

    2016-05-01

    The cue-reactivity procedure has demonstrated that smokers respond with increases in subjective craving in the presence of smoking-related cues. Virtual reality is an emerging mode of cue presentation for cue-reactivity research. Despite the successful implementation of virtual reality during the last decade, no systematic review has investigated the magnitude of effects across studies. This research systematically reviewed findings from studies using virtual reality in cigarette craving assessment. Eligible studies assessed subjective craving for cigarettes in smokers exposed to smoking-related and neutral environments. Cohen's d was used to assess differences in craving between smoking-related and nonsmoking-related virtual environments. A random effects approach was used to combine effect sizes. A total of 18 studies involving 541 smokers was included in the final analyses. Environments with smoking-related cues produced significant increases in craving relative to environments without smoking-related cues. The mean overall effect size (Cohen's d) was 1.041 (SE = 0.12, 95% CI = 0.81 to 1.28, Z = 8.68, P < .001). The meta-analysis suggested that presentations of smoking cues through virtual reality can produce strong increases in craving among cigarette smokers. This strong cue-reactivity effect, which was comparable in magnitude to the craving effect sizes found with more conventional modes of cue presentation, supports the use of virtual reality for the generation of robust cue-specific craving in cue-reactivity research. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
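
    The random-effects combination of Cohen's d values described in this record can be sketched with a generic DerSimonian-Laird estimator (an illustrative implementation under assumed inputs; the review's actual software and per-study data are not given in the abstract):

```python
import math

def random_effects(effects: list[float], variances: list[float]) -> tuple[float, float]:
    """DerSimonian-Laird random-effects pooling of per-study effect
    sizes (e.g. Cohen's d). Returns (pooled effect, standard error)."""
    w = [1.0 / v for v in variances]                                 # inverse-variance weights
    fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)        # fixed-effect mean
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))      # heterogeneity statistic
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]                   # re-weighted with tau^2
    pooled = sum(wi * d for wi, d in zip(w_star, effects)) / sum(w_star)
    return pooled, math.sqrt(1.0 / sum(w_star))

# Identical studies pool to the common effect with a smaller SE.
d, se = random_effects([1.0, 1.0, 1.0], [0.09, 0.09, 0.09])
print(round(d, 3), round(se, 3))  # 1.0 0.173
```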

  7. Grasping trajectories in a virtual environment adhere to Weber's law.

    PubMed

    Ozana, Aviad; Berman, Sigal; Ganel, Tzvi

    2018-06-01

    Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to object's size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements to a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike as in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement, and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment could differ from those performed in real space, and are subjected to irrelevant effects of perceptual information. Such atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings to movement control within robotic and virtual environments are further discussed.
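
    Weber's law, adherence to which the authors treat as diagnostic of relative (perceptual) rather than absolute (visuomotor) size processing, states that the just-noticeable difference (JND) grows in proportion to object size. A hypothetical check of that signature (names and tolerance are illustrative, not from the paper):

```python
def weber_fraction(size: float, jnd: float) -> float:
    """Weber fraction k = JND / magnitude. Weber's law predicts k is
    constant across sizes (relative resolution); analytic grasping
    control predicts a roughly constant absolute JND instead."""
    return jnd / size

def adheres_to_weber(sizes: list[float], jnds: list[float], rel_tol: float = 0.1) -> bool:
    """True if the measured Weber fractions are constant to within
    rel_tol of their mean, i.e. JND scales linearly with size."""
    ks = [weber_fraction(s, j) for s, j in zip(sizes, jnds)]
    mean_k = sum(ks) / len(ks)
    return all(abs(k - mean_k) / mean_k <= rel_tol for k in ks)

sizes = [20.0, 40.0, 80.0]                         # object widths, mm
print(adheres_to_weber(sizes, [1.0, 2.0, 4.0]))    # True: JND scales with size
print(adheres_to_weber(sizes, [2.0, 2.0, 2.0]))    # False: constant absolute JND
```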

  8. Routine clinical application of virtual reality in abdominal surgery.

    PubMed

    Sampogna, Gianluca; Pugliese, Raffaele; Elli, Marco; Vanzulli, Angelo; Forgione, Antonello

    2017-06-01

    The advantages of 3D reconstruction, immersive virtual reality (VR) and 3D printing in abdominal surgery have been enunciated for many years, but still today their application in routine clinical practice is almost nil. We investigate their feasibility, user appreciation and clinical impact. Fifteen patients undergoing pancreatic, hepatic or renal surgery were studied realizing a 3D reconstruction of target anatomy. Then, an immersive VR environment was developed to import 3D models, and some details of the 3D scene were printed. All the phases of our workflow employed open-source software and low-cost hardware, easily implementable by other surgical services. A qualitative evaluation of the three approaches was performed by 20 surgeons, who filled in a specific questionnaire regarding a clinical case for each organ considered. Preoperative surgical planning and intraoperative guidance was feasible for all patients included in the study. The vast majority of surgeons interviewed scored their quality and usefulness as very good. Despite extra time, costs and efforts necessary to implement these systems, the benefits shown by the analysis of questionnaires recommend to invest more resources to train physicians to adopt these technologies routinely, even if further and larger studies are still mandatory.

  9. Virtual reality in radiology: virtual intervention

    NASA Astrophysics Data System (ADS)

    Harreld, Michael R.; Valentino, Daniel J.; Duckwiler, Gary R.; Lufkin, Robert B.; Karplus, Walter J.

    1995-04-01

    Intracranial aneurysms are the primary cause of non-traumatic subarachnoid hemorrhage. Morbidity and mortality remain high even with current endovascular intervention techniques. It is presently impossible to identify which aneurysms will grow and rupture, however hemodynamics are thought to play an important role in aneurysm development. With this in mind, we have simulated blood flow in laboratory animals using three dimensional computational fluid dynamics software. The data output from these simulations is three dimensional, complex and transient. Visualization of 3D flow structures with standard 2D display is cumbersome, and may be better performed using a virtual reality system. We are developing a VR-based system for visualization of the computed blood flow and stress fields. This paper presents the progress to date and future plans for our clinical VR-based intervention simulator. The ultimate goal is to develop a software system that will be able to accurately model an aneurysm detected on clinical angiography, visualize this model in virtual reality, predict its future behavior, and give insight into the type of treatment necessary. An associated database will give historical and outcome information on prior aneurysms (including dynamic, structural, and categorical data) that will be matched to any current case, and assist in treatment planning (e.g., natural history vs. treatment risk, surgical vs. endovascular treatment risks, cure prediction, complication rates).

  10. An optical tracking system for virtual reality

    NASA Astrophysics Data System (ADS)

    Hrimech, Hamid; Merienne, Frederic

    2009-03-01

    In this paper we present a low-cost 3D tracking system which we have developed and tested in order to move away from traditional 2D interaction techniques (keyboard and mouse) in an attempt to improve the user's experience while using a CVE. Such a tracking system is used to implement 3D interaction techniques that augment the user experience, promote the user's sense of transportation into the virtual world, and increase users' awareness of their partners. The tracking system is a passive optical tracking system using stereoscopy, a technique that allows the reconstruction of three-dimensional information from a pair of images. We have currently deployed our 3D tracking system on a collaborative research platform for investigating 3D interaction techniques in CVEs.
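
    For a calibrated, rectified camera pair, the stereoscopic reconstruction at the core of such a tracker reduces to triangulation from disparity (a minimal sketch with assumed calibration values, not the authors' implementation):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a tracked marker from a rectified stereo pair:
    Z = f * B / d, with focal length f in pixels, baseline B in
    meters, and disparity d in pixels."""
    if disparity_px <= 0.0:
        raise ValueError("marker must be seen by both cameras with positive disparity")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 10 cm baseline, 40 px disparity.
print(depth_from_disparity(800.0, 0.10, 40.0))  # 2.0 (meters)
```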

  11. Effects of virtual reality-based bilateral upper-extremity training on brain activity in post-stroke patients.

    PubMed

    Lee, Su-Hyun; Kim, Yu-Mi; Lee, Byoung-Hee

    2015-07-01

    [Purpose] This study investigated the therapeutic effects of virtual reality-based bilateral upper-extremity training on brain activity in patients with stroke. [Subjects and Methods] Eighteen chronic stroke patients were divided into two groups: the virtual reality-based bilateral upper-extremity training group (n = 10) and the bilateral upper-limb training group (n = 8). The virtual reality-based bilateral upper-extremity training group performed bilateral upper-extremity exercises in a virtual reality environment, while the bilateral upper-limb training group performed only bilateral upper-extremity exercise. All training was conducted 30 minutes per day, three times per week for six weeks, followed by brain activity evaluation. [Results] Electroencephalography showed significant increases in concentration in the frontopolar 2 and frontal 4 areas, and significant increases in brain activity in the frontopolar 1 and frontal 3 areas in the virtual reality-based bilateral upper-extremity training group. [Conclusion] Virtual reality-based bilateral upper-extremity training can improve the brain activity of stroke patients. Thus, virtual reality-based bilateral upper-extremity training is feasible and beneficial for improving brain activation in stroke patients.

  12. Using virtual reality environment to improve joint attention associated with pervasive developmental disorder.

    PubMed

    Cheng, Yufang; Huang, Ruowen

    2012-01-01

    The focus of this study is using a data glove to practice joint attention skills in a virtual reality environment for people with pervasive developmental disorder (PDD). The virtual reality environment provides a safe setting for people with PDD; in particular, when they make errors during practice, there are no painful or dangerous consequences to deal with. Joint attention is a critical skill among the disorder characteristics of children with PDD, and its absence is a deficit that frequently affects their social relationships in daily life. Therefore, this study designed the Joint Attention Skills Learning (JASL) system with a data glove tool to help children with PDD practice joint attention behavior skills. The JASL specifically focuses on the skills of pointing, showing, sharing things, and behavioral interaction with other children with PDD. The system is designed as a playroom scene and presented from the first-person perspective. Its functions include pointing and showing, moving virtual objects, 3D animation, text, speech sounds, and feedback. The study employed a single-subject multiple-probe design across subjects and analysis by visual inspection. The experimental section took 3 months to complete. Surprisingly, the experimental results reveal that the participants further extended their improved joint attention skills into daily life after using the JASL system. The significant potential of this treatment of joint attention for each participant is discussed in detail in this paper. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. 3D interactive augmented reality-enhanced digital learning systems for mobile devices

    NASA Astrophysics Data System (ADS)

    Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie

    2013-03-01

    With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology, and such enhancement can be beneficial for digital learning systems. There are existing research works that apply AR to the design of e-learning systems; however, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3-D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components: markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX in digital learning can be greatly improved with the adoption of the proposed IARL systems.

  14. Binocular vision in a virtual world: visual deficits following the wearing of a head-mounted display.

    PubMed

    Mon-Williams, M; Wann, J P; Rushton, S

    1993-10-01

    The short-term effects on binocular stability of wearing a conventional head-mounted display (HMD) to explore a virtual reality environment were examined. Twenty adult subjects (aged 19-29 years) wore a commercially available HMD for 10 min while cycling around a computer generated 3-D world. The twin screen presentations were set to suit the average interpupillary distance of our subject population, to mimic the conditions of public access virtual reality systems. Subjects were examined before and after exposure to the HMD and there were clear signs of induced binocular stress for a number of the subjects. The implications of introducing such HMDs into the workplace and entertainment environments are discussed.

  15. Operating Room Performance Improves after Proficiency-Based Virtual Reality Cataract Surgery Training.

    PubMed

    Thomsen, Ann Sofia Skou; Bach-Holm, Daniella; Kjærbo, Hadi; Højgaard-Olsen, Klavs; Subhi, Yousif; Saleh, George M; Park, Yoon Soo; la Cour, Morten; Konge, Lars

    2017-04-01

    To investigate the effect of virtual reality proficiency-based training on actual cataract surgery performance. The secondary purpose of the study was to define which surgeons benefit from virtual reality training. Multicenter masked clinical trial. Eighteen cataract surgeons with different levels of experience. Cataract surgical training on a virtual reality simulator (EyeSi) until a proficiency-based test was passed. Technical performance in the operating room (OR) assessed by 3 independent, masked raters using a previously validated task-specific assessment tool for cataract surgery (Objective Structured Assessment of Cataract Surgical Skill). Three surgeries before and 3 surgeries after the virtual reality training were video-recorded, anonymized, and presented to the raters in random order. Novices (non-independently operating surgeons) and surgeons having performed fewer than 75 independent cataract surgeries showed significant improvements in the OR (32% and 38%, respectively) after virtual reality training (P = 0.008 and P = 0.018). More experienced cataract surgeons did not benefit from simulator training. The reliability of the assessments was high, with a generalizability coefficient of 0.92 and 0.86 before and after the virtual reality training, respectively. Clinically relevant cataract surgical skills can be improved by proficiency-based training on a virtual reality simulator. Novices as well as surgeons with an intermediate level of experience showed improvement in OR performance score. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  16. [Development of a virtual model of fibro-bronchoscopy].

    PubMed

    Solar, Mauricio; Ducoing, Eugenio

    2011-09-01

    A virtual model of fibro-bronchoscopy is reported. The virtual model represents the trachea and the bronchi in 3D, creating a virtual world of the bronchial tree. The bronchoscope is modeled to travel through the bronchial tree, imitating the displacement and rotation of a real bronchoscope. The parameters of the virtual model were gradually adjusted according to expert opinion and allowed the training of specialists with a virtual bronchoscope of great realism. The virtual bronchial tree provides realistic cues regarding the movement of the bronchoscope, creating the illusion that the virtual instrument behaves as the real one, with all the cost benefits that this implies.

  17. A Cognitive and Virtual Reality Treatment Program for the Fear of Flying.

    PubMed

    Ferrand, Margot; Ruffault, Alexis; Tytelman, Xavier; Flahault, Cécile; Négovanska, Vélina

    2015-08-01

    Passenger air transport has increased considerably in the past 50 years. It is estimated that between 7 and 40% of the population of industrialized countries is currently afraid of flying. Programs treating the fear of flying have been developed to address this problem. This study measures the effectiveness of one of these programs by focusing on flight-related anxiety before the program and after the first flight following the intervention. There were 157 individuals recruited to participate in a 1-day intervention aimed at treating the fear of flying, using both cognitive behavioral techniques and virtual reality. Anxiety was measured with the Flight Anxiety Situations (FAS) and the Flight Anxiety Modality (FAM) questionnaires. Statistical analyses were conducted on 145 subjects (69.7% female; ages 14 to 64) after the exclusion of individuals with missing data. The results showed a decrease in flight-related anxiety on each subscale of the two questionnaires: the somatic (d=2.44) and cognitive anxiety (d=1.47) subscales of the FAM, and the general flight anxiety (d=3.20), the anticipatory flight anxiety (d=1.74), and the in-flight anxiety (d=1.04) subscales of the FAS. The treatment program using both cognitive behavioral techniques and virtual reality strategies reduced flight-related anxiety in the subjects in our study: subjects demonstrated lower anxiety levels after the first flight following the program than before the intervention.

  18. Develop virtual joint laboratory for education like distance engineering system for robotic applications

    NASA Astrophysics Data System (ADS)

    Latinovic, T. S.; Deaconu, S. I.; Latinović, M. T.; Malešević, N.; Barz, C.

    2015-06-01

    This paper presents a new system that provides distance learning and online training for engineers. The purpose of this paper is to develop and provide a web-based system for the handling and control of remote devices via the Internet. The remote devices are currently industrial or mobile robots [13]; in the future, production machines in the factory will be included in the system. This article also discusses the current use of virtual reality tools in the fields of science and engineering education. One programming tool in particular, the Virtual Reality Modeling Language (VRML), is presented in the light of its applications and capabilities in the development of computer visualization tools for education. One contribution of this paper is to present software tools and examples that can encourage educators to develop virtual reality models to improve teaching in their discipline [12]. This paper aims to introduce a software platform, called VALIP, where users can build, share, and manipulate 3D content in cooperation with interaction processes in a 3D context, while the participating hardware and software devices can be physically and/or logically distributed and connected together via the Internet. VALIP integrates the virtual laboratories of the appropriate partners, thereby allowing access to all laboratories at any of the partner sites. VALIP provides advanced laboratories for training and research within robotics and production engineering, and thus provides great laboratory facilities while requiring only a limited investment of resources at the local partner site.

  19. Virtual reality hardware for use in interactive 3D data fusion and visualization

    NASA Astrophysics Data System (ADS)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but it comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field-of-view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the Impact Channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package with built-in hardware texture mapping, a feature that allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  20. Immersive Visualization of the Solid Earth

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain and immediately seeing an isosurface connecting all points having the same data value as the touched point.
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.
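    The "touch a point, get the isosurface through it" interaction described above can be sketched in miniature (this is an illustration, not the 3D Visualizer implementation): the data value at the touched grid point becomes the isovalue, and every grid cell whose corner values bracket that isovalue is one the isosurface passes through.

```python
# Minimal isosurface-picking sketch on a regular lattice (illustrative
# only). grid maps (i, j, k) indices to scalar values; touching a point
# selects its value as the isovalue, and we flag every 2x2x2 cell whose
# value range spans that isovalue.

def cells_on_isosurface(grid, touched):
    """grid: dict (i, j, k) -> scalar value; touched: the picked index.
    Returns the set of lower-corner indices of cells the isosurface
    through the touched point would cross."""
    iso = grid[touched]
    hits = set()
    for (i, j, k) in grid:
        corners = [grid.get((i + di, j + dj, k + dk))
                   for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]
        if None in corners:        # cell sticks out of the lattice
            continue
        if min(corners) <= iso <= max(corners):
            hits.add((i, j, k))
    return hits


# A 2x2x2 lattice whose value equals the x index: touching corner
# (0, 0, 0) picks isovalue 0, which crosses the single complete cell.
grid = {(i, j, k): float(i)
        for i in range(2) for j in range(2) for k in range(2)}
print(cells_on_isosurface(grid, (0, 0, 0)))  # -> {(0, 0, 0)}
```

    A real implementation would then run marching cubes over the flagged cells to produce triangle geometry; the cell-selection step above is the part driven by the touch interaction.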

  1. Novel Web-based Education Platforms for Information Communication utilizing Gamification, Virtual and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2015-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting teaching and learning of concepts in atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow accessing and visualizing large-scale data on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts. It provides a visually striking platform with realistic terrain, weather information, and water simulation, and gives students an environment in which to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users.

  2. Using Augmented Reality and Virtual Environments in Historic Places to Scaffold Historical Empathy

    ERIC Educational Resources Information Center

    Sweeney, Sara K.; Newbill, Phyllis; Ogle, Todd; Terry, Krista

    2018-01-01

    The authors explore how 3D visualizations of historical sites can be used as pedagogical tools to support historical empathy. They provide three visualizations created by a team at Virginia Tech as examples. They discuss virtual environments and how the digital restoration process is applied. They also define historical empathy, explain why it is…

  3. Nomad devices for interactions in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    George, Paul; Kemeny, Andras; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa; Posselt, Javier; Icart, Emmanuel

    2013-03-01

    Renault is currently setting up a new CAVE™, a 5 rear-projected wall virtual reality room with a combined 3D resolution of 100 Mpixels, distributed over sixteen 4k projectors and two 2k projectors, as well as an additional 3D HD collaborative powerwall. Renault's CAVE™ aims at answering the needs of the various vehicle conception steps [1]. Starting from vehicle design, through the subsequent engineering steps, ergonomic evaluation and perceived quality control, Renault has built up a list of use-cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as the iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by current uses of nomad devices (multi-touch gestures, the iPhone UI look'n'feel and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected in our test platform, a 4-sided homemade low-cost virtual reality room, powered by ultra-short-range and standard HD home projectors.

  4. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system.

    PubMed

    Aronov, Dmitriy; Tank, David W

    2014-10-22

    Virtual reality (VR) enables precise control of an animal's environment and otherwise impossible experimental manipulations. Neural activity in rodents has been studied on virtual 1D tracks. However, 2D navigation imposes additional requirements, such as the processing of head direction and environment boundaries, and it is unknown whether the neural circuits underlying 2D representations can be sufficiently engaged in VR. We implemented a VR setup for rats, including software and large-scale electrophysiology, that supports 2D navigation by allowing rotation and walking in any direction. The entorhinal-hippocampal circuit, including place, head direction, and grid cells, showed 2D activity patterns similar to those in the real world. Furthermore, border cells were observed, and hippocampal remapping was driven by environment shape, suggesting functional processing of virtual boundaries. These results illustrate that 2D spatial representations can be engaged by visual and rotational vestibular stimuli alone and suggest a novel VR tool for studying rat navigation.

  5. Course Design and Student Responses to an Online PBL Course in 3D Modelling for Mining Engineers

    ERIC Educational Resources Information Center

    McAlpine, Iain; Stothard, Phillip

    2005-01-01

    To enhance a course in 3D Virtual Reality (3D VR) modelling for mining engineers, and to create the potential for off campus students to fully engage with the course, a problem based learning (PBL) approach was applied to the course design and all materials and learning activities were provided online. This paper outlines some of the theoretical…

  6. Development of a low-cost virtual reality workstation for training and education

    NASA Technical Reports Server (NTRS)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) it involves 3-dimensional computer graphics; (2) it includes real-time feedback and response to user actions; and (3) it must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, but the high cost of VR technology has limited its practical application to fields with big budgets, such as military combat simulation, commercial pilot training, and certain projects within the space program. However, in the last year there has been a revolution in the cost of VR technology. The speed of inexpensive personal computers has increased dramatically, especially with the introduction of the Pentium processor and the PCI bus for IBM-compatibles, and the cost of high-quality virtual reality peripherals has plummeted. The result is that many public schools, colleges, and universities can afford a PC-based workstation capable of running immersive virtual reality applications. My goal this summer was to assemble and evaluate such a system.

  7. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful to preserve information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funerary objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost virtual reality technologies, namely the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were enhanced by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process of optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  8. ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.

  9. The construction of tridimensional representation of body and external reality in man. The greatest achievement of evolution to date implications for virtual reality.

    PubMed

    Woodbury, M A; Woodbury, M F

    1998-01-01

    Our 3-D body representation, constructed during development by our central nervous system under the direction of our DNA, consists of a holographic representation arising from sensory input in the cerebellum and projected extraneurally in the brain ventricular fluid, which has the chemical structure of liquid crystal. The structure of this 3-D holographic body representation is then extrapolated, by such cognitive instruments as boundarization, geometrization and gestalt organization, onto the external environment, which is consequently perceived as three-dimensional. When the body representation collapses, as in psychotic panic states, patients become terrified as they suddenly lose the perception of themselves and the world around them as three-dimensional and solid in a reliably solid environment, and feel that they are no longer a person but a disorganized blob. In our clinical practice we found serendipitously that the structure of three-dimensionality can be restored, even without medication, by techniques involving stimulation of the body sensory system in the presence of a benevolent psychotherapist. Implications for virtual reality will be discussed.

  10. Web-based Three-dimensional Virtual Body Structures: W3D-VBS

    PubMed Central

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  12. 3D Survey and Augmented Reality for Cultural Heritage. The Case Study of Aurelian Wall at Castra Praetoria in Rome

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Conigliaro, E.; Del Grasso, M.; Papalini, P.; Saccone, M.

    2016-06-01

    The development of close-range photogrammetry has produced many new possibilities for the study of cultural heritage. 3D data acquired with conventional and low-cost cameras can be used to document and investigate the full appearance, materials and conservation status of an artefact, to help the restoration process, and to identify intervention priorities. At the same time, while a 3D survey collects a large amount of three-dimensional data for analysis by researchers, there are very few possibilities for 3D output. Augmented reality is one such possible output, built on very low-cost technology yet giving a very interesting result. Using simple mobile technology (for iPad and Android tablets) and shareware software (in the case presented, "Augment"), it is possible to share and visualize a large number of 3D models on one's own device. The case study presented is part of an architecture graduate thesis carried out in Rome at the Department of Architecture of Roma Tre University. We developed a photogrammetric survey to study the Aurelian Wall at Castra Praetoria in Rome. The survey of 8000 square meters of surface allowed us to identify the stratigraphy and construction phases of a complex portion of the Aurelian Wall, especially around the northern door of the Castra. During this study, the data coming out of the 3D survey (photogrammetric and topographic) were stored and used to create a reverse 3D model, or virtual reconstruction, of the northern door of the Castra. This virtual reconstruction shows the door in the Tiberian period; nowadays it is totally hidden by a curtain wall, but small and significant architectural details reveal its original features. The 3D model of the ancient walls has been mapped with the exact type of bricks and mortar, oriented and scaled according to the existing wall, for use in augmented reality. Finally, two kinds of application have been developed: one on site, where the virtual reconstruction is superimposed on the existing walls using image recognition, and one off site, created with a poster target to show the results during the graduation day as well.

  13. A framework for breast cancer visualization using augmented reality x-ray vision technique in mobile technology

    NASA Astrophysics Data System (ADS)

    Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid

    2017-10-01

    The number of breast cancer patients who require breast biopsy has increased over the past years. Augmented reality guided core biopsy of the breast has become the method of choice for researchers. However, existing cancer visualizations are limited to superimposing the 3D imaging data only. In this paper, we introduce an augmented reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. This framework consists of four phases: first, it acquires images from CT/MRI and processes the medical images into 3D slices; second, it refines these 3D grayscale slices into a 3D breast tumor model using a 3D modeling reconstruction technique. Further, in visualization processing, this virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the augmented reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance towards accurate biopsy targets.

  14. Virtual reality as a new trend in mechanical and electrical engineering education

    NASA Astrophysics Data System (ADS)

    Kamińska, Dorota; Sapiński, Tomasz; Aitken, Nicola; Rocca, Andreas Della; Barańska, Maja; Wietsma, Remco

    2017-12-01

    In their daily practice, academics frequently face a lack of access to the modern equipment and devices currently in use on the market. Moreover, many students have problems understanding issues connected to mechanical and electrical engineering due to their complexity, the necessity of abstract thinking, and the fact that these concepts are not fully tangible. Many studies indicate that virtual reality can be successfully used as a training tool in various domains, such as development, health care, the military or school education. In this paper, an interactive training strategy for mechanical and electrical engineering education is proposed. The prototype of the software consists of a simple interface, making it easy to comprehend and use. Additionally, the main part of the prototype allows the user to virtually manipulate a 3D object to be analyzed and studied. Initial studies indicate that the use of virtual reality can contribute to improving the quality and efficiency of higher education, as well as the qualifications, competencies and skills of graduates, and increase their competitiveness in the labour market.

  15. Building Virtual Mars

    NASA Astrophysics Data System (ADS)

    Abercrombie, S. P.; Menzies, A.; Goddard, C.

    2017-12-01

    Virtual and augmented reality enable scientists to visualize environments that are very difficult, or even impossible to visit, such as the surface of Mars. A useful immersive visualization begins with a high quality reconstruction of the environment under study. This presentation will discuss a photogrammetry pipeline developed at the Jet Propulsion Laboratory to reconstruct 3D models of the surface of Mars using stereo images sent back to Earth by the Curiosity Mars rover. The resulting models are used to support a virtual reality tool (OnSight) that allows scientists and engineers to visualize the surface of Mars as if they were standing on the red planet. Images of Mars present challenges to existing scene reconstruction solutions. Surface images of Mars are sparse with minimal overlap, and are often taken from extremely different viewpoints. In addition, the specialized cameras used by Mars rovers are significantly different than consumer cameras, and GPS localization data is not available on Mars. This presentation will discuss scene reconstruction with an emphasis on coping with limited input data, and on creating models suitable for rendering in virtual reality at high frame rate.
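    The core geometric step behind reconstructing terrain from a stereo pair is depth from disparity. As a generic illustration (not JPL's actual pipeline, and with made-up camera numbers), for a rectified stereo pair the depth of a matched feature is Z = f * B / d, where f is the focal length in pixels, B the stereo baseline, and d the horizontal pixel offset of the feature between the left and right images:

```python
# Generic stereo depth-from-disparity sketch (illustrative numbers only,
# not rover camera specifications).

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return f_px * baseline_m / disparity_px


# Hypothetical example: f = 1000 px, baseline = 0.5 m, disparity = 25 px
# puts the feature 20 m from the camera.
print(depth_from_disparity(1000, 0.5, 25))  # -> 20.0
```

    The inverse relationship explains the challenge the abstract describes: distant terrain has tiny disparities, so small matching errors produce large depth errors, which is one reason sparse, wide-baseline Mars imagery is hard for off-the-shelf reconstruction tools.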

  16. Modulation of cortical activity in 2D versus 3D virtual reality environments: an EEG study.

    PubMed

    Slobounov, Semyon M; Ray, William; Johnson, Brian; Slobounov, Elena; Newell, Karl M

    2015-03-01

    There is growing empirical evidence that virtual reality (VR) is valuable for education, training, entertainment and medical rehabilitation due to its capacity to represent real-life events and situations. However, the neural mechanisms underlying behavioral confounds in VR environments are still poorly understood. In two experiments, we examined the effect of fully immersive 3D stereoscopic presentations and less immersive 2D VR environments on brain functions and behavioral outcomes. In Experiment 1, we examined behavioral and neural underpinnings of spatial navigation tasks using electroencephalography (EEG). In Experiment 2, we examined EEG correlates of postural stability and balance. Our major findings showed that fully immersive 3D VR induced a higher subjective sense of presence along with an enhanced success rate of spatial navigation compared to 2D. In Experiment 1, the power of frontal midline theta EEG (FM-theta) was significantly higher during the encoding phase of route presentation in the 3D VR. In Experiment 2, the 3D VR resulted in greater postural instability and modulation of EEG patterns as a function of 3D versus 2D environments. The findings support the inference that the fully immersive 3D enriched environment requires allocation of more brain and sensory resources for cognitive/motor control during both tasks than 2D presentations. This is further evidence that 3D VR tasks using EEG may be a promising approach for performance enhancement and potential applications in clinical/rehabilitation settings. Copyright © 2014 Elsevier B.V. All rights reserved.
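    The FM-theta measure above is band power in roughly the 4-8 Hz range. As an illustrative sketch only (no data from the study; a synthetic 6 Hz oscillation stands in for an EEG trace), band power can be estimated by summing squared DFT magnitudes over the bins that fall in the band:

```python
# Illustrative theta-band power estimate with a naive DFT (stdlib only).
# The synthetic signal mixes a 6 Hz "theta" component with a weaker
# 20 Hz "beta" component.
import math, cmath

FS = 128        # sampling rate (Hz)
N = FS * 2      # two seconds of signal

signal = [math.sin(2 * math.pi * 6 * n / FS)            # 6 Hz theta
          + 0.3 * math.sin(2 * math.pi * 20 * n / FS)   # 20 Hz beta
          for n in range(N)]

def band_power(x, fs, lo, hi):
    """Sum of squared DFT magnitudes over frequency bins in [lo, hi] Hz."""
    n = len(x)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if lo <= f <= hi:
            coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power

theta = band_power(signal, FS, 4, 8)
beta = band_power(signal, FS, 16, 24)
print(theta > beta)   # -> True: the theta band dominates this signal
```

    Real EEG analysis would use an FFT with windowing (e.g., Welch's method) rather than this O(n^2) DFT, but the band-summation idea is the same.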

  17. Augmented reality on poster presentations, in the field and in the classroom

    NASA Astrophysics Data System (ADS)

    Hawemann, Friedrich; Kolawole, Folarin

    2017-04-01

    Augmented reality (AR) is the direct addition of virtual information to a real-world environment through an interface. In practice, through a mobile device such as a tablet or smartphone, information can be projected onto a target, for example an image on a poster. Mobile devices are so widely distributed today that augmented reality is easily accessible to almost everyone. Numerous studies have shown that multi-dimensional visualization is essential for efficient perception of the spatial, temporal and geometrical configuration of geological structures and processes. Print media such as posters and handouts lack the ability to display content in the third and fourth dimensions, which might be in the space domain, as seen in three-dimensional (3-D) objects, or in the time domain (four-dimensional, 4-D), expressible in the form of videos. Here, we show that augmented reality content can be complementary to geoscience poster presentations, hands-on material and field work. In the latter case, location-based data are loaded so that, for example, a virtual geological profile can be draped over a real-world landscape. In object-based AR, the application is trained to recognize an image or object through the camera of the user's mobile device, such that specific content is automatically downloaded, displayed on the screen of the device, and positioned relative to the trained image or object. We used ZapWorks, a commercially available software application, to create and present examples of poster-based content in which important supplementary information is presented as interactive virtual images, videos and 3-D models. We suggest that the flexibility and real-time interactivity offered by AR make it an invaluable tool for effective geoscience poster presentation and for classroom and field geoscience learning.

  18. Building a virtual archive using brain architecture and Web 3D to deliver neuropsychopharmacology content over the Internet.

    PubMed

    Mongeau, R; Casu, M A; Pani, L; Pillolla, G; Lianas, L; Giachetti, A

    2008-05-01

    The vast amount of heterogeneous data generated in various fields of neuroscience, such as neuropsychopharmacology, can hardly be classified using traditional databases. We present here the concept of a virtual archive, spatially referenced over a simplified 3D brain map and accessible over the Internet. A simple prototype (available at http://aquatics.crs4.it/neuropsydat3d) has been realized using current Web-based virtual reality standards and technologies. It illustrates how primary literature or summary information can easily be retrieved through hyperlinks mapped onto a 3D schema while navigating through neuroanatomy. Furthermore, 3D navigation and visualization techniques are used to enhance the representation of the brain's neurotransmitters and pathways and of the involvement of specific brain areas in particular physiological or behavioral functions. The proposed system shows how the use of a schematic spatial organization of data, widely exploited in other fields (e.g., Geographical Information Systems), can be extremely useful in developing efficient tools for research and teaching in the neurosciences.

  19. A Discussion of Virtual Reality As a New Tool for Training Healthcare Professionals.

    PubMed

    Fertleman, Caroline; Aubugeau-Williams, Phoebe; Sher, Carmel; Lim, Ai-Nee; Lumley, Sophie; Delacroix, Sylvie; Pan, Xueni

    2018-01-01

    Virtual reality technology is an exciting and emerging field with vast applications. Our study sets out the viewpoint that virtual reality software could be a new focus of direction in the development of training tools in medical education. We carried out a panel discussion at the Center for Behavior Change 3rd Annual Conference, prompted by the study, "The Responses of Medical General Practitioners to Unreasonable Patient Demand for Antibiotics--A Study of Medical Ethics Using Immersive Virtual Reality" (1). In Pan et al.'s study, 21 general practitioners (GPs) and GP trainees took part in a videoed, 15-min virtual reality scenario involving unnecessary patient demands for antibiotics. This paper was discussed in-depth at the Center for Behavior Change 3rd Annual Conference; the content of this paper is a culmination of findings and feedback from the panel discussion. The experts involved have backgrounds in virtual reality, general practice, medicines management, medical education and training, ethics, and philosophy. Virtual reality is an unexplored methodology to instigate positive behavioral change among clinicians where other methods have been unsuccessful, such as antimicrobial stewardship. There are several arguments in favor of use of virtual reality in medical education: it can be used for "difficult to simulate" scenarios and to standardize a scenario, for example, for use in exams. However, there are limitations to its usefulness because of the cost implications and the lack of evidence that it results in demonstrable behavior change.

  20. 2D virtual texture on 3D real object with coded structured light

    NASA Astrophysics Data System (ADS)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious artifacts that cannot be touched. We propose a technique to project a synthesized texture onto a real object without contact; it can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object's surface. We propose a global and automatic method to virtually texture a 3D real object.
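
    One common choice for such a "suitable set of light patterns" is a temporal Gray code, in which each projector column is encoded across n binary images and every camera pixel decodes the column it sees. A hedged NumPy sketch of the decoding step (thresholding, calibration and triangulation are omitted, and the paper may use a different coding scheme):

```python
import numpy as np

def gray_to_binary(gray):
    """Vectorised Gray-to-binary conversion: b = g ^ (g>>1) ^ (g>>2) ^ ..."""
    binary = gray.copy()
    shift = gray >> 1
    while np.any(shift):
        binary ^= shift
        shift >>= 1
    return binary

def decode_patterns(bits):
    """bits: (n_patterns, H, W) thresholded camera images, most
    significant pattern first. Returns the projector column index
    observed at every camera pixel."""
    code = np.zeros(bits.shape[1:], dtype=np.int64)
    for plane in bits:
        code = (code << 1) | plane.astype(np.int64)
    return gray_to_binary(code)

# Synthetic check: a 1x8 camera row where pixel j sees projector column j.
cols = np.arange(8)
gray = cols ^ (cols >> 1)                          # Gray-encode the columns
bits = np.stack([((gray >> s) & 1).reshape(1, 8) for s in (2, 1, 0)])
decoded = decode_patterns(bits)                    # recovers 0..7
```

    Gray codes are preferred over plain binary because adjacent columns differ in a single bit, so a decoding error at a stripe boundary displaces the correspondence by at most one column.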

  1. [Real patients in virtual reality: the link between phantom heads and clinical dentistry].

    PubMed

    Serrano, C M; Wesselink, P R; Vervoorn, J M

    2018-05-01

    Preclinical training on phantom heads has until now been considered the 'gold standard' for restorative dental education, but the transition from the preclinic to the treatment of real patients has remained a challenge. With the introduction of the latest generation of virtual reality simulators, students and dental practitioners can convert digital impressions of their patients into virtual reality models and practice procedures in virtual reality before performing them clinically. In this way, clinical decisions can be investigated and practiced prior to the actual treatment, enhancing the safety of the treatment and the practitioner's confidence in performing it. Using the 3M™ True Definition Scanner and the Moog Simodont Dental Trainer, three master's students and a general dental practitioner practiced their procedures in virtual reality prior to performing them on real patients. They were very satisfied with this preparation and with the result of the treatment.

  2. The use of strain gauge platform and virtual reality tool for patient stability examination

    NASA Astrophysics Data System (ADS)

    Walendziuk, Wojciech; Wysk, Lukasz; Skoczylas, Marcin

    2016-09-01

    Virtual reality is one of the fastest growing information technologies. This paper is only a prelude to a larger study on the use of virtual reality tools in analysing the bony labyrinth and the sense of balance. Problems with the functioning of these areas of the body are a controversial topic in debates among specialists, and the result of still-unresolved imbalance treatments is a constant number of people reporting this type of ailment. The authors therefore created a system and application that contains a model of a virtual environment and a tool for modifying obstacles in 3D space. Preliminary studies of patients from a test group aged 22-49 years were also carried out, in which behaviour and sense of balance in relation to the horizontal curvature of the virtual world around the patient were analysed. Experiments carried out on the test group showed that the shape of the curvature of the virtual world space and the age of the patient have a major impact on the sense of balance. The data obtained can be linked with actual disorders of the bony labyrinth and human behaviour at the time of their occurrence. Another important achievement, which will be the subject of further work, is the possible use of a modified version of the software for rehabilitation purposes.
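
    A strain-gauge platform of this kind typically reduces its four load-cell signals to a centre-of-pressure (CoP) trajectory, whose path length serves as a simple stability index. A minimal sketch under that common convention (the paper's actual processing is not specified, and the sensor layout below is hypothetical):

```python
import numpy as np

def center_of_pressure(forces, corners):
    """forces: (T, 4) vertical forces from the four load cells;
    corners: (4, 2) sensor positions in metres. The CoP is the
    force-weighted mean of the sensor positions."""
    forces = np.asarray(forces, dtype=float)
    corners = np.asarray(corners, dtype=float)
    return forces @ corners / forces.sum(axis=1, keepdims=True)

def sway_path_length(cop):
    """Total length of the CoP trajectory; a longer path over the
    same interval indicates a less stable stance."""
    return float(np.linalg.norm(np.diff(cop, axis=0), axis=1).sum())

corners = np.array([[-0.2, -0.2], [0.2, -0.2], [0.2, 0.2], [-0.2, 0.2]])
forces = np.array([[100.0, 100.0, 100.0, 100.0],   # balanced stance
                   [200.0, 100.0, 100.0, 100.0]])  # weight shifted
cop = center_of_pressure(forces, corners)          # (0,0) then (-0.04,-0.04)
```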

  3. Virtual Reconstruction of Lost Architectures: from the Tls Survey to AR Visualization

    NASA Astrophysics Data System (ADS)

    Quattrini, R.; Pierdicca, R.; Frontoni, E.; Barcaglioni, R.

    2016-06-01

    The exploitation of high-quality 3D models for the dissemination of archaeological heritage is a currently investigated topic, although Mobile Augmented Reality platforms for historical architecture are not yet available, leaving room to develop low-cost pipelines for effective content. The paper presents a virtual anastylosis, starting from historical sources and from a 3D model based on a TLS survey. Several efforts and outputs in augmented or immersive environments exploiting this reconstruction are discussed. The work demonstrates the feasibility of a 3D reconstruction approach for complex architectural shapes starting from point clouds, and its AR/VR exploitation, allowing superimposition with archaeological evidence. The major contributions consist in the presentation and discussion of a pipeline going from the virtual model to its simplification, showing several outcomes and comparing the supported data qualities and the advantages and disadvantages due to MAR and VR limitations.

  4. Envisioning the future of home care: applications of immersive virtual reality.

    PubMed

    Brennan, Patricia Flatley; Arnott Smith, Catherine; Ponto, Kevin; Radwin, Robert; Kreutz, Kendra

    2013-01-01

    Accelerating the design of technologies to support health in the home requires (1) better understanding of how the household context shapes consumer health behaviors and (2) the opportunity to afford engineers, designers, and health professionals the chance to systematically study the home environment. We developed the Living Environments Laboratory (LEL) with a fully immersive, six-sided virtual reality CAVE to enable recreation of a broad range of household environments. We have successfully developed a virtual apartment, including a kitchen, living space, and bathroom, and over 2000 people have visited the LEL CAVE. Participants use an electronic wand to activate common household affordances such as opening a refrigerator door or lifting a cup. Challenges currently being explored include creating natural gestures to interface with virtual objects, developing robust, simple procedures to capture actual living environments and render them in a 3D visualization, and devising systematic, stable terminologies to characterize home environments.

  5. Integrating light-sheet imaging with virtual reality to recapitulate developmental cardiac mechanics.

    PubMed

    Ding, Yichen; Abiri, Arash; Abiri, Parinaz; Li, Shuoran; Chang, Chih-Chiang; Baek, Kyung In; Hsu, Jeffrey J; Sideris, Elias; Li, Yilei; Lee, Juhyun; Segura, Tatiana; Nguyen, Thao P; Bui, Alexander; Sevag Packard, René R; Fei, Peng; Hsiai, Tzung K

    2017-11-16

    Currently, there is a limited ability to interactively study developmental cardiac mechanics and physiology. We therefore combined light-sheet fluorescence microscopy (LSFM) with virtual reality (VR) to provide a hybrid platform for 3D architecture and time-dependent cardiac contractile function characterization. By taking advantage of the rapid acquisition, high axial resolution, low phototoxicity, and high fidelity in 3D and 4D (3D spatial + 1D time or spectra), this VR-LSFM hybrid methodology enables interactive visualization and quantification otherwise not available by conventional methods, such as routine optical microscopes. We hereby demonstrate multiscale applicability of VR-LSFM to (a) interrogate skin fibroblasts interacting with a hyaluronic acid-based hydrogel, (b) navigate through the endocardial trabecular network during zebrafish development, and (c) localize gene therapy-mediated potassium channel expression in adult murine hearts. We further combined our batch intensity normalized segmentation algorithm with deformable image registration to interface a VR environment with imaging computation for the analysis of cardiac contraction. Thus, the VR-LSFM hybrid platform demonstrates an efficient and robust framework for creating a user-directed microenvironment in which we uncovered developmental cardiac mechanics and physiology with high spatiotemporal resolution.

  6. Integrating light-sheet imaging with virtual reality to recapitulate developmental cardiac mechanics

    PubMed Central

    Ding, Yichen; Abiri, Arash; Abiri, Parinaz; Li, Shuoran; Chang, Chih-Chiang; Hsu, Jeffrey J.; Sideris, Elias; Li, Yilei; Lee, Juhyun; Segura, Tatiana; Nguyen, Thao P.; Bui, Alexander; Sevag Packard, René R.; Hsiai, Tzung K.

    2017-01-01

    Currently, there is a limited ability to interactively study developmental cardiac mechanics and physiology. We therefore combined light-sheet fluorescence microscopy (LSFM) with virtual reality (VR) to provide a hybrid platform for 3D architecture and time-dependent cardiac contractile function characterization. By taking advantage of the rapid acquisition, high axial resolution, low phototoxicity, and high fidelity in 3D and 4D (3D spatial + 1D time or spectra), this VR-LSFM hybrid methodology enables interactive visualization and quantification otherwise not available by conventional methods, such as routine optical microscopes. We hereby demonstrate multiscale applicability of VR-LSFM to (a) interrogate skin fibroblasts interacting with a hyaluronic acid–based hydrogel, (b) navigate through the endocardial trabecular network during zebrafish development, and (c) localize gene therapy-mediated potassium channel expression in adult murine hearts. We further combined our batch intensity normalized segmentation algorithm with deformable image registration to interface a VR environment with imaging computation for the analysis of cardiac contraction. Thus, the VR-LSFM hybrid platform demonstrates an efficient and robust framework for creating a user-directed microenvironment in which we uncovered developmental cardiac mechanics and physiology with high spatiotemporal resolution. PMID:29202458

  7. Short-term motor learning through non-immersive virtual reality task in individuals with down syndrome.

    PubMed

    de Mello Monteiro, Carlos Bandeira; da Silva, Talita Dias; de Abreu, Luiz Carlos; Fregni, Felipe; de Araujo, Luciano Vieira; Ferreira, Fernando Henrique Inocêncio Borba; Leone, Claudio

    2017-04-14

    Down syndrome (DS) has unique physical, motor and cognitive characteristics. Despite cognitive and motor difficulties, there is a possibility of intervention based on knowledge of motor learning. However, it is important to study the motor learning process of individuals with DS during a virtual reality task in order to justify the use of virtual reality in organizing intervention programs. The aim of this study was to analyze the motor learning process of individuals with DS during a virtual reality task. A total of 40 individuals participated in this study, 20 of whom had DS (24 males and 8 females, mean age of 19 years, range 14-30 years) and 20 typically developing (TD) individuals who were matched by age and gender to the individuals with DS. To examine this issue, we used software that uses 3D images to reproduce a coincidence-timing task. The results showed that all individuals improved performance in the virtual task, but the individuals with DS who started the task with worse performance showed a greater improvement from the beginning. Moreover, they were able to retain and transfer the performance as the speed of the task increased. Individuals with DS are able to learn movements from virtual tasks, even though their movement time was higher compared to the TD individuals. The results showed that individuals with DS who started with low performance improved in the coincidence-timing task with virtual objects, but were less accurate than typically developing individuals. ClinicalTrials.gov Identifier: NCT02719600.

  8. Virtual reality in the operating room of the future.

    PubMed

    Müller, W; Grosskopf, S; Hildebrand, A; Malkewitz, R; Ziegler, R

    1997-01-01

    In cooperation with the Max-Delbrück-Centrum/Robert-Rössle-Klinik (MDC/RRK) in Berlin, the Fraunhofer Institute for Computer Graphics is currently designing and developing a scenario for the operating room of the future. The goal of this project is to integrate new analysis, visualization and interaction tools in order to optimize and refine tumor diagnostics and therapy in combination with laser technology and remote stereoscopic video transfer. To this end, a human 3-D reference model is reconstructed using CT, MR, and anatomical cryosection images from the National Library of Medicine's Visible Human Project. Applying segmentation algorithms and surface-polygonization methods, a 3-D representation is obtained. In addition, a "fly-through" of the virtual patient is realized using 3-D input devices (data glove, tracking system, 6-DOF mouse). In this way, the surgeon can experience entirely new perspectives of the human anatomy. Moreover, using a virtual cutting plane, any cut through the CT volume can be interactively placed and visualized in real time. In conclusion, this project delivers visions for the application of effective visualization and VR systems. Commonly known as Virtual Prototyping and long applied by the automotive industry, this approach shows that VR techniques can also be used to prototype an operating room. After evaluating the design and functionality of the virtual operating room, the MDC plans to build real ORs in the near future. The use of VR techniques provides a more natural interface for the surgeon in the OR (e.g., controlling interactions by voice input). Besides preoperative planning, future work will focus on supporting the surgeon in performing surgical interventions. An optimal synthesis of real and synthetic data, and the inclusion of the visual, aural, and tactile senses in virtual environments, can meet these requirements. This Augmented Reality could represent the environment for the surgeons of tomorrow.
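
    The interactive cutting plane described above amounts to resampling the CT volume on an arbitrary oblique plane. A minimal nearest-neighbour NumPy sketch of that resampling follows; a real system would use trilinear interpolation and GPU texture lookups, and the function here is illustrative, not the project's implementation:

```python
import numpy as np

def sample_cutting_plane(volume, origin, u, v, size, step=1.0):
    """Resample a (Z, Y, X) voxel volume on the plane through `origin`
    spanned by the orthonormal axes u and v, with nearest-neighbour
    lookup; points falling outside the volume become NaN."""
    volume = np.asarray(volume, dtype=float)
    origin = np.asarray(origin, dtype=float)
    s = np.arange(size) * step
    pts = (origin[None, None, :]
           + s[:, None, None] * np.asarray(u, dtype=float)[None, None, :]
           + s[None, :, None] * np.asarray(v, dtype=float)[None, None, :])
    idx = np.rint(pts).astype(int)
    out = np.full((size, size), np.nan)
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=-1)
    zi, yi, xi = idx[inside, 0], idx[inside, 1], idx[inside, 2]
    out[inside] = volume[zi, yi, xi]
    return out

# A toy volume whose voxel value equals its z index: an axial plane
# at z = 2 yields a constant slice of 2s.
vol = np.tile(np.arange(4.0)[:, None, None], (1, 4, 4))
cut = sample_cutting_plane(vol, (2.0, 0.0, 0.0), (0, 1, 0), (0, 0, 1), size=4)
```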

  9. Interactive Immersive Virtualmuseum: Digital Documentation for Virtual Interaction

    NASA Astrophysics Data System (ADS)

    Clini, P.; Ruggeri, L.; Angeloni, R.; Sasso, M.

    2018-05-01

    Thanks to their playful and educational approach, Virtual Museum systems are very effective for the communication of Cultural Heritage. Among the latest technologies, Immersive Virtual Reality is probably the most appealing and potentially effective for this purpose; nevertheless, due to poor user-system interaction, caused by the incomplete maturity of technology specific to museum applications, immersive installations are still quite uncommon in museums. This paper explores the possibilities offered by this technology and presents a workflow that, starting from digital documentation, makes possible interaction with archaeological finds or any other cultural heritage inside different kinds of immersive virtual reality spaces. Two case studies are presented: the National Archaeological Museum of Marche in Ancona and the 3D reconstruction of the Roman Forum of Fanum Fortunae. The two approaches differ not only conceptually but also in content: while the Archaeological Museum is represented in the application simply using spherical panoramas to give the perception of the third dimension, the Roman Forum is a 3D model that allows visitors to move through the virtual space as in the real one. In both cases, the acquisition phase of the artefacts is central: artefacts are digitized with the photogrammetric Structure from Motion technique and then integrated into the immersive virtual space using a PC with an HTC Vive system that allows the user to interact with the 3D models, turning the manipulation of objects into a fun and exciting experience. The challenge, taking advantage of the latest opportunities made available by photogrammetry and ICT, is to enrich visitors' experience in the real museum, making possible interaction with perishable, damaged or lost objects and public access to inaccessible or no-longer-existing places, promoting in this way the preservation of fragile sites.
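
    The spherical-panorama mode used for the museum rooms rests on the equirectangular mapping between viewing directions and panorama pixels. A small NumPy sketch of that lookup (the axis conventions here are an assumption, not taken from the paper):

```python
import numpy as np

def direction_to_equirect(d, width, height):
    """Map a viewing direction (x right, y up, z forward) to pixel
    coordinates in an equirectangular panorama whose centre pixel
    looks along +z."""
    x, y, z = d
    lon = np.arctan2(x, z)                         # -pi..pi, 0 = forward
    lat = np.arcsin(y / np.linalg.norm(d))         # -pi/2..pi/2
    col = (lon / (2 * np.pi) + 0.5) * (width - 1)
    row = (0.5 - lat / np.pi) * (height - 1)
    return col, row

# Looking straight ahead lands on the image centre; looking straight
# up lands on the top row.
col_fwd, row_fwd = direction_to_equirect((0.0, 0.0, 1.0), 361, 181)
col_up, row_up = direction_to_equirect((0.0, 1.0, 0.0), 361, 181)
```

    A VR viewer runs this mapping (in inverted form, pixel to direction) on the GPU for every screen pixel, each frame, which is why even a single photograph can give a convincing sense of the third dimension.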

  10. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  11. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    Describes the user interface (hardware and software), the design space, and preliminary results of a formal user study, conducted in the context of a rich visual-analytics interface containing coordinated views with 2D and 3D visualizations. Keywords: virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture.

  12. New directions in the use of virtual reality for food shopping: marketing and education perspectives.

    PubMed

    Ruppert, Barb

    2011-03-01

    Virtual reality is used in marketing research to shape food selection and purchase decisions. Could it be used to counteract the marketing of less-nutritious foods and teach healthier food selection? This article presents interviews with Raymond Burke, Ph.D., of Indiana University Bloomington, and Rachel Jones, M.P.H., of the University of Utah College of Health. Topics covered include new marketing research technologies, including virtual reality simulations; retailing and shopper behavior; and the use of virtual grocery stores to help students explore quality of diet and food/nutrient relationships. The interviewees discuss how the technologies they have developed fit into research and behavior change related to obesity and diabetes. © 2011 Diabetes Technology Society.

  13. New Directions in the Use of Virtual Reality for Food Shopping: Marketing and Education Perspectives

    PubMed Central

    Ruppert, Barb

    2011-01-01

    Virtual reality is used in marketing research to shape food selection and purchase decisions. Could it be used to counteract the marketing of less-nutritious foods and teach healthier food selection? This article presents interviews with Raymond Burke, Ph.D., of Indiana University Bloomington, and Rachel Jones, M.P.H., of the University of Utah College of Health. Topics covered include new marketing research technologies, including virtual reality simulations; retailing and shopper behavior; and the use of virtual grocery stores to help students explore quality of diet and food/nutrient relationships. The interviewees discuss how the technologies they have developed fit into research and behavior change related to obesity and diabetes. PMID:21527099

  14. Visualizing planetary data by using 3D engines

    NASA Astrophysics Data System (ADS)

    Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.

    2017-09-01

    We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to incorporate recent developments in the field of computer graphics into our scientific visualization systems and to present data products interactively and in higher quality than before. We have started to set up the first applications that will make use of virtual reality (VR) equipment.

  15. Camera pose estimation for augmented reality in a small indoor dynamic scene

    NASA Astrophysics Data System (ADS)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or of dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method based on the planar scene assumption. Using planar structures in the mapping process allows virtual objects to be rendered in a meaningful way, and it improves the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We also propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in dynamic scenes. Experimental results show that using a constrained planar scene improves the accuracy and robustness of our system compared with classical SLAM systems.
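
    The benefit of the planar assumption can be made concrete: all image points of a world plane in two views are related by a single homography, H = K (R + t n^T / d) K^-1, which constrains pose estimation far more strongly than independent point matches. A NumPy sketch with an illustrative calibration matrix and plane (none of these values come from the paper):

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the world plane {X : n.X = d} between two
    views of a calibrated camera: x2 ~ K (R + t n^T / d) K^-1 x1."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.1                                            # small rotation about y
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.3, -0.1, 0.05])
n, d = np.array([0.0, 0.0, 1.0]), 2.0               # the plane z = 2
H = plane_homography(K, R, t, n, d)

X = np.array([0.4, -0.2, 2.0])                      # a point on the plane
x1 = K @ X                                          # projection in view 1
x2 = K @ (R @ X + t)                                # projection in view 2
x2_pred = H @ x1                                    # H maps view 1 -> view 2
```

    Because one H captures the motion of every point on the plane, tracking a plane gives many redundant constraints on (R, t), which is the precision and robustness gain the abstract refers to.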

  16. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    DTIC Science & Technology

    2017-08-01

    Keywords: visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing. ... scientists to employ in the real world. Other than user-friendly software and hardware setup, scientists also need to be able to perform their usual ... and scientific visualization communities mostly have different research priorities. For the VR community, the ability to support real-time user ...

  17. Future directions for the development of virtual reality within an automotive manufacturer.

    PubMed

    Lawson, Glyn; Salanitri, Davide; Waterfield, Brian

    2016-03-01

    Virtual Reality (VR) can reduce time and costs, and lead to increases in quality, in the development of a product. Given the pressure on car companies to reduce time-to-market and to continually improve quality, the automotive industry has championed the use of VR across a number of applications, including design, manufacturing, and training. This paper describes interviews with 11 engineers and employees of allied disciplines from an automotive manufacturer about their current physical and virtual properties and processes. The results guided a review of research findings and scientific advances from the academic literature, which formed the basis of recommendations for future developments of VR technologies and applications. These include: develop a greater range of virtual contexts; use multi-sensory simulation; address perceived differences between virtual and real cars; improve motion capture capabilities; implement networked 3D technology; and use VR for market research. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. 3D-Lab: a collaborative web-based platform for molecular modeling.

    PubMed

    Grebner, Christoph; Norrby, Magnus; Enström, Jonatan; Nilsson, Ingemar; Hogner, Anders; Henriksson, Jonas; Westin, Johan; Faramarzi, Farzad; Werner, Philip; Boström, Jonas

    2016-09-01

    The use of 3D information has shown impact in numerous applications in drug design. However, it is often under-utilized and traditionally limited to specialists. We want to change that, and present an approach making 3D information and molecular modeling accessible and easy-to-use 'for the people'. A user-friendly and collaborative web-based platform (3D-Lab) for 3D modeling, including a blazingly fast virtual screening capability, was developed. 3D-Lab provides an interface to automatic molecular modeling, like conformer generation, ligand alignments, molecular dockings and simple quantum chemistry protocols. 3D-Lab is designed to be modular, and to facilitate sharing of 3D-information to promote interactions between drug designers. Recent enhancements to our open-source virtual reality tool Molecular Rift are described. The integrated drug-design platform allows drug designers to instantaneously access 3D information and readily apply advanced and automated 3D molecular modeling tasks, with the aim to improve decision-making in drug design projects.

  19. [Value of laparoscopic virtual reality simulator in laparoscopic suture ability training of catechumen].

    PubMed

    Cai, Jian-liang; Zhang, Yi; Sun, Guo-feng; Li, Ning-chen; Zhang, Xiang-hua; Na, Yan-qun

    2012-12-01

    To investigate the value of a laparoscopic virtual reality simulator in the laparoscopic suturing training of catechumens. After finishing virtual reality training in basic laparoscopic skills, 26 catechumens were randomly divided into 2 groups: one group undertook advanced laparoscopic skill (suturing technique) training with a laparoscopic virtual reality simulator (virtual group), while the other used a laparoscopic box trainer (box group). Using our homemade simulations, before grouping and after training, every trainee performed a nephropyeloureterostomy under laparoscopy; the running time, anastomosis quality and proficiency were recorded and assessed. For the virtual group, the running time, anastomosis quality and proficiency scores before grouping were (98 ± 11) minutes, 3.20 ± 0.41 and 3.47 ± 0.64, respectively; after training they were (53 ± 8) minutes, 6.87 ± 0.74 and 6.33 ± 0.82, respectively, and all the differences were statistically significant (all P < 0.01). In the box group, the values before grouping were (98 ± 10) minutes, 3.17 ± 0.39 and 3.42 ± 0.67, respectively, and after training were (52 ± 9) minutes, 6.08 ± 0.90 and 6.33 ± 0.78, respectively; all the differences were also statistically significant (all P < 0.01). After training, the running time and proficiency scores of the virtual group were similar to those of the box group (all P > 0.05); however, the anastomosis quality scores in the virtual group were higher than in the box group (P = 0.02). The laparoscopic virtual reality simulator is better than the traditional box trainer for the advanced laparoscopic suturing training of catechumens.

  20. Radiofrequency ablation of hepatic tumors: simulation, planning, and contribution of virtual reality and haptics.

    PubMed

    Villard, Caroline; Soler, Luc; Gangi, Afshin

    2005-08-01

    For radiofrequency ablation (RFA) of liver tumors, evaluation of the vascular architecture, prediction of post-RFA necrosis, and the choice of a suitable needle placement strategy using conventional radiological techniques remain difficult. In an attempt to enhance the safety of RFA, a 3D simulator, treatment planning and training tool has been developed that simulates the insertion of the needle and the necrosis of the treated area, and proposes an optimal needle placement. The 3D scenes are automatically reconstructed from enhanced spiral CT scans. The simulator takes into account the cooling effect of local vessels greater than 3 mm in diameter, making the necrosis shapes more realistic. Optimal needle positioning can be generated automatically by the software to produce complete destruction of the tumor while sparing as much of the healthy liver as possible and avoiding all major structures. We also studied how the use of virtual reality and haptic devices is valuable in making simulation and training realistic and effective.

  1. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm for increasing the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important step toward future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over. This model is especially devoted to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. Results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  2. DJINNI: A Novel Technology Supported Exposure Therapy Paradigm for SAD Combining Virtual Reality and Augmented Reality.

    PubMed

    Ben-Moussa, Maher; Rubo, Marius; Debracque, Coralie; Lange, Wolf-Gero

    2017-01-01

    The present paper explores the benefits and the capabilities of various emerging state-of-the-art interactive 3D and Internet of Things technologies and investigates how these technologies can be exploited to develop a more effective technology supported exposure therapy solution for social anxiety disorder. "DJINNI" is a conceptual design of an in vivo augmented reality (AR) exposure therapy mobile support system that exploits several capturing technologies and integrates the patient's state and situation by vision-based, audio-based, and physiology-based analysis as well as by indoor/outdoor localization techniques. DJINNI also comprises an innovative virtual reality exposure therapy system that is adaptive and customizable to the demands of the in vivo experience and therapeutic progress. DJINNI follows a gamification approach where rewards and achievements are utilized to motivate the patient to progress in her/his treatment. The current paper reviews the state of the art of technologies needed for such a solution and recommends how these technologies could be integrated in the development of an individually tailored and yet feasible and effective AR/virtual reality-based exposure therapy. Finally, the paper outlines how DJINNI could be part of classical cognitive behavioral treatment and how to validate such a setup.

  3. DJINNI: A Novel Technology Supported Exposure Therapy Paradigm for SAD Combining Virtual Reality and Augmented Reality

    PubMed Central

    Ben-Moussa, Maher; Rubo, Marius; Debracque, Coralie; Lange, Wolf-Gero

    2017-01-01

    The present paper explores the benefits and the capabilities of various emerging state-of-the-art interactive 3D and Internet of Things technologies and investigates how these technologies can be exploited to develop a more effective technology supported exposure therapy solution for social anxiety disorder. “DJINNI” is a conceptual design of an in vivo augmented reality (AR) exposure therapy mobile support system that exploits several capturing technologies and integrates the patient’s state and situation by vision-based, audio-based, and physiology-based analysis as well as by indoor/outdoor localization techniques. DJINNI also comprises an innovative virtual reality exposure therapy system that is adaptive and customizable to the demands of the in vivo experience and therapeutic progress. DJINNI follows a gamification approach where rewards and achievements are utilized to motivate the patient to progress in her/his treatment. The current paper reviews the state of the art of technologies needed for such a solution and recommends how these technologies could be integrated in the development of an individually tailored and yet feasible and effective AR/virtual reality-based exposure therapy. Finally, the paper outlines how DJINNI could be part of classical cognitive behavioral treatment and how to validate such a setup. PMID:28503155

  4. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen an impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a piece of virtual object is projected into the real world with which researchers could interact. There are several limitations to the purely VR or AR application when taken within the context of remote planetary exploration. For example, in a purely VR environment, contents of the planet surface (e.g. rocks, terrain, or other features) should be created off-line from a multitude of images using image processing techniques to generate 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real-time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames will lack 3D visual information -i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique will blend the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real-time into the virtual environment. 
Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
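The depth information that plain video overlays lack is exactly what a stereoscopic camera pair recovers: for a rectified pinhole stereo rig, the standard relation is Z = f·B/d, with focal length f in pixels, baseline B in metres, and disparity d in pixels. A small sketch follows; the rig parameters are hypothetical, not taken from the paper.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 35 px measured disparity
depth = disparity_to_depth(35.0, 700.0, 0.12)  # → 2.4 m
```

Applying this per pixel to the live stereo feed yields the depth map that lets the real-world imagery occlude and be occluded by virtual geometry correctly.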

  5. Proof of concept : examining characteristics of roadway infrastructure in various 3D visualization modes.

    DOT National Transportation Integrated Search

    2015-02-01

    Utilizing enhanced visualization in transportation planning and design has gained popularity in the last decade. This work aimed at demonstrating the concept of utilizing a highly immersive virtual reality simulation engine for creating dynamic, inter...

  6. Photorealistic virtual anatomy based on Chinese Visible Human data.

    PubMed

    Heng, P A; Zhang, S X; Xie, Y M; Wong, T T; Chui, Y P; Cheng, C Y

    2006-04-01

    Virtual reality based learning of human anatomy is feasible when a database of 3D organ models is available for the learner to explore, visualize, and dissect in virtual space interactively. In this article, we present our latest work on photorealistic virtual anatomy applications based on the Chinese Visible Human (CVH) data. We have focused on the development of state-of-the-art virtual environments that feature interactive photo-realistic visualization and dissection of virtual anatomical models constructed from ultra-high resolution CVH datasets. We also outline our latest progress in applying these highly accurate virtual and functional organ models to bring a realistic look and feel to advanced surgical simulators. (c) 2006 Wiley-Liss, Inc.

  7. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that supports both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the dedicated non-graphical input control device can be eliminated and replaced by wireless devices such as PDAs, smart phones, Tablet PCs, portable gaming consoles, and Pocket PCs.

  8. On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial.

    PubMed

    Andress, Sebastian; Johnson, Alex; Unberath, Mathias; Winkler, Alexander Felix; Yu, Kevin; Fotouhi, Javad; Weidert, Simon; Osgood, Greg; Navab, Nassir

    2018-04-01

    Fluoroscopic x-ray guidance is a cornerstone for percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many x-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasiunprepared operating rooms. The proposed system builds upon a multimodality marker and simultaneous localization and mapping technique to cocalibrate an optical see-through head mounted display to a C-arm fluoroscopy system. Then, annotations on the 2-D x-ray images can be rendered as virtual objects in 3-D providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, design a feasibility study on a semianthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired x-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects that we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.
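The cocalibration idea, stripped to its geometry: once the HMD and the C-arm have each localized the shared multimodality marker, the transform between their frames is a composition of rigid transforms, T_hmd_carm = T_hmd_marker · inverse(T_carm_marker). The sketch below shows that generic frame-chaining math with pure-Python 4×4 matrices; it is a schematic illustration, not the paper's implementation.

```python
def matmul(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def inverse_rigid(T):
    """Invert a 4x4 rigid transform [R | t]: the inverse is [R^T | -R^T t]."""
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    mt = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [mt[0]], Rt[1] + [mt[1]], Rt[2] + [mt[2]], [0.0, 0.0, 0.0, 1.0]]

def cocalibrate(T_hmd_marker, T_carm_marker):
    """Chain the two marker observations into an HMD-to-C-arm calibration."""
    return matmul(T_hmd_marker, inverse_rigid(T_carm_marker))
```

With this calibration in hand, an annotation expressed in C-arm coordinates can be mapped into the HMD frame and rendered as a virtual object in place.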

  9. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging techniques and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  10. Virtual reality training for health-care professionals.

    PubMed

    Mantovani, Fabrizia; Castelnuovo, Gianluca; Gaggioli, Andrea; Riva, Giuseppe

    2003-08-01

    Emerging changes in health-care delivery are having a significant impact on the structure of health-care professionals' education. Today it is recognized that medical knowledge doubles every 6-8 years, with new medical procedures emerging every day. Although the half-life of medical information is short, the average physician practices for 30 years and the average nurse for 40 years. Continuing education thus represents an important challenge. Recent advances in educational technology are offering an increasing number of innovative learning tools. Among these, Virtual Reality represents a promising area with high potential for enhancing the training of health-care professionals. Virtual Reality training can provide a rich, interactive, engaging educational context, thus supporting experiential learning-by-doing; it can help raise interest and motivation in trainees and effectively support skills acquisition and transfer, since the learning process can be situated within an experiential framework. Current virtual training applications for health care differ widely in both their technological/multimedia sophistication and the types of skills trained, varying for example from telesurgical applications to interactive simulations of the human body and brain, to virtual worlds for emergency training. Other interesting applications include the development of immersive 3D environments for training psychiatrists and psychologists in the treatment of mental disorders. The main aim of this paper is to discuss the rationale and main benefits of using virtual reality in health-care education and training. Significant research and projects carried out in this field will also be presented, followed by a discussion of key issues concerning current limitations and future development directions.

  11. Asymmetric training using virtual reality reflection equipment and the enhancement of upper limb function in stroke patients: a randomized controlled trial.

    PubMed

    Lee, DongJin; Lee, MyungMo; Lee, KyoungJin; Song, ChangHo

    2014-07-01

    Asymmetric movements with both hands contributed to the improvement of spatially coupled motion. Thus, the aim of this study was to investigate the effects of an asymmetric training program using virtual reality reflection equipment on upper limb function in stroke patients. Twenty-four stroke patients were randomly allocated to an experimental group (n=12) or a control group (n=12). Both groups participated in conventional physical therapy for 2×30 min/d, 5 d/wk, for 4 weeks. The experimental group also participated in an asymmetric training program using virtual reality reflection equipment, and the control group participated in a symmetric training program. Both asymmetric and symmetric programs were conducted for 30 min/d, 5 d/wk, for 4 weeks. To compare upper limb function before and after intervention, the Fugl-Meyer Assessment (FMA), the Box and Block Test (BBT), grip strength, range of motion (ROM), and spasticity were assessed. Both groups showed significant increases in upper limb function, except for spasticity, after intervention (P<.05, 1-way repeated-measures analysis of variance [ANOVA]). A significant group-time interaction was demonstrated only for shoulder/elbow/wrist items of FMA, BBT, grip strength, and ROM of wrist flexion, extension, and ulnar deviation (P<.05, 2-way repeated-measures ANOVA). This study confirms that the asymmetric training program using virtual reality reflection equipment is an effective intervention method for improving upper limb function in stroke patients. We suggest that a further study be conducted using a virtual reflection program consisting of tasks relevant to the activities of daily living, which would be more functional than performing simple tasks. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  12. A collaborative virtual reality environment for neurosurgical planning and training.

    PubMed

    Kockro, Ralf A; Stadie, Axel; Schwandt, Eike; Reisch, Robert; Charalampaki, Cleopatra; Ng, Ivan; Yeo, Tseng Tsai; Hwang, Peter; Serra, Luis; Perneczky, Axel

    2007-11-01

    We have developed a highly interactive virtual environment that enables collaborative examination of stereoscopic three-dimensional (3-D) medical imaging data for planning, discussing, or teaching neurosurgical approaches and strategies. The system consists of an interactive console with which the user manipulates 3-D data using hand-held and tracked devices within a 3-D virtual workspace and a stereoscopic projection system. The projection system displays the 3-D data on a large screen while the user is working with it. This setup allows users to interact intuitively with complex 3-D data while sharing this information with a larger audience. We have been using this system on a routine clinical basis and during neurosurgical training courses to collaboratively plan and discuss neurosurgical procedures with 3-D reconstructions of patient-specific magnetic resonance and computed tomographic imaging data or with a virtual model of the temporal bone. Working collaboratively with the 3-D information of a large, interactive, stereoscopic projection provides an unambiguous way to analyze and understand the anatomic spatial relationships of different surgical corridors. In our experience, the system creates a unique forum for open and precise discussion of neurosurgical approaches. We believe the system provides a highly effective way to work with 3-D data in a group, and it significantly enhances teaching of neurosurgical anatomy and operative strategies.

  13. DJ Sim: a virtual reality DJ simulation game

    NASA Astrophysics Data System (ADS)

    Tang, Ka Yin; Loke, Mei Hwan; Chin, Ching Ling; Chua, Gim Guan; Chong, Jyh Herng; Manders, Corey; Khan, Ishtiaq Rasool; Yuan, Miaolong; Farbiz, Farzam

    2009-02-01

    This work describes the process of developing a 3D Virtual Reality (VR) DJ simulation game intended to be displayed on a stereoscopic display. Using a DLP projector and shutter glasses, the user of the system plays a game in which he or she is a DJ in a night club. The night club's music is playing, and the DJ is "scratching" in correspondence to this music. Much in the flavor of Guitar Hero or Dance Dance Revolution, a virtual turntable is manipulated to project information about how the user should perform. The user only needs a small set of hand gestures, corresponding to the turntable scratch movements to play the game. As the music plays, a series of moving arrows approaching the DJ's turntable instruct the user as to when and how to perform the scratches.

  14. Visualizing the process of interaction in a 3D environment

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh

    2007-03-01

    As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze these data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we attempt to show some methods by which user interaction in a virtual reality environment can be visualized and how this can allow us to gain greater insight into the process of interaction/learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.

  15. 3D force/torque characterization of emergency cricothyroidotomy procedure using an instrumented scalpel.

    PubMed

    Ryason, Adam; Sankaranarayanan, Ganesh; Butler, Kathryn L; DeMoya, Marc; De, Suvranu

    2016-08-01

    Emergency Cricothyroidotomy (CCT) is a surgical procedure performed to secure a patient's airway. This high-stakes, but seldom-performed procedure is an ideal candidate for a virtual reality simulator to enhance physician training. For the first time, this study characterizes the force/torque characteristics of the cricothyroidotomy procedure, to guide development of a virtual reality CCT simulator for use in medical training. We analyze the upper force and torque thresholds experienced at the human-scalpel interface. We then group individual surgical cuts based on style of cut and cut medium and perform a regression analysis to create two models that allow us to predict the style of cut performed and the cut medium.
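A regression model that predicts cut style from force/torque features can be sketched as a simple logistic classifier. The sketch below uses plain gradient descent; the features, labels, and data values are invented for illustration and are not the study's data or its actual model.

```python
import math

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Logistic regression by plain gradient descent; returns [w_1, ..., w_n, bias]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            g = sigmoid(z) - yi  # gradient of the cross-entropy loss
            for j in range(len(xi)):
                w[j] -= lr * g * xi[j]
            w[-1] -= lr * g
    return w

def predict(w, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + w[-1]
    return 1 if z > 0 else 0

# Hypothetical features per cut: [peak force (N), peak torque (N*cm)];
# label 0 = one cut style, 1 = another. Entirely made-up data.
X = [[2.1, 5.0], [2.4, 6.1], [2.0, 4.8], [6.5, 14.2], [7.0, 15.5], [6.2, 13.8]]
y = [0, 0, 0, 1, 1, 1]
w = train_logreg(X, y)
```

Fitting such a model to recorded force/torque traces lets a simulator both classify a trainee's cuts and replay realistic haptic thresholds for each cut style.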

  16. STS-120 crew along with Expedition crew members Dan Tani and Sandra Magnus

    NASA Image and Video Library

    2007-08-09

    JSC2007-E-41532 (9 Aug. 2007) --- Astronaut Stephanie D. Wilson, STS-120 mission specialist, uses the virtual reality lab at Johnson Space Center to train for her duties aboard the space shuttle and space station. This type of computer interface, paired with virtual reality training hardware and software, helps to prepare the entire team for dealing with space station elements.

  17. From Vesalius to virtual reality: How embodied cognition facilitates the visualization of anatomy

    NASA Astrophysics Data System (ADS)

    Jang, Susan

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and motorically embodied in our minds. For example, people take longer to rotate mentally an image of their hand not only when there is a greater degree of rotation, but also when the images are presented in a manner incompatible with their natural body movement (Parsons, 1987a, 1994; Cooper & Shepard, 1975; Sekiyama, 1983). Such findings confirm the notion that our mental images and rotations of those images are in fact confined by the laws of physics and biomechanics, because we perceive, think and reason in an embodied fashion. With the advancement of new technologies, virtual reality programs for medical education now enable users to interact directly in a 3-D environment with internal anatomical structures. Given that such structures are not readily viewable to users and thus not previously susceptible to embodiment, coupled with the VR environment also affording all possible degrees of rotation, how people learn from these programs raises new questions. If we embody external anatomical parts we can see, such as our hands and feet, can we embody internal anatomical parts we cannot see? Does manipulating the anatomical part in virtual space facilitate the user's embodiment of that structure and therefore the ability to visualize the structure mentally? Medical students grouped in yoked-pairs were tasked with mastering the spatial configuration of an internal anatomical structure; only one group was allowed to manipulate the images of this anatomical structure in a 3-D VR environment, whereas the other group could only view the manipulation. 
The manipulation group outperformed the visual group, suggesting that the interactivity that took place among the manipulation group promoted visual and motoric embodiment, which in turn enhanced learning. Moreover, when accounting for spatial ability, it was found that manipulation benefits students with low spatial ability more than students with high spatial ability.

  18. Virtual Reality and Its Potential Use in Special Education. Identifying Emerging Issues and Trends in Technology for Special Education.

    ERIC Educational Resources Information Center

    Woodward, John

    As part of a 3-year study to identify emerging issues and trends in technology for special education, this paper addresses the possible contributions of virtual reality technology to educational services for students with disabilities. An example of the use of virtual reality in medical imaging introduces the paper and leads to a brief review of…

  19. Effects of 3D Virtual Simulators in the Introductory Wind Energy Course: A Tool for Teaching Engineering Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Do, Phuong T.; Moreland, John R.; Delgado, Catherine

    Our research provides an innovative solution for optimizing learning effectiveness and improving postsecondary education through the development of virtual simulators that can be easily used and integrated into existing wind energy curriculum. Two 3D virtual simulators are developed in our laboratory for use in an immersive 3D virtual reality (VR) system or for 3D display on a 2D screen. Our goal is to apply these prototypical simulators to train postsecondary students and professionals in wind energy education; and to offer experiential learning opportunities in 3D modeling, simulation, and visualization. The issue of transferring learned concepts to practical applications is a widespread problem in postsecondary education. Related to this issue is a critical demand to educate and train a generation of professionals for the wind energy industry. With initiatives such as the U.S. Department of Energy's “20% Wind Energy by 2030” outlining an exponential increase of wind energy capacity over the coming years, revolutionary educational reform is needed to meet the demand for education in the field of wind energy. These developments and implementation of Virtual Simulators and accompanying curriculum will propel national reforms, meeting the needs of the wind energy industrial movement and addressing broader educational issues that affect a number of disciplines.

  20. Effects of 3D Virtual Simulators in the Introductory Wind Energy Course: A Tool for Teaching Engineering Concepts

    DOE PAGES

    Do, Phuong T.; Moreland, John R.; Delgado, Catherine; ...

    2013-01-01

    Our research provides an innovative solution for optimizing learning effectiveness and improving postsecondary education through the development of virtual simulators that can be easily used and integrated into existing wind energy curriculum. Two 3D virtual simulators are developed in our laboratory for use in an immersive 3D virtual reality (VR) system or for 3D display on a 2D screen. Our goal is to apply these prototypical simulators to train postsecondary students and professionals in wind energy education; and to offer experiential learning opportunities in 3D modeling, simulation, and visualization. The issue of transferring learned concepts to practical applications is a widespread problem in postsecondary education. Related to this issue is a critical demand to educate and train a generation of professionals for the wind energy industry. With initiatives such as the U.S. Department of Energy's “20% Wind Energy by 2030” outlining an exponential increase of wind energy capacity over the coming years, revolutionary educational reform is needed to meet the demand for education in the field of wind energy. These developments and implementation of Virtual Simulators and accompanying curriculum will propel national reforms, meeting the needs of the wind energy industrial movement and addressing broader educational issues that affect a number of disciplines.

  1. Embryonic delay in growth and development related to confined placental trisomy 16 mosaicism, diagnosed by I-Space Virtual Reality.

    PubMed

    Verwoerd-Dikkeboom, Christine M; van Heesch, Peter N A C M; Koning, Anton H J; Galjaard, Robert-Jan H; Exalto, Niek; Steegers, Eric A P

    2008-11-01

    To demonstrate the use of a novel three-dimensional (3D) virtual reality (VR) system in the visualization of first trimester growth and development in a case of confined placental trisomy 16 mosaicism (CPM+16). Case report. Prospective study on first trimester growth using a 3D VR system. A 34-year-old gravida 1, para 0 was seen weekly in the first trimester for 3D ultrasound examinations. Chorionic villus sampling was performed because of an enlarged nuchal translucency (NT) measurement and low pregnancy-associated plasma protein-A levels, followed by amniocentesis. Amniocentesis revealed a CPM+16. On two-dimensional (2D) and 3D ultrasound no structural anomalies were found with normal fetal Dopplers. Growth remained below the 2.3 percentile. At 37 weeks, a female child of 2010 g (<2.5 percentile) was born. After birth, growth climbed to the 50th percentile in the first 2 months. The I-Space VR system provided information about phenotypes not obtainable by standard 2D ultrasound. In this case, the delay in growth and development could be observed very early in pregnancy. Since first trimester screening programs are still improving and becoming even more important, systems such as the I-Space open a new era for in vivo studies on the physiologic and pathologic processes involved in embryogenesis.

  2. Developing an Augmented Reality Environment for Earth Science Education

    NASA Astrophysics Data System (ADS)

    Pratt, M. J.; Skemer, P. A.; Arvidson, R. E.

    2017-12-01

    The emerging field of augmented reality (AR) provides new and exciting ways to explore geologic phenomena for research and education. The primary advantage of AR is that it allows users to physically explore complex three-dimensional structures that were previously inaccessible, for example a remote geologic outcrop or a mineral structure at the atomic scale. OnSight software, for example, uses this approach during tactical operations to plan the Mars Curiosity rover's traverses, providing virtual views in which users can walk through the terrain and around the rover at true scale. This mode of physical exploration allows users more freedom to investigate and understand a 3D structure than is possible on a flat computer screen or within a static PowerPoint presentation during a classroom lecture. The Microsoft HoloLens headset provides the most advanced mobile AR platform currently available to developers. The Fossett Laboratory for Virtual Planetary Exploration at Washington University in St. Louis has applied this technology, coupled with photogrammetric software and the Unity 3D gaming engine, to develop photorealistic environments of 3D geologic outcrops from around the world. The untethered HoloLens is an ideal platform for a classroom setting, as it allows shared experiences of the holograms of interest, projecting them in the same location for all users to explore. Furthermore, the HoloLens allows face-to-face communication during use, a feature important in teaching that virtual reality does not offer. Our development of an AR application includes the design of an online database of photogrammetric outcrop models curated for the current limitations of AR technology. This database will be accessible to those wishing to submit models and free to those wishing to use the application for teaching, outreach, or research purposes.

  3. Evaluation of a haptics-based virtual reality temporal bone simulator for anatomy and surgery training.

    PubMed

    Fang, Te-Yung; Wang, Pa-Chun; Liu, Chih-Hsien; Su, Mu-Chun; Yeh, Shih-Ching

    2014-02-01

Virtual reality simulation training may improve knowledge of anatomy and surgical skills. We evaluated a 3-dimensional, haptic, virtual reality temporal bone simulator for dissection training. The subjects were 7 otolaryngology residents (3 training sessions each) and 7 medical students (1 training session each). The virtual reality temporal bone simulation station included a computer with software that was linked to a force-feedback hand stylus, and the system recorded performance and collisions with vital anatomic structures. Subjects performed virtual reality dissections and completed questionnaires after the training sessions. Residents and students had favorable responses to most questions of the technology acceptance model (TAM) questionnaire. The average TAM scores were above neutral for residents and medical students in all domains, and the average TAM score for residents was significantly higher than that of students in the usefulness domain and lower in the playful domain. The satisfaction questionnaire showed that residents had greater overall satisfaction with cadaver temporal bone dissection training than with training on the virtual reality simulator or plastic temporal bone. For medical students, the average comprehension score increased significantly from before to after training for all anatomic structures. Medical students had significantly more collisions with the dura than residents. The residents had similar mean performance scores after the first and third training sessions for all dissection procedures. The virtual reality temporal bone simulator provided satisfactory training for otolaryngology residents and medical students. Copyright © 2013. Published by Elsevier Ireland Ltd.

  4. Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy.

    PubMed

    Pessaux, Patrick; Diana, Michele; Soler, Luc; Piardi, Tullio; Mutter, Didier; Marescaux, Jacques

    2015-04-01

Augmented reality (AR) in surgery consists of the fusion of synthetic computer-generated images (a 3D virtual model) obtained from preoperative medical imaging with real-time patient images, in order to visualize unapparent anatomical details. The 3D model can also be used for preoperative planning of the procedure. The potential of AR navigation as a tool to improve the safety of surgical dissection is outlined here for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained using a thoracoabdominal CT scan with custom software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed onto the operative field. A computer scientist manually registered virtual and real images in real time using a video mixer (MX 70; Panasonic, Secaucus, NJ). Two totally robotic AR-assisted segment V resections and one segment VI resection were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6-10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful, with no perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.

  5. Exploring Virtual Reality for Classroom Use: The Virtual Reality and Education Lab at East Carolina University.

    ERIC Educational Resources Information Center

    Auld, Lawrence W. S.; Pantelidis, Veronica S.

    1994-01-01

    Describes the Virtual Reality and Education Lab (VREL) established at East Carolina University to study the implications of virtual reality for elementary and secondary education. Highlights include virtual reality software evaluation; hardware evaluation; computer-based curriculum objectives which could use virtual reality; and keeping current…

  6. Incidental memory and navigation in panoramic virtual reality for electronic commerce.

    PubMed

    Howes, A; Miles, G E; Payne, S J; Mitchell, C D; Davies, A J

    2001-01-01

    Recently much effort has been dedicated to designing and implementing World Wide Web sites for virtual shopping and e-commerce. Despite this effort, relatively little empirical work has been done to determine the effectiveness with which different site designs sell products. We report three experiments in which participants were asked to search for products in various experimental e-commerce sites. Across the experiments participants were asked to search in either QTVR (QuickTime Virtual Reality), hypertext, or pictorially rich hypertext environments; they were then tested for their ability to recall the products seen and to recognize product locations. The experiments demonstrated that when using QTVR (Experiments 1, 2, and 3) or pictorial environments (Experiment 2), participants retained more information about products that were incidental to their goals. In two of the experiments it was shown that participants navigated more efficiently when using a QTVR environment. The costs and benefits of using 3D virtual environments for on-line shops are discussed. Actual or potential applications of this research include support for the development of e-commerce design guidelines.

  7. Stereoscopic neuroanatomy lectures using a three-dimensional virtual reality environment.

    PubMed

    Kockro, Ralf A; Amaxopoulou, Christina; Killeen, Tim; Wagner, Wolfgang; Reisch, Robert; Schwandt, Eike; Gutenberg, Angelika; Giese, Alf; Stofft, Eckart; Stadie, Axel T

    2015-09-01

Three-dimensional (3D) computer graphics are increasingly used to supplement the teaching of anatomy. While most systems consist of a program which produces 3D renderings on a workstation with a standard screen, the DextroBeam virtual reality (VR) environment allows the presentation of spatial neuroanatomical models to larger groups of students through a stereoscopic projection system. Second-year medical students (n=169) were randomly allocated to receive a standardised pre-recorded audio lecture detailing the anatomy of the third ventricle accompanied by either a two-dimensional (2D) PowerPoint presentation (n=80) or a 3D animated tour of the third ventricle with the DextroBeam. Students completed a 10-question multiple-choice exam based on the content learned and a subjective evaluation of the teaching method immediately after the lecture. Students in the 2D group achieved a mean score of 5.19 (±2.12) compared to 5.45 (±2.16) in the 3D group, with the results in the 3D group statistically non-inferior to those of the 2D group (p<0.0001). The students rated the 3D method superior to 2D teaching in four domains (spatial understanding, application in future anatomy classes, effectiveness, enjoyableness) (p<0.01). Stereoscopically enhanced 3D lectures are valid methods of imparting neuroanatomical knowledge and are well received by students. More research is required to define and develop the role of large-group VR systems in modern neuroanatomy curricula. Copyright © 2015 Elsevier GmbH. All rights reserved.

  8. Virtual Simulation in Enhancing Procedural Training for Fluoroscopy-guided Lumbar Puncture: A Pilot Study.

    PubMed

    Ali, Saad; Qandeel, Monther; Ramakrishna, Rishi; Yang, Carina W

    2018-02-01

Fluoroscopy-guided lumbar puncture (FGLP) is a basic procedural component of radiology residency and neuroradiology fellowship training. Performance of the procedure with limited experience is associated with increased patient discomfort as well as increased radiation dose, puncture attempts, and complication rate. Simulation in health care is a developing field that has potential for enhancing procedural training. We demonstrate the design and utility of a virtual reality simulator for performing FGLP. An FGLP module was developed on an ImmersiveTouch platform, which digitally reproduces the procedural environment with a hologram-like projection. From computed tomography datasets of healthy adult spines, we constructed a 3-D model of the lumbar spine and overlying soft tissues. We assigned different physical characteristics to each tissue type, which the user can experience through haptic feedback while advancing a virtual spinal needle. Virtual fluoroscopy as well as 3-D images can be obtained for procedural planning and guidance. The number of puncture attempts, the distance to the target, the number of fluoroscopic shots, and the approximate radiation dose can be calculated. Preliminary data from users who participated in the simulation were obtained in a postsimulation survey. All users found the simulation to be a realistic replication of the anatomy and procedure and would recommend it to a colleague. On a scale of 1-5 (lowest to highest) rating the virtual simulator training overall, the mean score was 4.3 (range 3-5). We describe the design of a virtual reality simulator for performing FGLP and present the initial experience with this new technique. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  9. Virtual reality: Avatars in human spaceflight training

    NASA Astrophysics Data System (ADS)

    Osterlund, Jeffrey; Lawrence, Brad

    2012-02-01

With advancements in high spatial and temporal resolution graphics, along with advancements in 3D display capabilities to model, simulate, and analyze human-to-machine interfaces and interactions, virtual environments are being used to develop everything from gaming, movie special effects, and animation to the design of automobiles. In aerospace, the use of multiple-object motion capture technology and digital human tools has proven a more cost-effective alternative to physical prototypes, provides a more efficient, flexible, and responsive environment for changes in design and training, and surfaces human factors considerations early in the operation of a complex launch vehicle or spacecraft. United Space Alliance (USA) has deployed these techniques and tools under Research and Development (R&D) activities for both spacecraft assembly and ground processing operations design and training on the Orion Crew Module. USA utilizes specialized products chosen for their functionality, including software and fixed-base hardware (e.g., infrared and visible red cameras), along with cyber gloves to capture the fine motor dexterity of the hands. The key findings of the R&D were: mock-ups should be built so as not to obstruct the cameras' view of the markers being tracked; a mock-up toolkit should be assembled to facilitate dynamic design changes; markers should be placed in accurate positions on humans and flight hardware to aid tracking; 3D models used in the virtual environment should be stripped of non-essential data; workstations with high computational capability are required to handle the large model data sets; and Technology Interchange Meetings with vendors and other industries utilizing virtual reality applications need to occur on a continual basis to enable USA to maintain its leading edge within this technology.
Human spaceflight simulation training that utilizes virtual reality technologies serves to familiarize trainees with operational processes and assess them, enable virtual training, support experimentation with "what if" scenarios, and expedite immediate changes to validate the design implementation. Training benefits include providing 3D animation for post-training assessment; placing avatars within 3D replicas of work environments for hardware assembly or processing; offering multiple viewpoints from which processes can be viewed and assessed, giving evaluators the ability to judge task feasibility and identify potential support equipment needs; and providing human factors determinations, such as reach, visibility, and accessibility. Multiple-object motion capture technology provides an effective tool to train and assess ergonomic risks, to simulate and detect negative interactions between technicians and their proposed workspaces, and to evaluate spaceflight systems prior to, and as part of, the design process to contain costs and reduce schedule delays.

  10. Virtual Reality Technologies for Research and Education in Obesity and Diabetes: Research Needs and Opportunities

    PubMed Central

    Ershow, Abby G; Peterson, Charles M; Riley, William T; Rizzo, Albert “Skip”; Wansink, Brian

    2011-01-01

    The rising rates, high prevalence, and adverse consequences of obesity and diabetes call for new approaches to the complex behaviors needed to prevent and manage these conditions. Virtual reality (VR) technologies, which provide controllable, multisensory, interactive three-dimensional (3D) stimulus environments, are a potentially valuable means of engaging patients in interventions that foster more healthful eating and physical activity patterns. Furthermore, the capacity of VR technologies to motivate, record, and measure human performance represents a novel and useful modality for conducting research. This article summarizes background information and discussions for a joint July 2010 National Institutes of Health – Department of Defense workshop entitled Virtual Reality Technologies for Research and Education in Obesity and Diabetes. The workshop explored the research potential of VR technologies as tools for behavioral and neuroscience studies in diabetes and obesity, and the practical potential of VR in fostering more effective utilization of diabetes- and obesity-related nutrition and lifestyle information. Virtual reality technologies were considered especially relevant for fostering desirable health-related behaviors through motivational reinforcement, personalized teaching approaches, and social networking. Virtual reality might also be a means of extending the availability and capacity of health care providers. Progress in the field will be enhanced by further developing available platforms and taking advantage of VR’s capabilities as a research tool for well-designed hypothesis-testing behavioral science. Multidisciplinary collaborations are needed between the technology industry and academia, and among researchers in biomedical, behavioral, pedagogical, and computer science disciplines. 
Research priorities and funding opportunities for use of VR to improve prevention and management of obesity and diabetes can be found at agency websites (National Institutes of Health: http://grants.nih.gov/grants/guide/index.html; Department of Defense: www.tatrc.org). PMID:21527084

  11. Virtual reality technologies for research and education in obesity and diabetes: research needs and opportunities.

    PubMed

    Ershow, Abby G; Peterson, Charles M; Riley, William T; Rizzo, Albert Skip; Wansink, Brian

    2011-03-01

    The rising rates, high prevalence, and adverse consequences of obesity and diabetes call for new approaches to the complex behaviors needed to prevent and manage these conditions. Virtual reality (VR) technologies, which provide controllable, multisensory, interactive three-dimensional (3D) stimulus environments, are a potentially valuable means of engaging patients in interventions that foster more healthful eating and physical activity patterns. Furthermore, the capacity of VR technologies to motivate, record, and measure human performance represents a novel and useful modality for conducting research. This article summarizes background information and discussions for a joint July 2010 National Institutes of Health - Department of Defense workshop entitled Virtual Reality Technologies for Research and Education in Obesity and Diabetes. The workshop explored the research potential of VR technologies as tools for behavioral and neuroscience studies in diabetes and obesity, and the practical potential of VR in fostering more effective utilization of diabetes- and obesity-related nutrition and lifestyle information. Virtual reality technologies were considered especially relevant for fostering desirable health-related behaviors through motivational reinforcement, personalized teaching approaches, and social networking. Virtual reality might also be a means of extending the availability and capacity of health care providers. Progress in the field will be enhanced by further developing available platforms and taking advantage of VR's capabilities as a research tool for well-designed hypothesis-testing behavioral science. Multidisciplinary collaborations are needed between the technology industry and academia, and among researchers in biomedical, behavioral, pedagogical, and computer science disciplines. 
Research priorities and funding opportunities for use of VR to improve prevention and management of obesity and diabetes can be found at agency websites (National Institutes of Health: http://grants.nih.gov/grants/guide/index.html; Department of Defense: www.tatrc.org). © 2011 Diabetes Technology Society.

  12. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The proposed system is realized with a digital projector, and the general back projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the parameters of the projector. To validate the proposed back projection model, coordinate data collected by a 3D positioning device are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projecting technique.
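
    The abstract does not give the calibration mathematics, so as a hedged, generic sketch of the underlying idea: a projector can be modeled like a pinhole camera with a 3x4 projection matrix, and that matrix can be estimated from 3D-to-2D correspondences via the Direct Linear Transform (DLT). All function names below are illustrative, not the authors' VR-RENDER/VSP-style pipeline:

    ```python
    import numpy as np

    def estimate_projection_dlt(pts3d, pts2d):
        """Estimate a 3x4 pinhole projection matrix P (up to scale)
        from >= 6 non-coplanar 3D-2D correspondences using the
        Direct Linear Transform: stack two linear constraints per
        point and take the null vector of the system via SVD."""
        A = []
        for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 4)   # right singular vector of smallest sigma

    def project(P, pt3d):
        """Project a 3D point to 2D pixel coordinates with P."""
        x = P @ np.append(pt3d, 1.0)
        return x[:2] / x[2]
    ```

    In a real system the correspondences would come from the external 3D positioning equipment (world coordinates) paired with known projector pixel positions; here the sketch only shows the linear-algebra core, without the nonlinear refinement a production calibration would add.
    
    
    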

  13. Building simple multiscale visualizations of outcrop geology using virtual reality modeling language (VRML)

    NASA Astrophysics Data System (ADS)

    Thurmond, John B.; Drzewiecki, Peter A.; Xu, Xueming

    2005-08-01

    Geological data collected from outcrop are inherently three-dimensional (3D) and span a variety of scales, from the megascopic to the microscopic. This presents challenges in both interpreting and communicating observations. The Virtual Reality Modeling Language provides an easy way for geoscientists to construct complex visualizations that can be viewed with free software. Field data in tabular form can be used to generate hierarchical multi-scale visualizations of outcrops, which can convey the complex relationships between a variety of data types simultaneously. An example from carbonate mud-mounds in southeastern New Mexico illustrates the embedding of three orders of magnitude of observation into a single visualization, for the purpose of interpreting depositional facies relationships in three dimensions. This type of raw data visualization can be built without software tools, yet is incredibly useful for interpreting and communicating data. Even simple visualizations can aid in the interpretation of complex 3D relationships that are frequently encountered in the geosciences.
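
    As a minimal illustration of the "field data in tabular form to VRML" idea described above, the sketch below emits a tiny VRML 2.0 scene with one colored sphere per tabular record. The function name and record layout (x, y, z plus an RGB color) are assumptions for illustration; real outcrop visualizations would use richer geometry such as IndexedFaceSet nodes:

    ```python
    def tabular_to_vrml(rows):
        """Emit a minimal VRML 2.0 scene from tabular records of the
        form (x, y, z, r, g, b): one colored sphere per record."""
        out = ["#VRML V2.0 utf8"]  # mandatory VRML97 header line
        for x, y, z, r, g, b in rows:
            out.append(
                "Transform { translation %g %g %g children [\n"
                "  Shape { appearance Appearance {"
                " material Material { diffuseColor %g %g %g } }\n"
                "    geometry Sphere { radius 0.5 } } ] }" % (x, y, z, r, g, b))
        return "\n".join(out)

    # Two hypothetical sample stations, colored red and green
    scene = tabular_to_vrml([(0, 0, 0, 1, 0, 0), (10, 25, 3, 0, 1, 0)])
    ```

    Saved with a `.wrl` extension, such a file is viewable in any free VRML browser, which is the low-barrier workflow the abstract emphasizes.
    
    
    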

  14. Effects of virtual reality-based training and task-oriented training on balance performance in stroke patients.

    PubMed

    Lee, Hyung Young; Kim, You Lim; Lee, Suk Min

    2015-06-01

    [Purpose] This study aimed to investigate the clinical effects of virtual reality-based training and task-oriented training on balance performance in stroke patients. [Subjects and Methods] The subjects were randomly allocated to 2 groups: virtual reality-based training group (n = 12) and task-oriented training group (n = 12). The patients in the virtual reality-based training group used the Nintendo Wii Fit Plus, which provided visual and auditory feedback as well as the movements that enabled shifting of weight to the right and left sides, for 30 min/day, 3 times/week for 6 weeks. The patients in the task-oriented training group practiced additional task-oriented programs for 30 min/day, 3 times/week for 6 weeks. Patients in both groups also underwent conventional physical therapy for 60 min/day, 5 times/week for 6 weeks. [Results] Balance and functional reach test outcomes were examined in both groups. The results showed that the static balance and functional reach test outcomes were significantly higher in the virtual reality-based training group than in the task-oriented training group. [Conclusion] This study suggested that virtual reality-based training might be a more feasible and suitable therapeutic intervention for dynamic balance in stroke patients compared to task-oriented training.

  15. Effects of virtual reality-based training and task-oriented training on balance performance in stroke patients

    PubMed Central

    Lee, Hyung Young; Kim, You Lim; Lee, Suk Min

    2015-01-01

    [Purpose] This study aimed to investigate the clinical effects of virtual reality-based training and task-oriented training on balance performance in stroke patients. [Subjects and Methods] The subjects were randomly allocated to 2 groups: virtual reality-based training group (n = 12) and task-oriented training group (n = 12). The patients in the virtual reality-based training group used the Nintendo Wii Fit Plus, which provided visual and auditory feedback as well as the movements that enabled shifting of weight to the right and left sides, for 30 min/day, 3 times/week for 6 weeks. The patients in the task-oriented training group practiced additional task-oriented programs for 30 min/day, 3 times/week for 6 weeks. Patients in both groups also underwent conventional physical therapy for 60 min/day, 5 times/week for 6 weeks. [Results] Balance and functional reach test outcomes were examined in both groups. The results showed that the static balance and functional reach test outcomes were significantly higher in the virtual reality-based training group than in the task-oriented training group. [Conclusion] This study suggested that virtual reality-based training might be a more feasible and suitable therapeutic intervention for dynamic balance in stroke patients compared to task-oriented training. PMID:26180341

  16. Avatar - a multi-sensory system for real time body position monitoring.

    PubMed

    Jovanov, E; Hanish, N; Courson, V; Stidham, J; Stinson, H; Webb, C; Denny, K

    2009-01-01

Virtual reality and computer-assisted physical rehabilitation applications require unobtrusive and inexpensive real-time monitoring systems. Existing systems are usually complex and expensive and are based on infrared monitoring. In this paper we propose Avatar, a hybrid system consisting of off-the-shelf components and sensors. Absolute positioning of a few reference points is determined using an infrared diode on the subject's body and a set of Wii Remotes as optical sensors. Individual body segments are monitored by intelligent inertial sensor nodes (iSense). The network of inertial nodes is controlled by a master node that serves as a gateway for communication with a capture device. Each sensor node features a 3D accelerometer and a 2-axis gyroscope. The Avatar system is used to control avatars in virtual reality applications, but could be used in a variety of augmented reality, gaming, and computer-assisted physical rehabilitation applications.
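
    The abstract does not describe the iSense node firmware, but hybrid systems of this kind typically fuse fast-but-drifting gyroscope rates with noisy-but-absolute tilt references. As a hedged, single-axis sketch of the standard complementary filter (all names and the weighting constant are illustrative assumptions):

    ```python
    import math

    def complementary_filter(gyro_rates, accels, dt, alpha=0.98):
        """Fuse a 1-axis gyroscope (rad/s) with accelerometer tilt
        readings (ax, az, in g) into a drift-corrected angle estimate.
        alpha weights the integrated gyro rate (smooth, but drifts)
        against the absolute gravity-derived angle (noisy, but stable)."""
        angle = 0.0
        history = []
        for rate, (ax, az) in zip(gyro_rates, accels):
            acc_angle = math.atan2(ax, az)  # absolute tilt from gravity
            angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
            history.append(angle)
        return history
    ```

    In a full system such per-segment orientation estimates would be combined with the Wii Remote optical fixes for absolute position, which is precisely the division of labor the hybrid design exploits.
    
    
    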

  17. Novel virtual reality system integrating online self-face viewing and mirror visual feedback for stroke rehabilitation: rationale and feasibility.

    PubMed

    Shiri, Shimon; Feintuch, Uri; Lorber-Haddad, Adi; Moreh, Elior; Twito, Dvora; Tuchner-Arieli, Maya; Meiner, Zeev

    2012-01-01

To introduce the rationale of a novel virtual reality system based on self-face viewing and mirror visual feedback, and to examine its feasibility as a rehabilitation tool for poststroke patients. A novel motion capture virtual reality system integrating online self-face viewing and mirror visual feedback has been developed for stroke rehabilitation. The system allows the replacement of the impaired arm by a virtual arm. Upon making small movements of the paretic arm, patients view themselves virtually performing healthy full-range movements. A sample of 6 patients in the acute poststroke phase received the virtual reality treatment concomitantly with conservative rehabilitation treatment. Feasibility was assessed during 10 sessions for each participant. All participants succeeded in operating the system, demonstrating its feasibility in terms of adherence and improvement in task performance. Patients' performance within the virtual environment and a set of clinical-functional measures recorded before the virtual reality treatment, at 1 week, and after 3 months indicated neurological status and general functioning improvement. These preliminary results indicate that this newly developed virtual reality system is safe and feasible. Future randomized controlled studies are required to assess whether this system has beneficial effects in terms of enhancing upper limb function and quality of life in poststroke patients.

  18. G2H--graphics-to-haptic virtual environment development tool for PC's.

    PubMed

    Acosta, E; Temkin, B; Krummel, T M; Heinrichs, W L

    2000-01-01

Existing surgical virtual environments for training and preparation have improved greatly, but the improvements have been mostly visual. Incorporating haptics into virtual reality-based surgical simulations would greatly enhance the sense of realism. To aid the development of haptic surgical virtual environments, we have created a graphics-to-haptic (G2H) virtual environment development tool. G2H transforms graphical virtual environments (created or imported) into haptic virtual environments without programming. The G2H capability has been demonstrated using the complex 3D pelvic model of Lucy 2.0, the Stanford Visible Female. The pelvis was made haptic using G2H without any further programming effort.

  19. Learning Application of Astronomy Based Augmented Reality using Android Platform

    NASA Astrophysics Data System (ADS)

    Maleke, B.; Paseru, D.; Padang, R.

    2018-02-01

Astronomy is a branch of science involving the observation of celestial bodies such as stars, planets, comets, nebulae, star clusters, and galaxies, as well as natural phenomena occurring outside the Earth's atmosphere. Astronomy can be learned in several ways, for example from books or by direct observation with a telescope, but both approaches have shortcomings: books present material only as static 2D drawings, while observation with a telescope requires fairly expensive equipment. This study presents a more engaging way of learning: an Augmented Reality (AR) application built on the Android platform. Augmented Reality is a computer-generated combination of the virtual world and the real world. Virtual objects can be text, animations, 3D models, or videos that are combined with the actual environment so that users perceive the virtual objects as part of their surroundings. Because it targets the Android platform, the application can be used on a wide range of Android smartphones, so learning can take place anytime and anywhere. The methodology used in building the application was the Multimedia Lifecycle, with the C# language for the AR programming and flowcharts as the modelling tool. Testing with users showed that the application runs well and can serve as a more engaging alternative way of learning Astronomy.

  20. Effects of Dental 3D Multimedia System on the Performance of Junior Dental Students in Preclinical Practice: A Report from China

    ERIC Educational Resources Information Center

    Hu, Jian; Yu, Hao; Shao, Jun; Li, Zhiyong; Wang, Jiawei; Wang, Yining

    2009-01-01

Background: Computer-assisted tools are rarely adopted for dental education in China. In China, 3D digital technology, such as virtual reality systems, is often rejected in the dental field due to prohibitive pricing. There is also a reluctance to move away from traditional patterns of dental education. Objective: The current study is one of a…

  1. The development, assessment and validation of virtual reality for human anatomy instruction

    NASA Technical Reports Server (NTRS)

    Marshall, Karen Benn

    1996-01-01

This research project seeks to meet the objective of science training by developing, assessing, validating and utilizing VR as a human anatomy training medium. Current anatomy instruction is primarily in the form of lectures and usage of textbooks. In ideal situations, anatomic models, computer-based instruction, and cadaver dissection are utilized to augment traditional methods of instruction. At many institutions, lack of financial resources limits anatomy instruction to textbooks and lectures. However, human anatomy is three-dimensional, unlike the two-dimensional depictions found in textbooks and on computer screens. Virtual reality allows one to step through the computer screen into a 3-D artificial world. The primary objective of this project is to produce a virtual reality application of the abdominopelvic region of a human cadaver that can be taken back to the classroom. The hypothesis is that an immersive learning environment affords quicker anatomic recognition and orientation and a greater level of retention in human anatomy instruction. The goal is to augment, not replace, traditional modes of instruction.

  2. Explore the virtual side of earth science

    USGS Publications Warehouse

    ,

    1998-01-01

    Scientists have always struggled to find an appropriate technology that could represent three-dimensional (3-D) data, facilitate dynamic analysis, and encourage on-the-fly interactivity. In the recent past, scientific visualization has increased the scientist's ability to visualize information, but it has not provided the interactive environment necessary for rapidly changing the model or for viewing the model in ways not predetermined by the visualization specialist. Virtual Reality Modeling Language (VRML 2.0) is a new environment for visualizing 3-D information spaces and is accessible through the Internet with current browser technologies. Researchers from the U.S. Geological Survey (USGS) are using VRML as a scientific visualization tool to help convey complex scientific concepts to various audiences. Kevin W. Laurent, computer scientist, and Maura J. Hogan, technical information specialist, have created a collection of VRML models available through the Internet at Virtual Earth Science (virtual.er.usgs.gov).

  3. Teaching Basic Field Skills Using Screen-Based Virtual Reality Landscapes

    NASA Astrophysics Data System (ADS)

    Houghton, J.; Robinson, A.; Gordon, C.; Lloyd, G. E. E.; Morgan, D. J.

    2016-12-01

    We are using screen-based virtual reality landscapes, created with the Unity 3D game engine, to augment the training geoscience students receive in preparing for fieldwork. Students explore these landscapes as they would real ones, interacting with virtual outcrops to collect data, determine location, and map the geology. Skills for conducting field geological surveys - collecting, plotting and interpreting data; time management and decision making - are introduced interactively and intuitively. As with real landscapes, the virtual landscapes are open-ended terrains with embedded data, so the game does not structure the student's interaction with the information; it is through experience that students learn the methods that work best. These virtual landscapes are not replacements for geological fieldwork but rather virtual spaces between classroom and field in which to train and reinforce essential skills. Importantly, they offer accessible parallel provision for students unable to visit, or fully partake in visiting, the field. The project has received positive feedback from both staff and students. Results show students find it easier to focus on learning these basic field skills in a classroom rather than a field setting, and they make the same mistakes as when learning in the field, validating the realistic nature of the virtual experience and providing the opportunity to learn from these mistakes. The approach also saves time, and therefore resources, in the field, as basic skills are already embedded. 70% of students report increased confidence with how to map boundaries, and 80% have found the virtual training a useful experience. We are also developing landscapes based on real places with 3D photogrammetric outcrops, and a virtual urban landscape in which Engineering Geology students can conduct a site investigation.
This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all our virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.

  4. Visualizing UAS-collected imagery using augmented reality

    NASA Astrophysics Data System (ADS)

    Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.

    2017-05-01

    One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.

  5. Control of vertical posture while elevating one foot to avoid a real or virtual obstacle.

    PubMed

    Ida, Hirofumi; Mohapatra, Sambit; Aruin, Alexander

    2017-06-01

    The purpose of this study was to investigate the control of vertical posture during obstacle avoidance in a real versus a virtual reality (VR) environment. Ten healthy participants stood upright and lifted one leg to avoid colliding with a real obstacle sliding on the floor toward them, or with its virtual image. Virtual obstacles were delivered by a head-mounted display (HMD) or a 3D projector. The acceleration of the foot, center of pressure, and electrical activity of the leg and trunk muscles were measured and analyzed during the time intervals typical for early postural adjustments (EPAs), anticipatory postural adjustments (APAs), and compensatory postural adjustments (CPAs). The results showed that the peak acceleration of foot elevation in the HMD condition decreased significantly compared with the real and 3D projector conditions. Reduced activity of the leg and trunk muscles was seen when dealing with virtual obstacles (HMD and 3D projector) compared with real obstacles. These effects were more pronounced during APAs and CPAs. The onsets of muscle activity in the supporting limb were seen during EPAs and APAs. The observed modulation of muscle activity and altered patterns of movement seen while avoiding a virtual obstacle should be considered when designing virtual rehabilitation protocols.

  6. Design Virtual Reality Scene Roam for Tour Animations Based on VRML and Java

    NASA Astrophysics Data System (ADS)

    Cao, Zaihui; Hu, Zhongyan

    Virtual reality has been involved in a wide range of academic and commercial applications. It can give users a natural feeling of the environment by creating realistic virtual worlds. Implementing a virtual tour through a model of a tourist area on the web has become fashionable. In this paper, we present a web-based application that allows a user to walk through, see, and interact with a fully three-dimensional model of the tourist area. Issues regarding navigation and disorientation are addressed, and we suggest a combination of the metro map and an intuitive navigation system. Finally, we present a prototype that implements our ideas. The application of VR techniques integrates the visualization and animation of three-dimensional modelling into landscape analysis. The use of the VRML format makes it possible to obtain views of the 3D model and to explore it in real time. This is an important goal for the spatial information sciences.

  7. Research on three-dimensional visualization based on virtual reality and Internet

    NASA Astrophysics Data System (ADS)

    Wang, Zongmin; Yang, Haibo; Zhao, Hongling; Li, Jiren; Zhu, Qiang; Zhang, Xiaohong; Sun, Kai

    2007-06-01

    To disclose and display water information, a three-dimensional visualization system based on Virtual Reality (VR) and the Internet was developed, both to demonstrate a "digital water conservancy" application and to support routine reservoir management. To explore and mine in-depth information, after building a high-resolution DEM of reliable quality, topographical analysis, visibility analysis, and reservoir volume computation were studied. In addition, parameters including slope, water level, and NDVI were selected to classify landslide-prone zones in the water-level-fluctuating zone of the reservoir area. To establish the virtual reservoir scene, two methods were used to provide immersion, interaction, and imagination (3I). The first virtual scene contains more detailed textures to increase realism and runs on a graphical workstation with the virtual reality engine OpenSceneGraph (OSG). The second virtual scene, with fewer details to ensure fluent frame rates, is intended for Internet users.

  8. Augmented reality: past, present, future

    NASA Astrophysics Data System (ADS)

    Inzerillo, Laura

    2013-03-01

    A great opportunity has made it possible to carry out cultural, historical, architectural, and social research of considerable international interest: the creation of a museum whose main theme is the visit and discovery of a monument of great prestige, the monumental building of the "Steri" in Palermo. The museum is divided into sub-themes, one of which has aroused such international interest that an application has been submitted to include the museum among UNESCO's cultural heritage: the museum path through the cells of the Inquisition, located inside some buildings of the monumental complex. The project as a whole draws on the various competences involved: history, chemistry, architecture, topography, drawing, representation, virtual communication, and informatics. The museum will be born from the sum of the results of all these disciplines. Methodology, implementation, fruition, the virtual museum, goals, 2D graphic restitution, effects on cultural heritage and the landscape environment, augmented reality, 2D and 3D surveying, hi-touch screens, photogrammetric survey, photographic survey, representation, and 3D drawing are among the topics dealt with in this research.

  9. The effects of virtual reality game exercise on balance and gait of the elderly

    PubMed Central

    Park, Eun-Cho; Kim, Seong-Gil; Lee, Chae-Woo

    2015-01-01

    [Purpose] The aim of this study was to examine the effects of ball exercise, as a general exercise, on the balance abilities of elderly individuals by comparing it with virtual reality exercise. [Subjects and Methods] Thirty elderly individuals residing in communities were randomly divided into a virtual reality game group and a ball exercise group and exercised for 30 min, 3 times a week, for 8 weeks. [Results] Step length increased significantly, and the average sway speed and Timed Up and Go time decreased significantly in both groups. A comparison of sway length after the intervention between the two groups revealed that the virtual reality game exercise resulted in a greater reduction than the ball exercise. [Conclusion] The results of this study indicate that virtual reality game exercise may improve the balance and gait of elderly individuals in communities. PMID:25995578

  10. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    NASA Astrophysics Data System (ADS)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

    Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automatic Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.

  11. Holographic and light-field imaging for augmented reality

    NASA Astrophysics Data System (ADS)

    Lee, Byoungho; Hong, Jong-Young; Jang, Changwon; Jeong, Jinsoo; Lee, Chang-Kun

    2017-02-01

    We discuss the recent state of augmented reality (AR) display technology. To realize AR, various see-through three-dimensional (3D) display techniques have been reported. We describe AR displays with 3D functionality, such as light-field displays and holography. See-through light-field displays can be categorized by the optical elements used to achieve the see-through property: elements that control the path of the light field and those that generate a see-through light field. Holographic displays are also good candidates for AR because they reconstruct wavefront information and can provide realistic virtual imagery. We introduce see-through holographic displays using various optical techniques.

  12. A succinct overview of virtual reality technology use in Alzheimer's disease.

    PubMed

    García-Betances, Rebeca I; Arredondo Waldmeyer, María Teresa; Fico, Giuseppe; Cabrera-Umpiérrez, María Fernanda

    2015-01-01

    We provide a brief review and appraisal of recent and current virtual reality (VR) technology for Alzheimer's disease (AD) applications. We categorize them according to their intended purpose (e.g., diagnosis, patient cognitive training, caregivers' education, etc.), focus feature (e.g., spatial impairment, memory deficit, etc.), methodology employed (e.g., tasks, games, etc.), immersion level, and passive or active interaction. Critical assessment indicates that most of them do not yet take full advantage of virtual environments with high levels of immersion and interaction. Many still rely on conventional 2D graphic displays to create non-immersive or semi-immersive VR scenarios. Important improvements are needed to make VR a better and more versatile assessment and training tool for AD. The use of the latest display technologies available, such as emerging head-mounted displays and 3D smart TV technologies, together with realistic multi-sensorial interaction devices, and neuro-physiological feedback capacity, are some of the most beneficial improvements this mini-review suggests. Additionally, it would be desirable that such VR applications for AD be easily and affordably transferable to in-home and nursing home environments.

  13. Virtual reality in rhinology-a new dimension of clinical experience.

    PubMed

    Klapan, Ivica; Raos, Pero; Galeta, Tomislav; Kubat, Goranka

    2016-07-01

    There is often a need to more precisely identify the extent of pathology and the fine elements of intracranial anatomic features during the diagnostic process and during many operations in the nose, sinus, orbit, and skull base region. In two case reports, we describe the methods used in the diagnostic workup and surgical therapy in the nose and paranasal sinus region. Besides baseline x-ray, multislice computed tomography, and magnetic resonance imaging, operative field imaging was performed via a rapid prototyping model, virtual endoscopy, and 3-D imaging. Different head tissues were visualized in different colors, showing their anatomic interrelations and the extent of pathologic tissue within the operative field. This approach has not yet been used as a standard preoperative or intraoperative procedure in otorhinolaryngology. In this way, we tried to understand the new, visualized "world of anatomic relations within the patient's head" by creating an impression of perception (virtual perception) of the given position of all elements in a particular anatomic region of the head, which does not exist in the real world (virtual world). This approach was aimed at upgrading the diagnostic workup and surgical therapy by ensuring a faster, safer and, above all, simpler operative procedure. In conclusion, any ENT specialist can provide virtual reality support in implementing surgical procedures, with additional control of risks and within the limits of normal tissue, without additional trauma to the surrounding tissue in the anatomic region. At the same time, the virtual reality support provides an impression of the virtual world as the specialist navigates through it and manipulates virtual objects.

  14. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    PubMed

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since head-mounted displays (HMDs), datagloves, tracking systems, and powerful computer graphics resources are now in an affordable price range, the use of PC-based "Virtual Training Systems" has become very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and the signal types used for localizing simulated sound events in a virtual training environment.
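The abstract does not specify the simulation algorithm, but the cheapest level-difference cue for a stereo headset can be sketched as a constant-power pan law. The helper below is an illustrative assumption, not the authors' implementation, and it omits the interaural time delays and HRTF filtering a realistic spatializer would add.

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power stereo panning: map a source azimuth in
    [-90, 90] degrees (negative = left) to (left, right) gains
    satisfying L^2 + R^2 = 1, so perceived loudness stays constant.

    A crude level-difference cue only; hypothetical helper for
    illustration of low-cost localization.
    """
    az = max(-90.0, min(90.0, azimuth_deg))     # clamp to the frontal arc
    theta = (az + 90.0) / 180.0 * (math.pi / 2.0)  # map to 0..pi/2
    return math.cos(theta), math.sin(theta)

left, right = pan_gains(0.0)  # centered source: equal left/right gains
```

The constant-power property matters because a trainee sweeping their head (tracked by the HMD) should hear the event's direction change without its loudness jumping.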

  15. Development of monograph titled "augmented chemistry aldehida & keton" with 3 dimensional (3D) illustration as a supplement book on chemistry learning

    NASA Astrophysics Data System (ADS)

    Damayanti, Latifah Adelina; Ikhsan, Jaslin

    2017-05-01

    The integration of information technology into education is increasingly realized through learning media. Three-dimensional (3D) molecular modeling was implemented in augmented reality as a tangible manifestation of modern technology. Based on augmented reality, three-dimensional virtual objects are projected in real time into the actual environment. This paper reviews a chemistry learning supplement book on aldehydes and ketones equipped with three-dimensional molecular models that students can inspect from various viewpoints. To play the 3D illustrations printed in the book, smartphones running open-source augmented reality software can be used. The aims of this research were to develop the monograph on aldehydes and ketones with three-dimensional (3D) illustrations, to determine its specifications, and to assess its quality. The quality of the monograph was evaluated by experienced chemistry teachers on five aspects: content/materials, presentation, language and images, graphics, and software engineering; the result was that the book is of very good quality for use as a chemistry learning supplement.

  16. Virtual Heritage Tours: Developing Interactive Narrative-Based Environments for Historical Sites

    NASA Astrophysics Data System (ADS)

    Tuck, Deborah; Kuksa, Iryna

    In the last decade there has been a noticeable growth in the use of virtual reality (VR) technologies for reconstructing cultural heritage sites. However, many of these virtual reconstructions evidence little of the sites' social histories. Narrating the Past is a research project that aims to redress this issue by investigating methods for embedding social histories within cultural heritage sites and by creating narrative-based virtual environments (VEs) within them. The project aims to enhance the visitor's knowledge and understanding by developing a navigable 3D story space in which participants are immersed. This has the potential to create a malleable virtual environment allowing visitors to configure their own narrative paths.

  17. Enhancing the Induction Skill of Deaf and Hard-of-Hearing Children with Virtual Reality Technology.

    PubMed

    Passig, D; Eden, S

    2000-01-01

    Many researchers have found that deaf and hard-of-hearing children have unusual difficulty reasoning and reaching reasoned conclusions, particularly when the process of induction is required. The purpose of this study was to investigate whether practice rotating virtual reality (VR) three-dimensional (3D) objects would have a positive effect on the ability of deaf and hard-of-hearing children to use inductive processes when dealing with shapes. Three groups were involved in the study: (1) an experimental group of 21 deaf and hard-of-hearing children, who played a VR 3D game; (2) control group I, 23 deaf and hard-of-hearing children, who played a similar two-dimensional (2D) game (not a VR game); and (3) control group II, 16 hearing children, for whom no intervention was introduced. The results clearly indicate that practicing VR 3D spatial rotations significantly improved the inductive thinking the experimental group used for shapes, compared with the first control group, who did not significantly improve their performance. Also, prior to the VR 3D experience, the deaf and hard-of-hearing children attained lower scores in inductive abilities than the children with normal hearing (control group II). After the VR 3D experience, the experimental group improved to the extent that there was no noticeable difference between them and the children with normal hearing.

  18. Virtual reality and live simulation: a comparison between two simulation tools for assessing mass casualty triage skills.

    PubMed

    Luigi Ingrassia, Pier; Ragazzoni, Luca; Carenzo, Luca; Colombo, Davide; Ripoll Gallardo, Alba; Della Corte, Francesco

    2015-04-01

    This study tested the hypothesis that virtual reality simulation is equivalent to live simulation for testing naive medical students' abilities to perform mass casualty triage using the Simple Triage and Rapid Treatment (START) algorithm in a simulated disaster scenario and to detect the improvement in these skills after a teaching session. Fifty-six students in their last year of medical school were randomized into two groups (A and B). The same scenario, a car accident, was developed identically on the two simulation methodologies: virtual reality and live simulation. On day 1, group A was exposed to the live scenario and group B was exposed to the virtual reality scenario, aiming to triage 10 victims. On day 2, all students attended a 2-h lecture on mass casualty triage, specifically the START triage method. On day 3, groups A and B were crossed over. The groups' abilities to perform mass casualty triage in terms of triage accuracy, intervention correctness, and speed in the scenarios were assessed. Triage and lifesaving treatment scores were assessed equally by virtual reality and live simulation on day 1 and on day 3. Both simulation methodologies detected an improvement in triage accuracy and treatment correctness from day 1 to day 3 (P<0.001). The time to complete each scenario and its decrease from day 1 to day 3 were detected equally in the two groups (P<0.05). Virtual reality simulation proved to be a valuable tool, equivalent to live simulation, to test medical students' abilities to perform mass casualty triage and to detect improvement in such skills.

  19. 3-D surface reconstruction of patient specific anatomic data using a pre-specified number of polygons.

    PubMed

    Aharon, S; Robb, R A

    1997-01-01

    Virtual reality environments provide highly interactive, natural control of the visualization process, significantly enhancing the scientific value of the data produced by medical imaging systems. Due to the computational and real-time display update requirements of virtual reality interfaces, however, the complexity of the organ and tissue surfaces that can be displayed is limited. In this paper, we present a new algorithm for producing a polygonal surface containing a pre-specified number of polygons from patient- or subject-specific volumetric image data. The advantage of this new algorithm is that it effectively tiles complex structures with a specified number of polygons, selected to optimize the trade-off between surface detail and real-time display rates.
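The paper's algorithm is not reproduced here, but the general idea of trading surface detail for a polygon budget can be sketched with a much cruder technique, vertex clustering: snap vertices to a grid, merge duplicates, and drop collapsed triangles. Everything below (names, the grid heuristic) is an illustrative assumption.

```python
def cluster_decimate(vertices, triangles, cell):
    """Crude vertex-clustering simplification (a sketch, not the
    paper's algorithm): snap each vertex to a grid of the given cell
    size, merge coincident vertices, and drop triangles that collapse.

    Coarser cells -> fewer polygons; a caller could bisect on `cell`
    to approach a pre-specified polygon budget.
    """
    key_of = {}         # grid key -> new vertex index
    remap = []          # old vertex index -> new vertex index
    new_vertices = []
    for (x, y, z) in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in key_of:
            key_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(key_of[key])
    new_triangles = []
    for (a, b, c) in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:    # drop degenerate faces
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

For example, two vertices 0.01 units apart merge under a 0.1-unit cell, and any triangle that used both collapses and is removed, shrinking the mesh.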

  20. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system

    PubMed Central

    Aronov, Dmitriy; Tank, David W.

    2015-01-01

    Virtual reality (VR) enables precise control of an animal’s environment and otherwise impossible experimental manipulations. Neural activity in navigating rodents has been studied on virtual linear tracks. However, the spatial navigation system’s engagement in complete two-dimensional environments has not been shown. We describe a VR setup for rats, including control software and a large-scale electrophysiology system, which supports 2D navigation by allowing animals to rotate and walk in any direction. The entorhinal-hippocampal circuit, including place cells, grid cells, head direction cells and border cells, showed 2D activity patterns in VR similar to those in the real world. Hippocampal neurons exhibited various remapping responses to changes in the appearance or the shape of the virtual environment, including a novel form in which a VR-induced cue conflict caused remapping to lock to geometry rather than salient cues. These results suggest a general-purpose tool for novel types of experimental manipulations in navigating rats. PMID:25374363

  1. Reasons to Use Virtual Reality in Education and Training Courses and a Model to Determine When to Use Virtual Reality

    ERIC Educational Resources Information Center

    Pantelidis, Veronica S.

    2009-01-01

    Many studies have been conducted on the use of virtual reality in education and training. This article lists examples of such research. Reasons to use virtual reality are discussed. Advantages and disadvantages of using virtual reality are presented, as well as suggestions on when to use and when not to use virtual reality. A model that can be…

  2. The CAVE (TM) automatic virtual environment: Characteristics and applications

    NASA Technical Reports Server (NTRS)

    Kenyon, Robert V.

    1995-01-01

    Virtual reality may best be defined as the wide-field presentation of computer-generated, multi-sensory information that tracks a user in real time. In addition to the more well-known modes of virtual reality -- head-mounted displays and boom-mounted displays -- the Electronic Visualization Laboratory at the University of Illinois at Chicago recently introduced a third mode: a room constructed from large screens on which the graphics are projected onto three walls and the floor. The CAVE is a multi-person, room-sized, high-resolution, 3D video and audio environment. Graphics are rear-projected in stereo onto three walls and the floor, and viewed with stereo glasses. As a viewer wearing a location sensor moves within its display boundaries, the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. The other viewers in the CAVE are like passengers in a bus, along for the ride. 'CAVE,' the name selected for the virtual reality theater, is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to 'The Simile of the Cave' found in Plato's 'Republic,' in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are his/her only basis for ideas of what real objects are. Rather than having evolved from video games or flight simulation, the CAVE has its motivation rooted in scientific visualization and the SIGGRAPH 92 Showcase effort. The CAVE was designed to be a useful tool for scientific visualization. The Showcase event was an experiment; the Showcase chair and committee advocated an environment for computational scientists to interactively present their research at a major professional conference in a one-to-many format on high-end workstations attached to large projection screens. The CAVE was developed as a 'virtual reality theater' with scientific content and projection that met the criteria of Showcase.

  3. Stereoscopic vascular models of the head and neck: A computed tomography angiography visualization.

    PubMed

    Cui, Dongmei; Lynch, James C; Smith, Andrew D; Wilson, Timothy D; Lehman, Michael N

    2016-01-01

    Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat-screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools, or of clinically relevant anatomical variations, when teaching anatomy. A new approach to teaching anatomy uses computed tomography angiography (CTA) images of the head and neck to create clinically relevant 3D stereoscopic virtual models. These high-resolution images of the arteries can be used in unique and innovative ways to create 3D virtual models of the vasculature as a tool for teaching anatomy. Blood vessel 3D models are presented stereoscopically in a virtual reality environment, can be rotated 360° in all axes, and can be magnified as needed. In addition, flexible views of internal structures are possible. Images are displayed in a stereoscopic mode, and students view them in a small theater-like classroom while wearing polarized 3D glasses. Reconstructed 3D models enable students to visualize vascular structures with clinically relevant anatomical variations in the head and neck and to appreciate spatial relationships among the blood vessels, the skull, and the skin. © 2015 American Association of Anatomists.

  4. Effects of anodal transcranial direct current stimulation combined with virtual reality for improving gait in children with spastic diparetic cerebral palsy: a pilot, randomized, controlled, double-blind, clinical trial.

    PubMed

    Collange Grecco, Luanda André; de Almeida Carvalho Duarte, Natália; Mendonça, Mariana E; Galli, Manuela; Fregni, Felipe; Oliveira, Claudia Santos

    2015-12-01

    To compare the effects of anodal vs. sham transcranial direct current stimulation combined with virtual reality training for improving gait in children with cerebral palsy. A pilot, randomized, controlled, double-blind, clinical trial. Rehabilitation clinics. A total of 20 children with diparesis owing to cerebral palsy. The experimental group received anodal stimulation and the control group received sham stimulation over the primary motor cortex during virtual reality training. All patients underwent the same training programme involving virtual reality (10 sessions). Evaluations were performed before and after the intervention as well as at the one-month follow-up and involved gait analysis, the Gross Motor Function Measure, the Pediatric Evaluation Disability Inventory and the determination of motor evoked potentials. The experimental group had a better performance regarding gait velocity (experimental group: 0.63 ±0.17 to 0.85 ±0.11 m/s; control group: 0.73 ±0.15 to 0.61 ±0.15 m/s), cadence (experimental group: 97.4 ±14.1 to 116.8 ±8.7 steps/minute; control group: 92.6 ±10.4 to 99.7 ±9.7 steps/minute), gross motor function (dimension D experimental group: 59.7 ±12.8 to 74.9 ±13.8; control group: 58.9 ±10.4 to 69.4 ±9.3; dimension E experimental group: 59.0 ±10.9 to 79.1 ±8.5; control group: 60.3 ±10.1 to 67.4 ±11.4) and independent mobility (experimental group: 34.3 ±5.9 to 43.8 ±75.3; control group: 34.4 ±8.3 to 37.7 ±7.7). Moreover, transcranial direct current stimulation led to a significant increase in motor evoked potential (experimental group: 1.4 ±0.7 to 2.6 ±0.4; control group: 1.3 ±0.6 to 1.6 ±0.4). These preliminary findings support the hypothesis that anodal transcranial direct current stimulation combined with virtual reality training could be a useful tool for improving gait in children with cerebral palsy. © The Author(s) 2015.

  5. Practical system for recording spatially lifelike 5.1 surround sound and 3D fully periphonic reproduction

    NASA Astrophysics Data System (ADS)

    Miller, Robert E. (Robin)

    2005-04-01

    In acoustic spaces that are played as extensions of musical instruments, tonality is a major contributor to the experience of reality. Tonality is described as a process of integration in our consciousness, over the reverberation time of the room, of many sonic arrivals in three dimensions, each directionally coded in a learned response by the listener's unique head-related transfer function (HRTF). Preserving this complex 3D directionality is key to lifelike reproduction of a recording. Conventional techniques such as stereo or 5.1-channel surround sound position the listener at the apex of a triangle or the center of a circle, not at the center of the sphere of lifelike hearing. A periphonic reproduction system for music and movie entertainment, Virtual Reality, and Training Simulation, termed PerAmbio 3D/2D (Pat. pending), is described in theory and in subjective tests. The system captures the 3D sound field with a microphone array and transforms the periphonic signals into ordinary 6-channel media for either decoderless 2D replay on 5.1 systems or lossless 3D replay with a decoder and five additional speakers. PerAmbio 3D/2D is described as a practical approach to preserving the spatial perception of reality, where the listening room and speakers disappear, leaving the acoustical impression of the original venue.
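
    Full-sphere ("periphonic") capture of this kind is commonly represented with Ambisonics. As an illustrative sketch only (not the patent-pending PerAmbio encoding), a mono source at a known direction can be panned into first-order Ambisonic B-format, the classic periphonic signal set:

```python
import math

def encode_bformat(sample, azimuth, elevation):
    """Pan a mono sample into first-order Ambisonic B-format (W, X, Y, Z).

    Angles are in radians: azimuth counter-clockwise from straight ahead,
    elevation upward from the horizontal plane.
    """
    w = sample / math.sqrt(2)                             # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return w, x, y, z

# A source straight ahead on the horizontal plane excites only W and X:
w, x, y, z = encode_bformat(1.0, azimuth=0.0, elevation=0.0)
```

    Decoding B-format to a given loudspeaker layout is then a matrixing step, which is why a periphonic mix can also be carried in ordinary multichannel media.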

  6. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. Interior simulator is developed as an example AR application of the proposed method. Using interior simulator, users can visually simulate the placement of virtual furniture and articles in the living room, viewing from many different locations and orientations in real time, so that they can easily design the living room interior without placing real furniture and articles. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame for the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera while non-metric feature points are tracked for overlaying a virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the projective relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of the scene of a living room nearly at video rate (20 frames per second).
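
    Registration of this kind rests on estimating projective mappings from point correspondences. As a minimal illustration (not the authors' algorithm, which uses two base views to define a projective 3D frame), a plane-to-plane homography can be estimated from four or more correspondences with the direct linear transform (DLT):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the direct
    linear transform; src and dst are sequences of >= 4 (x, y) pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)   # null vector of A = flattened homography
    return H / H[2, 2]         # fix the free projective scale

def apply_h(H, pt):
    """Map a 2D point through H with the perspective divide."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

    With exact correspondences the SVD null vector recovers the homography up to scale; in practice the correspondences come from tracked feature points and the estimate is refined robustly.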

  7. A tale of two trainers: virtual reality versus a video trainer for acquisition of basic laparoscopic skills.

    PubMed

    Debes, Anders J; Aggarwal, Rajesh; Balasundaram, Indran; Jacobsen, Morten B

    2010-06-01

    This study aimed to assess the transferability of basic laparoscopic skills between a virtual reality simulator (MIST-VR) and a video trainer box (D-Box). Forty-six medical students were randomized into 2 groups, training on MIST-VR or D-Box. After training with one modality, a crossover assessment on the other was performed. When tested on MIST-VR, the MIST-VR group showed significantly shorter time (90.3 seconds vs 188.6 seconds, P <.001), better economy of movements (4.40 vs 7.50, P <.001), and lower score (224.7 vs 527.0, P <.001). However, when assessed on the D-Box, there was no difference between the groups for time (402.0 seconds vs 325.6 seconds, P = .152), total hand movements (THC) (289 vs 262, P = .792), or total path length (TPL) (34.9 m vs 34.6 m, P = .388). Both simulators provide significant improvement in performance. Our results indicate that skills learned on the MIST-VR are transferable to the D-Box, but the opposite cannot be demonstrated. Copyright 2010 Elsevier Inc. All rights reserved.

  8. Modulation of thermal pain-related brain activity with virtual reality: evidence from fMRI.

    PubMed

    Hoffman, Hunter G; Richards, Todd L; Coda, Barbara; Bills, Aric R; Blough, David; Richards, Anne L; Sharar, Sam R

    2004-06-07

    This study investigated the neural correlates of virtual reality analgesia. Virtual reality significantly reduced subjective pain ratings (i.e. analgesia). Using fMRI, pain-related brain activity was measured for each participant during conditions of no virtual reality and during virtual reality (order randomized). As predicted, virtual reality significantly reduced pain-related brain activity in all five regions of interest: the anterior cingulate cortex, primary and secondary somatosensory cortex, insula, and thalamus (p<0.002, corrected). Results showed direct modulation of human brain pain responses by virtual reality distraction. Copyright 2004 Lippincott Williams and Wilkins.

  9. Deformable three-dimensional model architecture for interactive augmented reality in minimally invasive surgery.

    PubMed

    Vemuri, Anant S; Wu, Jungle Chi-Hsiang; Liu, Kai-Che; Wu, Hurng-Sheng

    2012-12-01

    Surgical procedures have undergone considerable advancement during the last few decades. More recently, the intraoperative availability of some imaging methods has added a new dimension to minimally invasive techniques. Augmented reality in surgery has been a topic of intense interest and research. Augmented reality involves the use of computer vision algorithms on video from endoscopic cameras, or cameras mounted in the operating room, to provide the surgeon additional information that he or she otherwise would have to recognize intuitively. One technique combines a virtual preoperative model of the patient with the endoscope camera using natural or artificial landmarks to provide an augmented reality view in the operating room. The authors' approach is to provide this with the fewest possible changes to the operating room. Software architecture is presented to provide interactive adjustment in the registration of a three-dimensional (3D) model and endoscope video. Augmented reality was used to perform 12 surgeries, including adrenalectomy, ureteropelvic junction obstruction, retrocaval ureter, and pancreas procedures. The general feedback from the surgeons has been very positive, not only in deciding the insertion points but also in recognizing changes in the anatomy. The approach provides a deformable 3D model architecture and its application to the operating room: a 3D model with a deformable structure is needed to show the shape change of soft tissue during surgery, and the software architecture allows interactive adjustment in the registration of each 3D model to the endoscope video.

  10. Human Activity Modeling and Simulation with High Biofidelity

    DTIC Science & Technology

    2013-01-01

    Human activity Modeling and Simulation (M&S) plays an important role in simulation-based training and Virtual Reality (VR). However, human activity M...kinematics and motion mapping/creation; and (e) creation and replication of human activity in 3-D space with true shape and motion. A brief review is

  11. Quick realization of a ship steering training simulation system by virtual reality

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Zhi, Pinghua; Nie, Weiguo

    2003-09-01

    This paper addresses two problems of a ship handling simulator. Firstly, 360° scene generation, especially 3D dynamic sea wave modeling, is described. Secondly, a multi-computer implementation of the ship handling simulator is presented. The paper also gives experimental results from the proposed ship handling simulator.

  12. The Virtual Pelvic Floor, a tele-immersive educational environment.

    PubMed Central

    Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.

    1999-01-01

    This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting table format virtual reality displays, are networked together providing an environment where teacher and students share a high quality three-dimensional anatomical model, and are able to converse, see each other, and to point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378

  13. The study of early human embryos using interactive 3-dimensional computer reconstructions.

    PubMed

    Scarborough, J; Aiton, J F; McLachlan, J C; Smart, S D; Whiten, S C

    1997-07-01

    Tracings of serial histological sections from 4 human embryos at different Carnegie stages were used to create 3-dimensional (3D) computer models of the developing heart. The models were constructed using commercially available software developed for graphic design and the production of computer generated virtual reality environments. They are available as interactive objects which can be downloaded via the World Wide Web. This simple method of 3D reconstruction offers significant advantages for understanding important events in morphological sciences.

  14. Dental impressions using 3D digital scanners: virtual becomes reality.

    PubMed

    Birnbaum, Nathan S; Aaronson, Heidi B

    2008-10-01

    The technologies that have made the use of three-dimensional (3D) digital scanners an integral part of many industries for decades have been improved and refined for application to dentistry. Since the introduction of the first dental impressioning digital scanner in the 1980s, development engineers at a number of companies have enhanced the technologies and created in-office scanners that are increasingly user-friendly and able to produce precisely fitting dental restorations. These systems are capable of capturing 3D virtual images of tooth preparations, from which restorations may be fabricated directly (ie, CAD/CAM systems) or fabricated indirectly (ie, dedicated impression scanning systems for the creation of accurate master models). The use of these products is increasing rapidly around the world and presents a paradigm shift in the way in which dental impressions are made. Several of the leading 3D dental digital scanning systems are presented and discussed in this article.

  15. Modeling of luminance distribution in CAVE-type virtual reality systems

    NASA Astrophysics Data System (ADS)

    Meironke, Michał; Mazikowski, Adam

    2017-08-01

    At present, among the most advanced virtual reality systems are CAVE-type (Cave Automatic Virtual Environment) installations. Such systems usually consist of four, five or six projection screens; in the six-screen case they are arranged in the form of a cube. Providing the user with a strong sense of immersion in such systems depends largely on the optical properties of the system. Modeling of physical phenomena now plays a major role in most fields of science and technology: it allows the behaviour of a device to be simulated without making any changes to its physical construction. In this paper the distribution of luminance in CAVE-type virtual reality systems was modelled. Calculations were performed for a model of a 6-walled CAVE-type installation, based on the Immersive 3D Visualization Laboratory, situated at the Faculty of Electronics, Telecommunications and Informatics at the Gdańsk University of Technology. Tests were carried out for two different scattering distributions of the screen material to check how these characteristics influence the luminance distribution of the whole CAVE. The basic assumptions and simplifications of the modelled CAVE-type installation are described, and the results are presented together with a brief discussion of their usefulness.
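
    A minimal sketch of the kind of quantity such a model evaluates: the luminance a viewer sees at a screen point, given its illuminance and the screen's scattering behaviour. The gain model and Gaussian falloff below are illustrative assumptions, not the measured scattering distributions used in the study:

```python
import math

def screen_luminance(illuminance_lux, gain=1.0, view_angle_deg=0.0,
                     half_gain_angle_deg=35.0):
    """Approximate screen luminance (cd/m^2) seen from a given angle.

    A unit-gain (Lambertian) screen returns E / pi; directional screens
    are modelled here with an assumed falloff that halves the gain at
    half_gain_angle_deg off-axis.
    """
    lambertian = illuminance_lux / math.pi
    falloff = 0.5 ** ((view_angle_deg / half_gain_angle_deg) ** 2)
    return lambertian * gain * falloff

# On-axis, a Lambertian screen under pi*100 lux shows 100 cd/m^2:
on_axis = screen_luminance(math.pi * 100.0)
```

    Summing such contributions over every projector and every screen patch, including inter-screen reflections, is what a full CAVE luminance model has to do.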

  16. Familiarity from the configuration of objects in 3-dimensional space and its relation to déjà vu: a virtual reality investigation.

    PubMed

    Cleary, Anne M; Brown, Alan S; Sawyer, Benjamin D; Nomi, Jason S; Ajoku, Adaeze C; Ryals, Anthony J

    2012-06-01

    Déjà vu is the striking sense that the present situation feels familiar, alongside the realization that it has to be new. According to the Gestalt familiarity hypothesis, déjà vu results when the configuration of elements within a scene maps onto a configuration previously seen, but the previous scene fails to come to mind. We examined this using virtual reality (VR) technology. When a new immersive VR scene resembled a previously-viewed scene in its configuration but people failed to recall the previously-viewed scene, familiarity ratings and reports of déjà vu were indeed higher than for completely novel scenes. People also exhibited the contrasting sense of newness and of familiarity that is characteristic of déjà vu. Familiarity ratings and déjà vu reports among scenes recognized as new increased with increasing feature-match of a scene to one stored in memory, suggesting that feature-matching can produce familiarity and déjà vu when recall fails. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Virtual reality and paranoid ideations in people with an 'at-risk mental state' for psychosis.

    PubMed

    Valmaggia, Lucia R; Freeman, Daniel; Green, Catherine; Garety, Philippa; Swapp, David; Antley, Angus; Prescott, Corinne; Fowler, David; Kuipers, Elizabeth; Bebbington, Paul; Slater, Mel; Broome, Matthew; McGuire, Philip K

    2007-12-01

    Virtual reality provides a means of studying paranoid thinking in controlled laboratory conditions. However, this method has not been used with a clinical group. To establish the feasibility and safety of using virtual reality methodology in people with an at-risk mental state and to investigate the applicability of a cognitive model of paranoia to this group. Twenty-one participants with an at-risk mental state were assessed before and after entering a virtual reality environment depicting the inside of an underground train. Virtual reality did not raise levels of distress at the time of testing or cause adverse experiences over the subsequent week. Individuals attributed mental states to virtual reality characters including hostile intent. Persecutory ideation in virtual reality was predicted by higher levels of trait paranoia, anxiety, stress, immersion in virtual reality, perseveration and interpersonal sensitivity. Virtual reality is an acceptable experimental technique for use with individuals with at-risk mental states. Paranoia in virtual reality was understandable in terms of the cognitive model of persecutory delusions.

  18. A Virtual Audio Guidance and Alert System for Commercial Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Shrum, Richard; Miller, Joel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    Our work in virtual reality systems at NASA Ames Research Center includes the area of aurally-guided visual search, using specially-designed audio cues and spatial audio processing (also known as virtual or "3-D audio") techniques (Begault, 1994). Previous studies at Ames had revealed that use of 3-D audio for Traffic Collision Avoidance System (TCAS) advisories significantly reduced head-down time, compared to a head-down map display (0.5 sec advantage) or no display at all (2.2 sec advantage) (Begault, 1993, 1995; Begault & Pittman, 1994; see Wenzel, 1994, for an audio demo). Since the crew must keep their head up and looking out the window as much as possible when taxiing under low-visibility conditions, and the potential for "blunder" is increased under such conditions, it was sensible to evaluate the audio spatial cueing for a prototype audio ground collision avoidance warning (GCAW) system, and a 3-D audio guidance system. Results were favorable for GCAW, but not for the audio guidance system.

  19. 3D Modelling and Mapping for Virtual Exploration of Underwater Archaeology Assets

    NASA Astrophysics Data System (ADS)

    Liarokapis, F.; Kouřil, P.; Agrafiotis, P.; Demesticha, S.; Chmelík, J.; Skarlatos, D.

    2017-02-01

    This paper investigates immersive technologies to increase exploration time at an underwater archaeological site, both for the public and for researchers and scholars. Focus is on the Mazotos shipwreck site in Cyprus, which is located 44 meters underwater. The aim of this work is two-fold: (a) realistic modelling and mapping of the site and (b) an immersive virtual reality visit. For 3D modelling and mapping, optical data were used. The underwater exploration is composed of a variety of sea elements, including plants, fish, stones, and artefacts, which are randomly positioned. Users can experience an immersive virtual underwater visit to the Mazotos shipwreck site and obtain information about the shipwreck and its contents, raising their archaeological knowledge and cultural awareness.

  20. 3D Modeling of Glacial Erratic Boulders in the Haizi Shan Region, Eastern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Sheriff, M.; Stevens, J.; Radue, M. J.; Strand, P.; Zhou, W.; Putnam, A. E.

    2017-12-01

    The focus of our team's research is to study patterns of glacier retreat in the Northern and Southern Hemispheres at the end of the last ice age. Our purpose is to search for what caused this great global warming. Such information will improve understanding of how the climate system may respond to the human-induced buildup of fossil carbon dioxide. To reconstruct past glacier behavior, we sample boulders deposited by glaciers to find the rate of ancient recession. Each sample is tested to determine the age of the boulder using 10Be cosmogenic-nuclide dating. My portion of this research focuses on creating 3D models of the sampled boulders. Such high-resolution 3D models afford visual inspection and analysis of each boulder in a virtual reality environment after fieldwork is complete, and these detailed virtual reconstructions will aid post-fieldwork evaluation of sampled boulders and help our team interpret 10Be dating results. For example, a high-resolution model can aid post-fieldwork observations, allowing scientists to determine whether the rock has been covered, eroded, or moved since it was deposited by the glacier but before the sample was collected. A model can also be useful for recognizing patterns between age and boulder morphology. Lastly, the models can be used by those who wish to review the data after publication. To create the 3D models, I will use Hero4 GoPro and Canon PowerShot digital cameras to collect photographs of each boulder from different angles. I will then process the digital imagery using 'structure-from-motion' techniques and Agisoft Photoscan software. All boulder photographs will be synthesized into 3D models at a standardized scale. We will then import these models into an environment that can be accessed using cutting-edge virtual reality technology. By producing a virtual archive of 3D glacial boulder reconstructions, I hope to provide deeper insight into the geological processes influencing these boulders during and since their deposition, and ultimately to improve the methods being used to develop glacial histories on a global scale.
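
    The 10Be dating step has, in its simplest zero-erosion, zero-inheritance form, a closed-form age: N(t) = (P/λ)(1 − e^(−λt)), solved for t. A sketch of that calculation (the concentration and production-rate values below are hypothetical, not the team's measurements):

```python
import math

# 10Be decay constant (1/yr), from a half-life of ~1.387 Myr
BE10_DECAY = math.log(2) / 1.387e6

def exposure_age(concentration, production_rate, decay=BE10_DECAY):
    """Apparent exposure age in years from a 10Be concentration (atoms/g)
    and a local production rate (atoms/g/yr), assuming no erosion and no
    inherited nuclides: N(t) = (P / decay) * (1 - exp(-decay * t)).
    """
    ratio = concentration * decay / production_rate
    if ratio >= 1.0:
        raise ValueError("concentration at or above saturation")
    return -math.log(1.0 - ratio) / decay

# Hypothetical sample: ~60,000 atoms/g at 4 atoms/g/yr gives an age a
# little over N/P = 15,000 yr, since decay is negligible on this timescale.
age = exposure_age(60000.0, 4.0)
```

    Covered, eroded, or overturned boulders violate these assumptions, which is exactly why the 3D models are used to vet each sampled surface after fieldwork.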

  1. Development and Validation of a Novel Robotic Procedure Specific Simulation Platform: Partial Nephrectomy.

    PubMed

    Hung, Andrew J; Shah, Swar H; Dalag, Leonard; Shin, Daniel; Gill, Inderbir S

    2015-08-01

    We developed a novel procedure specific simulation platform for robotic partial nephrectomy. In this study we prospectively evaluate its face, content, construct and concurrent validity. This hybrid platform features augmented reality and virtual reality. Augmented reality involves 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, 15), intermediate (less than 100 robotic cases, 13) or expert (100 or more robotic cases, 14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). Post-study questionnaire was used to assess the realism of simulation (face validity) and usefulness for training (content validity). Concurrent validity evaluated correlation between virtual reality renorrhaphy task and a live porcine robotic partial nephrectomy performance (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10) but moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). For virtual reality renorrhaphy, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p <0.0001) (concurrent validity). This augmented reality simulation platform displayed face, content and construct validity. Performance in the procedure specific virtual reality task correlated highly with a porcine model (concurrent validity). 
Future efforts will integrate procedure specific virtual reality tasks and their global assessment. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
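
    The concurrent-validity step, correlating virtual reality renorrhaphy scores with live porcine performance, uses Spearman's rank correlation. A self-contained sketch with hypothetical scores (not the study's data):

```python
def ranks(values):
    """Rank data from 1, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0   # average 1-based rank of the tie group
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical GEARS-style scores: simulator vs. live porcine performance.
simulator = [78, 65, 82, 70, 88, 60]
porcine = [75, 62, 85, 72, 90, 58]
rho = spearman(simulator, porcine)  # identical rank orders, so rho is 1.0
```

    Rank-based correlation is the natural choice here because skill scores are ordinal and small samples make normality assumptions hard to defend.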

  2. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    PubMed

    Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
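
    The depth-map step rests on standard pinhole back-projection: each pixel (u, v) with depth Z maps to a camera-space point X = (u − cx)Z/fx, Y = (v − cy)Z/fy. A generic sketch (the intrinsics below are placeholders, not a Kinect calibration):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-space 3D points
    using the pinhole model; returns an H x W x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])

# Placeholder intrinsics applied to a small synthetic depth map:
cloud = depth_to_points(np.full((4, 4), 2.0), fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

    A surface reconstructed from such a cloud is what lets virtual shadows fall correctly across real geometry, and what the phantom objects standing in for real objects are built from.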

  3. LivePhantom: Retrieving Virtual World Light Data to Real Environments

    PubMed Central

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera’s position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID:27930663

  4. Alleviating travel anxiety through virtual reality and narrated video technology.

    PubMed

    Ahn, J C; Lee, O

    2013-01-01

    This study presents empirical evidence of the benefit of narrative video clips embedded in virtual reality websites of hotels for relieving travel anxiety. Even though virtual reality functions alone were shown to provide some relief from travel anxiety, a stronger virtual reality website can be built with narrative video clips, clips accompanied by narration about important aspects of the hotel. We posit that these important aspects are (1) the escape route and (2) information about the surrounding neighborhood, both derived from existing research on anxiety disorders as well as travel anxiety. We therefore created one video clip that showed and narrated the escape route from the hotel room, and another that showed and narrated the surrounding neighborhood. We then conducted experiments with this enhanced virtual reality website of a hotel by having human subjects use the website and fill out a questionnaire. The result confirms our hypothesis that there is a statistically significant relationship between the degree of travel anxiety and the psychological relief caused by the use of embedded virtual reality functions with narrative video clips on a hotel website (Tab. 2, Fig. 3, Ref. 26).

  5. Effects of parasagittal meningiomas on intracranial venous circulation assessed by the virtual reality technology.

    PubMed

    Wang, Shousen; Ying, Jianbin; Wei, Liangfeng; Li, Shiqing; Jing, Junjie

    2015-01-01

    This study aimed to investigate the compensatory intracranial venous pathways in parasagittal meningioma (PSM) patients using virtual reality technology. A total of 48 PSM patients (tumor group) and 20 patients with trigeminal neuralgia and hemifacial spasm but without intracranial venous diseases (control group) were enrolled. All patients underwent 3D CE-MRV examination. The 3D images reconstructed by virtual reality technology were used for assessment of the diameter and number of intracranial veins, tumor location, degree of venous sinus invasion and collateral circulation formation. Diameter of bridging veins in posterior 1/3 superior sagittal sinus (SSS) in the tumor group was significantly smaller than that of the control group (P < 0.05). For tumors located in mid 1/3 SSS, diameter of bridging veins and vein of Labbé (VL) in posterior 1/3 SSS decreased significantly (P < 0.05). For tumors located in posterior 1/3 SSS, bridging vein number and transverse sinus (TS) diameter decreased significantly while superficial Sylvian vein (SSV) diameter increased significantly (P < 0.05). Compared with the tumor in posterior 1/3 SSS subgroup, the number of bridging veins in the tumor in mid 1/3 SSS subgroup increased significantly (P < 0.05). Compared with the control group, only the bridging vein number in the anterior 1/3 SSS segment in the invasion Type 3-4 tumor subgroup decreased significantly (P < 0.05). Diameter of TS and bridging veins in the posterior 1/3 SSS segment in the sinus invasion Type 5-6 tumor subgroup decreased significantly (P < 0.05). Compared with the control group, only the diameter of VL and TS in the collateral circulation Grade 1 tumor subgroup decreased significantly (P < 0.05), while in the Grade 3 tumor subgroup, TS diameter decreased and SSV diameter increased significantly (P < 0.05). Intracranial blood is drained mainly through the SSV after SSS occlusion by PSM.

  6. Effects of parasagittal meningiomas on intracranial venous circulation assessed by the virtual reality technology

    PubMed Central

    Wang, Shousen; Ying, Jianbin; Wei, Liangfeng; Li, Shiqing; Jing, Junjie

    2015-01-01

    Objective: This study aimed to investigate the compensatory intracranial venous pathways in parasagittal meningioma (PSM) patients using virtual reality technology. Methods: A total of 48 PSM patients (tumor group) and 20 patients with trigeminal neuralgia and hemifacial spasm but without intracranial venous diseases (control group) were enrolled. All patients underwent 3D CE-MRV examination. The 3D images reconstructed by virtual reality technology were used for assessment of the diameter and number of intracranial veins, tumor location, degree of venous sinus invasion and collateral circulation formation. Results: Diameter of bridging veins in posterior 1/3 superior sagittal sinus (SSS) in the tumor group was significantly smaller than that of the control group (P < 0.05). For tumors located in mid 1/3 SSS, diameter of bridging veins and vein of Labbé (VL) in posterior 1/3 SSS decreased significantly (P < 0.05). For tumors located in posterior 1/3 SSS, bridging vein number and transverse sinus (TS) diameter decreased significantly while superficial Sylvian vein (SSV) diameter increased significantly (P < 0.05). Compared with the tumor in posterior 1/3 SSS subgroup, the number of bridging veins in the tumor in mid 1/3 SSS subgroup increased significantly (P < 0.05). Compared with the control group, only the bridging vein number in the anterior 1/3 SSS segment in the invasion Type 3-4 tumor subgroup decreased significantly (P < 0.05). Diameter of TS and bridging veins in the posterior 1/3 SSS segment in the sinus invasion Type 5-6 tumor subgroup decreased significantly (P < 0.05). Compared with the control group, only the diameter of VL and TS in the collateral circulation Grade 1 tumor subgroup decreased significantly (P < 0.05), while in the Grade 3 tumor subgroup, TS diameter decreased and SSV diameter increased significantly (P < 0.05). Conclusions: Intracranial blood is drained mainly through the SSV after SSS occlusion by PSM. PMID:26550184

  7. Establishment of Next-Generation Neurosurgery Research and Training Laboratory with Integrated Human Performance Monitoring.

    PubMed

    Bernardo, Antonio

    2017-10-01

    Quality of neurosurgical care and patient outcomes are inextricably linked to surgical and technical proficiency and a thorough working knowledge of microsurgical anatomy. Neurosurgical laboratory-based cadaveric training is essential for the development and refinement of technical skills before their use on a living patient. Recent biotechnological advances including 3-dimensional (3D) microscopy and endoscopy, 3D printing, virtual reality, surgical simulation, surgical robotics, and advanced neuroimaging have been shown to reduce the learning curve, improve conceptual understanding of complex anatomy, and enhance visuospatial skills in neurosurgical training. Until recently, few means have allowed surgeons to obtain integrated surgical and technological training in an operating room setting. We report on a new model, currently in use at our institution, for technologically integrated surgical training and innovation using a next-generation microneurosurgery skull base laboratory designed to recreate the setting of a working operating room. Each workstation is equipped with a 3D surgical microscope, 3D endoscope, surgical drills, operating table with a Mayfield head holder, and a complete set of microsurgical tools. The laboratory also houses a neuronavigation system, a surgical robot, a surgical planning system, 3D visualization, virtual reality, and computerized simulation for training of surgical procedures and visuospatial skills. In addition, the laboratory is equipped with neurophysiological monitoring equipment in order to conduct research into human factors in surgery and the effects of workload and fatigue on surgeons' performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Predicting the effectiveness of virtual reality relaxation on pain and anxiety when added to PCA morphine in patients having burns dressings changes.

    PubMed

    Konstantatos, A H; Angliss, M; Costello, V; Cleland, H; Stafrace, S

    2009-06-01

    Pain arising in burns sufferers is often severe and protracted. The prospect of a dressing change can heighten existing pain by impacting both physically and psychologically. In this trial we examined whether pre-procedural virtual reality guided relaxation added to patient controlled analgesia with morphine reduced pain severity during awake dressings changes in burns patients. We conducted a prospective randomized clinical trial in all patients with burns necessitating admission to a tertiary burns referral centre. Eligible patients requiring awake dressings changes were randomly allocated to single use virtual reality relaxation plus intravenous morphine patient controlled analgesia (PCA) infusion or to intravenous morphine patient controlled analgesia infusion alone. Patients rated their worst pain intensity during the dressing change using a visual analogue scale. The primary outcome measure was presence of 30% or greater difference in pain intensity ratings between the groups in estimation of worst pain during the dressing change. Of 88 eligible and consenting patients having awake dressings changes, 43 were assigned to virtual reality relaxation plus intravenous morphine PCA infusion and 43 to morphine PCA infusion alone. The group receiving virtual reality relaxation plus morphine PCA infusion reported significantly higher pain intensities during the dressing change (mean=7.3) compared with patients receiving morphine PCA alone (mean=5.3) (p=0.003) (95% CI 0.6-2.8). The addition of virtual reality guided relaxation to morphine PCA infusion in burns patients resulted in a significant increase in pain experienced during awake dressings changes. In the absence of a validated predictor for responsiveness to virtual reality relaxation such a therapy cannot be recommended for general use in burns patients having awake dressings changes.

  9. Virtual reality simulators and training in laparoscopic surgery.

    PubMed

    Yiannakopoulou, Eugenia; Nikiteas, Nikolaos; Perrea, Despina; Tsigris, Christos

    2015-01-01

    Virtual reality simulators provide basic skills training without supervision in a controlled environment, free of the pressure of operating on patients. Skills obtained through virtual reality simulation training can be transferred to the operating room. However, relevant evidence is limited, with data available only for basic surgical skills and for laparoscopic cholecystectomy. No data exist on the effect of virtual reality simulation on performance of advanced surgical procedures. Evidence suggests that performance on virtual reality simulators reliably distinguishes experienced from novice surgeons. Limited available data suggest that an independent approach to virtual reality simulation training is not different from a proctored approach. The effect of virtual reality simulator training on the acquisition of basic surgical skills does not seem to differ from that of physical simulators. Limited data exist on the effect of virtual reality simulation training on the acquisition of visual-spatial perception and stress-coping skills. Undoubtedly, virtual reality simulation training provides an alternative means of improving performance in laparoscopic surgery. However, future research efforts should focus on the effect of virtual reality simulation on performance in the context of advanced surgical procedures, on standardization of training, on the possible synergistic effect of virtual reality simulation training combined with mental training, and on personalized training. Copyright © 2014 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  10. Frontal Alpha Oscillations and Attentional Control: A Virtual Reality Neurofeedback Study.

    PubMed

    Berger, Anna M; Davelaar, Eddy J

    2018-05-15

    Two competing views about alpha oscillations suggest that cortical alpha reflects either cortical inactivity or cortical processing efficiency. We investigated the role of alpha oscillations in attentional control, as measured with a Stroop task. We used neurofeedback to train 22 participants to increase their level of alpha amplitude. Based on the conflict/control loop theory, we chose to train prefrontal alpha and to focus on the Gratton effect as an index of the deployment of attentional control. We expected an increase or a decrease in the Gratton effect with increasing neural learning, depending on whether frontal alpha oscillations reflect cortical idling or enhanced processing efficiency, respectively. In order to induce variability in neural learning beyond naturally occurring individual differences, we provided half of the participants with feedback on alpha amplitude in a 3-dimensional (3D) virtual reality environment; the other half received feedback in a 2D environment. Our results showed variable neural learning rates, with larger rates in the 3D than in the 2D group, corroborating prior evidence of individual differences in EEG-based learning and of the influence of a virtual environment. Regression analyses revealed a significant association between the learning rate and changes in the deployment of attentional control, with larger learning rates associated with larger decreases in the Gratton effect. This association was not modulated by feedback medium. The study supports the view that frontal alpha oscillations are associated with efficient neurocognitive processing and demonstrates the utility of neurofeedback training in addressing theoretical questions in the non-neurofeedback literature. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  11. Learning Rationales and Virtual Reality Technology in Education.

    ERIC Educational Resources Information Center

    Chiou, Guey-Fa

    1995-01-01

    Defines and describes virtual reality technology and differentiates between virtual learning environment, learning material, and learning tools. Links learning rationales to virtual reality technology to pave conceptual foundations for the application of virtual reality technology in education. Constructivism, case-based learning, problem-based learning,…

  12. A Survey of Mobile and Wireless Technologies for Augmented Reality Systems (Preprint)

    DTIC Science & Technology

    2008-02-01

    Windows XP. A number of researchers have started employing them in AR simulations such as Wagner et al [25], Newman et al [46] and specifically the Sony ...different music clubs and styles of music according to the selection and tastes of the listeners. In the intro sequence the user can select an animated...3-D character (avatar) as his or her virtual persona and visit the different music rooms in the virtual disco. Users can download or stream music in

  13. Emohawk: Searching for a "Good" Emergent Narrative

    NASA Astrophysics Data System (ADS)

    Brom, Cyril; Bída, Michal; Gemrot, Jakub; Kadlec, Rudolf; Plch, Tomáš

    We report on the progress we have achieved in the development of Emohawk, a 3D virtual reality application with an emergent narrative for teaching high-school students and undergraduates the basics of virtual character control, emotion modelling, and narrative generation. In addition, we present a new methodology, used in Emohawk, for the purposeful authoring of emergent narratives of Façade's complexity. The methodology is based on a massive automatic search for stories that are appealing to the audience, while forbidding the unappealing ones during the design phase.

  14. Virtual reality for emergency training

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altinkemer, K.

    1995-12-31

    Virtual reality is a sequence of scenes generated by a computer as a response to the five different senses: sight, sound, taste, touch, and smell. Other senses that can be used in virtual reality include balance, pheromonal, and immunological senses. Application areas include leisure and entertainment, medicine, architecture, engineering, manufacturing, and training. Virtual reality is especially important when it is used for emergency training and for the management of natural disasters, including earthquakes, floods, tornadoes, and other situations that are hard to emulate. Classical training methods for these extraordinary environments lack the realistic surroundings that virtual reality can provide. For virtual reality to be a successful training tool, the design needs to address certain aspects, such as how realistic the virtual reality should be and how much fixed cost is entailed in setting up the virtual reality trainer. There are also pricing questions regarding the price per training session on the virtual reality trainer and the appropriate training time length(s).

  15. Population-based respiratory 4D motion atlas construction and its application for VR simulations of liver punctures

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Wilms, Matthias; Handels, Heinz

    2018-03-01

    Virtual reality (VR) training simulators of liver needle insertion in the hepatic area of breathing virtual patients often require 4D image data acquisitions as a prerequisite. Here, first, a population-based 4D atlas of breathing virtual patients is built; second, the requirement of a dose-relevant or expensive 4D CT or MRI acquisition for a new patient can be mitigated by warping the mean atlas motion. The breakthrough contribution of this work is the construction and reuse of population-based, learned 4D motion models.
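
    The averaging-and-reuse idea behind such a motion atlas can be sketched as follows. The array shapes and the linear atlas-to-patient mapping are illustrative assumptions, not the authors' pipeline (which would rely on dense deformable registration):

```python
import numpy as np

def build_mean_motion_model(fields):
    """Average corresponding breathing-phase displacement fields.

    fields: array (n_subjects, n_phases, nx, ny, nz, 3) of per-phase
    displacement vectors, already mapped into a common reference
    space by inter-subject registration.
    """
    return fields.mean(axis=0)

def warp_to_new_patient(mean_field, patient_transform):
    """Map the atlas-space mean motion into a new patient's space.

    patient_transform: 3x3 linear part of the atlas-to-patient
    registration (an illustrative simplification).
    """
    return np.einsum('ij,pxyzj->pxyzi', patient_transform, mean_field)

# Toy example: 4 subjects, 3 breathing phases, a tiny 2x2x2 grid.
rng = np.random.default_rng(0)
fields = rng.standard_normal((4, 3, 2, 2, 2, 3))
mean_field = build_mean_motion_model(fields)
patient_motion = warp_to_new_patient(mean_field, np.eye(3))
print(patient_motion.shape)
```

    This mirrors the stated goal: once the mean model exists, a new patient needs only a single static scan plus a registration, not a full 4D acquisition.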

  16. Integration of the virtual model of a Stewart platform with the avatar of a vehicle in a virtual reality

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2016-08-01

    The development of methods of computer-aided design and engineering allows conducting virtual tests, among others concerning motion simulation of technical means. The paper presents a method of integrating an object in the form of a virtual model of a Stewart platform with an avatar of a vehicle moving in a virtual environment. The problem area includes issues related to the fidelity with which the work of the analyzed technical means is mapped. The main object of investigation is a 3D model of a Stewart platform, which is a subsystem of a simulator designed to teach driving to disabled persons. The analyzed model of the platform, prepared for motion simulation, was created in the “Motion Simulation” module of the CAD/CAE-class system Siemens PLM NX, whereas the virtual environment in which the avatar of the passenger car moves was elaborated in the VR-class system EON Studio. The element integrating both of the mentioned software environments is a developed application that reads information from the virtual reality (VR) concerning the current position of the car avatar. Then, based on the accepted algorithm, it sends control signals to the respective joints of the model of the Stewart platform (CAD).
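
    The coupling described above (avatar pose in, joint commands out) can be sketched with the standard Stewart-platform inverse kinematics: each actuator length is the distance between its base anchor and the pose-transformed platform anchor. The anchor layout and pose values below are illustrative assumptions, not data from the paper:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Z-Y-X Euler rotation (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def leg_lengths(base_pts, plat_pts, translation, rpy):
    """Inverse kinematics: commanded length for each of the six legs."""
    R = rotation_matrix(*rpy)
    world_plat = (R @ plat_pts.T).T + translation
    return np.linalg.norm(world_plat - base_pts, axis=1)

# Illustrative hexagonal anchor layout (metres), not from the paper.
angles = np.deg2rad([0, 60, 120, 180, 240, 300])
base = np.c_[1.0 * np.cos(angles), 1.0 * np.sin(angles), np.zeros(6)]
plat = np.c_[0.5 * np.cos(angles), 0.5 * np.sin(angles), np.zeros(6)]

# Avatar pose read from the VR system: small roll/pitch plus heave.
lengths = leg_lengths(base, plat, np.array([0.0, 0.0, 1.2]), (0.05, -0.03, 0.0))
print(lengths)  # six commanded actuator lengths
```

    In a real integration the pose would be streamed from EON Studio each frame and the six lengths sent to the platform joints in the NX motion model.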

  17. Virtual Reality and the Virtual Library.

    ERIC Educational Resources Information Center

    Oppenheim, Charles

    1993-01-01

    Explains virtual reality, including proper and improper uses of the term, and suggests ways that libraries might be affected by it. Highlights include elements of virtual reality systems; possible virtual reality applications, including architecture, the chemical industry, transport planning, armed forces, and entertainment; and the virtual…

  18. Augmenting the access grid using augmented reality

    NASA Astrophysics Data System (ADS)

    Li, Ying

    2012-01-01

    The Access Grid (AG) targets an advanced collaboration environment with which multi-party groups of people from remote sites can collaborate over high-performance networks. However, the current AG still employs VIC (Video Conferencing Tool) to offer only pure video for remote communication, while most AG users expect to collaboratively refer to and manipulate 3D geometric models of grid services' results within the live videos of an AG session. Augmented Reality (AR) techniques can overcome these deficiencies through their characteristic combination of the virtual and the real, real-time interaction, and 3D registration, so it is natural for the AG to utilize AR to better support the advanced collaboration environment. This paper introduces an effort to augment the AG by adding support for AR capability, which is encapsulated in the node service infrastructure and named the Augmented Reality Service (ARS). The ARS can merge the 3D geometric models of grid services' results and the real video scene of the AG into one AR environment, and it provides the opportunity for distributed AG users to interactively and collaboratively participate in the AR environment with a better experience.

  19. Virtual Reality

    DTIC Science & Technology

    1993-04-01

    VIRTUAL REALITY. James F. Dailey, Lieutenant Colonel...US. This paper reviews the exciting field of virtual reality. The author describes the basic concepts of virtual reality and finds that its numerous potential benefits to society could revolutionize everyday life. The various components that make up a virtual reality system are described in detail

  20. Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-time Haptic Feedback

    PubMed Central

    Alaraj, Ali; Luciano, Cristian J.; Bailey, Daniel P.; Elsenousi, Abdussalam; Roitberg, Ben Z.; Bernardo, Antonio; Banerjee, P. Pat; Charbel, Fady T.

    2014-01-01

    Background With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. Objective To develop and evaluate the usefulness of a new haptic-based virtual reality (VR) simulator in the training of neurosurgical residents. Methods A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the Immersive Touch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomography angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-D immersive VR environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from three residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Results Residents felt that the simulation would be useful in preparing for real-life surgery. About two thirds of the residents felt that the 3-D immersive anatomical details provided a very close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They believed the simulation is useful for preoperative surgical rehearsal and neurosurgical training. One third of the residents felt that the technology in its current form provided very realistic haptic feedback for aneurysm surgery. Conclusion Neurosurgical residents felt that the novel immersive VR simulator is helpful in their training especially since they do not get a chance to perform aneurysm clippings until very late in their residency programs. PMID:25599200

  1. Stroke and neurodegenerative disorders. 3. Stroke: rehabilitation management.

    PubMed

    Bogey, Ross A; Geis, Carolyn C; Bryant, Phillip R; Moroz, Alex; O'neill, Bryan J

    2004-03-01

    This self-directed learning module highlights common rehabilitation issues in stroke survivors. Topics include spasticity, constraint-induced movement therapy, partial body weight-supported treadmill training, virtual reality training, vestibular retraining, aphasia treatment, and cognitive retraining. It is part of the study chapter on stroke and neurodegenerative disorders in the Self-Directed Physiatric Education Program for practitioners and trainees in physical medicine and rehabilitation. (a) To identify and review the treatment options for poststroke spasticity; (b) to review the use of body weight-supported treadmill training in stroke patients; (c) to describe virtual reality training as an adjunct in stroke rehabilitation; (d) to review vestibular rehabilitation; (e) to discuss advances in aphasia treatment; (f) to discuss cognitive retraining; and (g) to provide an update on treatment of neglect syndromes.

  2. An integrated pipeline to create and experience compelling scenarios in virtual reality

    NASA Astrophysics Data System (ADS)

    Springer, Jan P.; Neumann, Carsten; Reiners, Dirk; Cruz-Neira, Carolina

    2011-03-01

    One of the main barriers to creating and using compelling scenarios in virtual reality is the complexity and time-consuming effort of modeling, element integration, and the software development needed to properly display and interact with the content on the available systems. Still today, most virtual reality applications are tedious to create, and they are hard-wired to the specific display and interaction system available to the developers when the application was created. Furthermore, it is not possible to alter the content or the dynamics of the content once the application has been created. We present our research on designing a software pipeline that enables the creation of compelling scenarios with a fair degree of visual and interaction complexity in a semi-automated way. Specifically, we are targeting drivable urban scenarios, ranging from large cities to sparsely populated rural areas, that incorporate both static components (e.g., houses, trees) and dynamic components (e.g., people, vehicles) as well as events, such as explosions or ambient noise. Our pipeline has four basic components. First, an environment designer, where users sketch the overall layout of the scenario and an automated method constructs the 3D environment from the information in the sketch. Second, a scenario editor used for authoring the complete scenario: incorporating the dynamic elements and events, fine-tuning the automatically generated environment, defining the execution conditions of the scenario, and setting up any data gathering that may be necessary during the execution of the scenario. Third, a run-time environment for different virtual-reality systems that provides users with the interactive experience as designed with the designer and the editor. And fourth, a bi-directional monitoring system that allows for capturing and modification of information from the virtual environment.
One of the interesting capabilities of our pipeline is that scenarios can be built and modified on-the-fly as they are being presented in the virtual-reality systems. Users can quickly prototype the basic scene using the designer and the editor on a control workstation. More elements can then be introduced into the scene from both the editor and the virtual-reality display. In this manner, users are able to gradually increase the complexity of the scenario with immediate feedback. The main use of this pipeline is the rapid development of scenarios for human-factors studies. However, it is applicable in a much more general context.

  3. An Introduction to 3-D Sound

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    This talk will overview the basic technologies related to the creation of virtual acoustic images and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head-Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it is argued that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model.

  4. Virtual reality for treatment compliance for people with serious mental illness.

    PubMed

    Välimäki, Maritta; Hätönen, Heli M; Lahti, Mari E; Kurki, Marjo; Hottinen, Anja; Metsäranta, Kiki; Riihimäki, Tanja; Adams, Clive E

    2014-10-08

    Virtual reality (VR) is a computerised real-time technology which can be used as an alternative assessment and treatment tool in the mental health field. Virtual reality may take different forms to simulate real-life activities and support treatment. To investigate the effects of virtual reality in supporting treatment compliance in people with serious mental illness, we searched the Cochrane Schizophrenia Group Trials Register (most recent search, 17 September 2013) and relevant reference lists. We included all relevant randomised studies comparing virtual reality with standard care for those with serious mental illnesses. We defined virtual reality as a computerised real-time technology using graphics, sound and other sensory input, which creates an interactive computer-mediated world as a therapeutic tool. All review authors independently selected studies and extracted data. For homogeneous dichotomous data, the risk difference (RD) and the 95% confidence intervals (CI) were calculated on an intention-to-treat basis. For continuous data, we calculated mean differences (MD). We assessed risk of bias and created a 'Summary of findings' table using the GRADE approach. We identified three short-term trials (total of 156 participants, duration five to 12 weeks). Outcomes were prone to at least a moderate risk of overestimating positive effects. We found that virtual reality had little effect on compliance (3 RCTs, n = 156, RD loss to follow-up 0.02, CI -0.08 to 0.12, low quality evidence), cognitive functioning (1 RCT, n = 27, MD average score on Cognistat 4.67, CI -1.76 to 11.10, low quality evidence), social skills (1 RCT, n = 64, MD average score on the Social Problem Solving Inventory - Revised (SPSI-R) -2.30, CI -8.13 to 3.53, low quality evidence), or acceptability of the intervention (2 RCTs, n = 92, RD 0.05, CI -0.09 to 0.19, low quality evidence). There were no data reported on mental state, insight, behaviour, quality of life, costs, service utilisation, or adverse effects. Satisfaction with treatment - measured using an un-referenced scale - and reported as "interest in training" was better for the virtual reality group (1 RCT, n = 64, MD 6.00, CI 1.39 to 10.61, low quality evidence). There is no clear good-quality evidence for or against using virtual reality for treatment compliance among people with serious mental illness. If virtual reality is used, the experimental nature of the intervention should be clearly explained. High-quality studies should be undertaken in this area to explore any effects of this novel intervention and variations of approach.
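
    The pooled effect measures quoted above follow the standard formulas for a risk difference with a normal-approximation confidence interval. A minimal sketch, using hypothetical counts rather than the review's data:

```python
import math

def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference between two arms with a 95% Wald CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, (rd - z * se, rd + z * se)

# Hypothetical loss-to-follow-up counts in two trial arms.
rd, (lo, hi) = risk_difference(8, 80, 6, 76)
print(f"RD = {rd:.3f}, 95% CI {lo:.3f} to {hi:.3f}")
```

    A CI that straddles zero, as in the review's RD 0.02 (CI -0.08 to 0.12), is what underlies the "little effect" conclusion.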

  5. Virtual reality and the unfolding of higher dimensions

    NASA Astrophysics Data System (ADS)

    Aguilera, Julieta C.

    2006-02-01

    As virtual/augmented reality evolves, the need for spaces that are responsive to structures independent of three-dimensional spatial constraints becomes apparent. The visual medium of computer graphics may also challenge these self-imposed constraints. If one can get used to how projections affect 3D objects in two dimensions, it may also be possible to compose a situation in which to get used to the variations that occur while moving through higher dimensions. The presented application is an enveloping landscape of concave and convex forms, which are determined by the orientation and displacement of the user in relation to a grid made of tesseracts (cubes in four dimensions). The interface accepts input from three-dimensional and four-dimensional transformations and smoothly displays such interactions in real time. The motion of the user becomes the graphic element, whereas the higher-dimensional grid references his/her position relative to it. The user learns how motion inputs affect the grid, recognizing a correlation between the input and the transformations. Mapping information to complex grids in virtual reality is valuable for engineers, artists, and users in general because navigation can be internalized like a dance pattern, further engaging us to maneuver space in order to know and experience it.
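
    Rendering a tesseract grid requires projecting 4-D vertices down to the 3-D display. A common minimal scheme, sketched here as a generic illustration rather than the author's implementation, is a rotation in a 4-D plane followed by a perspective divide along the fourth coordinate:

```python
import itertools
import math

def tesseract_vertices():
    """The 16 vertices of the unit tesseract, centred at the origin."""
    return [tuple(c) for c in itertools.product((-0.5, 0.5), repeat=4)]

def rotate_xw(v, theta):
    """Rotate a 4-D point in the x-w plane."""
    x, y, z, w = v
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * w, y, z, s * x + c * w)

def project_to_3d(v, camera_w=2.0):
    """Perspective projection 4D -> 3D: scale by distance along w."""
    x, y, z, w = v
    f = camera_w / (camera_w - w)
    return (f * x, f * y, f * z)

pts = [project_to_3d(rotate_xw(v, 0.3)) for v in tesseract_vertices()]
print(len(pts), pts[0])
```

    With no rotation, the two w-slices project to nested inner and outer cubes; animating the x-w rotation produces the familiar "unfolding" motion the user learns to read.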

  6. Virtual reality technology prevents accidents in extreme situations

    NASA Astrophysics Data System (ADS)

    Badihi, Y.; Reiff, M. N.; Beychok, S.

    2012-03-01

    This research is aimed at examining the added value of using Virtual Reality (VR) in a driving simulator to prevent road accidents, specifically by improving drivers' skills when confronted with extreme situations. In an experiment, subjects completed a driving scenario using two platforms: a 3-D Virtual Reality display system using an HMD (Head-Mounted Display), and a standard computerized display system based on a standard computer monitor. The results show that the average rate of errors (deviating from the driving path) in the VR environment is significantly lower than in the standard one. In addition, there was no trade-off between speed and accuracy in completing the driving mission. On the contrary, the average speed was even slightly faster in the VR simulation than in the standard environment. Thus, the lower rate of deviation in the VR setting is generally not achieved by driving more slowly. When the subjects were asked about their personal experiences of the training session, most responded that, among other things, the VR session gave them a higher sense of commitment to the task and their performance. Some even stated that the VR session gave them a real sensation of driving.

  7. Training Surgical Residents With a Haptic Robotic Central Venous Catheterization Simulator.

    PubMed

    Pepley, David F; Gordon, Adam B; Yovanoff, Mary A; Mirkin, Katelin A; Miller, Scarlett R; Han, David C; Moore, Jason Z

    Ultrasound-guided central venous catheterization (CVC) is a common surgical procedure with complication rates ranging from 5 to 21 percent. Training is typically performed using manikins that do not simulate anatomical variations such as obesity and abnormal vessel positioning. The goal of this study was to develop and validate the effectiveness of a new virtual reality and force-haptic based simulation platform for CVC of the right internal jugular vein. A CVC simulation platform was developed using a haptic robotic arm, a 3D position tracker, and computer visualization. The haptic robotic arm simulated needle insertion force based on cadaver experiments. The 3D position tracker was used as a mock ultrasound device with realistic visualization on a computer screen. Upon completion of a practice simulation, performance feedback is given to the user through a graphical user interface, including scoring factors based on good CVC practice. The effectiveness of the system was evaluated by training 13 first-year surgical residents on the virtual reality haptic-based training system over a 3-month period. The participants' performance increased from 52% to 96% on the baseline training scenario, approaching the average score of an expert surgeon: 98%. This also resulted in improvement in positive CVC practices, including a 61% decrease in the distance between the final needle tip position and the vein center, a decrease in mean insertion attempts from 1.92 to 1.23, and a 12% increase in time spent aspirating the syringe throughout the procedure. A virtual reality haptic robotic simulator for CVC was successfully developed. Surgical residents training on the simulation improved to near-expert levels after three robotic training sessions. This suggests that this system could act as an effective training device for CVC. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  8. Have 3D, Will Travel

    ERIC Educational Resources Information Center

    Duncan, Mike R.; Birrell, Bob; Williams, Toni

    2005-01-01

    Virtual Reality (VR) is primarily a visual technology. Elements such as haptics (touch feedback) and sound can augment an experience, but the visual cues are the prime driver of what an audience will experience from a VR presentation. At its inception in 2001 the Centre for Advanced Visualization (CFAV) at Niagara College of Arts and Technology…

  9. Eye-tracking and EMG supported 3D Virtual Reality - an integrated tool for perceptual and motor development of children with severe physical disabilities: a research concept.

    PubMed

    Pulay, Márk Ágoston

    2015-01-01

    Enabling children with severe physical disabilities (such as tetraparesis spastica) to gain relevant motional experiences of appropriate quality and quantity is now the greatest challenge for us in the field of neurorehabilitation. These motional experiences may establish many cognitive processes; their lack may also cause additional secondary cognitive dysfunctions, such as disorders in body image, figure invariance, visual perception, auditory differentiation, concentration, analytic and synthetic ways of thinking, visual memory, etc. Virtual reality is a technology that provides a sense of presence in a real environment with the help of 3D pictures and animations formed in a computer environment, enabling the person to interact with the objects in that environment. One of our biggest challenges is to find a well-suited input device (hardware) to let children with severe physical disabilities interact with the computer. Based on our own experiences and a thorough literature review, we have come to the conclusion that an effective combination of eye-tracking and EMG devices should work well.

  10. Virtual Reality Simulation of the International Space Welding Experiment

    NASA Technical Reports Server (NTRS)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) It involves 3-dimensional computer graphics; (2) It includes real-time feedback and response to user actions; and (3) It must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, and we have therefore implemented a VR trainer for the International Space Welding Experiment. My role in the development of the ISWE trainer consisted of the following: (1) created texture-mapped models of the ISWE's rotating sample drum, technology block, tool stowage assembly, sliding foot restraint, and control panel; (2) developed C code for control panel button selection and rotation of the sample drum; (3) In collaboration with Tim Clark (Antares Virtual Reality Systems), developed a serial interface box for the PC and the SGI Indigo so that external control devices, similar to ones actually used on the ISWE, could be used to control virtual objects in the ISWE simulation; (4) In collaboration with Peter Wang (SFFP) and Mark Blasingame (Boeing), established the interference characteristics of the VIM 1000 head-mounted-display and tested software filters to correct the problem; (5) In collaboration with Peter Wang and Mark Blasingame, established software and procedures for interfacing the VPL DataGlove and the Polhemus 6DOF position sensors to the SGI Indigo serial ports. The majority of the ISWE modeling effort was conducted on a PC-based VR Workstation, described below.

  11. Graph theoretical analysis of EEG functional network during multi-workload flight simulation experiment in virtual reality environment.

    PubMed

    Shengqian Zhang; Yuan Zhang; Yu Sun; Thakor, Nitish; Bezerianos, Anastasios

    2017-07-01

    Mental workload has attracted abundant research interest because it plays a crucial role in real-life performance and safety. Previous studies have examined the neural correlates of mental workload in 2D scenarios (i.e., presenting stimuli on a computer screen (CS)) using univariate methods (e.g., EEG channel power), but it remains unclear what a multivariate, graph-theoretical approach would reveal, and what the effects of a 3D environment (i.e., presenting stimuli in virtual reality (VR)) would be. In this study, twenty subjects underwent flight simulation in both the CS and the VR environments, with three workload stages each. After preprocessing of the electroencephalogram (EEG) signals, a connectivity matrix based on the Phase Lag Index (PLI) was constructed. Graph theory analysis was then applied to global efficiency, local efficiency and nodal efficiency in both the alpha and theta bands. For global and local efficiency, VR values were generally lower than CS values in both bands. For nodal efficiency, the regions showing at least marginally significant decreases differed considerably between CS and VR. These findings suggest that 3D simulation induces a higher mental workload than 2D simulation and that the two engage different brain regions.
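
    The pipeline described above (PLI connectivity followed by graph-efficiency metrics) can be sketched as follows. This is a minimal illustration of the standard PLI and global-efficiency definitions, not the authors' code; band filtering, thresholding, and local/nodal efficiency are omitted.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based Hilbert transform (assumes an even-length real signal)
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def phase_lag_index(data):
    # data: (channels, samples); returns a symmetric PLI matrix,
    # PLI_ij = |mean(sign(sin(phi_i - phi_j)))|
    phases = np.angle(np.array([analytic_signal(ch) for ch in data]))
    n = data.shape[0]
    pli = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phases[i] - phases[j]
            pli[i, j] = pli[j, i] = abs(np.mean(np.sign(np.sin(dphi))))
    return pli

def global_efficiency(w):
    # treat edge length as 1/weight; shortest paths via Floyd-Warshall,
    # efficiency = mean of 1/d over all node pairs
    n = w.shape[0]
    d = np.where(w > 0, 1.0 / np.where(w > 0, w, 1.0), np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    off = d[~np.eye(n, dtype=bool)]
    return np.mean(1.0 / off)
```

Two synthetic channels with a fixed phase lag yield a PLI near 1, and a fully connected unit-weight graph has a global efficiency of exactly 1.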

  12. Virtual Reality and Its Potential Application in Education and Training.

    ERIC Educational Resources Information Center

    Milheim, William D.

    1995-01-01

    An overview is provided of current trends in virtual reality research and development, including discussion of hardware, types of virtual reality, and potential problems with virtual reality. Implications for education and training are explored. (Author/JKP)

  13. A Succinct Overview of Virtual Reality Technology Use in Alzheimer’s Disease

    PubMed Central

    García-Betances, Rebeca I.; Arredondo Waldmeyer, María Teresa; Fico, Giuseppe; Cabrera-Umpiérrez, María Fernanda

    2015-01-01

    We provide a brief review and appraisal of recent and current virtual reality (VR) technology for Alzheimer’s disease (AD) applications. We categorize them according to their intended purpose (e.g., diagnosis, patient cognitive training, caregivers’ education, etc.), focus feature (e.g., spatial impairment, memory deficit, etc.), methodology employed (e.g., tasks, games, etc.), immersion level, and passive or active interaction. Critical assessment indicates that most of them do not yet take full advantage of virtual environments with high levels of immersion and interaction. Many still rely on conventional 2D graphic displays to create non-immersive or semi-immersive VR scenarios. Important improvements are needed to make VR a better and more versatile assessment and training tool for AD. The use of the latest display technologies available, such as emerging head-mounted displays and 3D smart TV technologies, together with realistic multi-sensorial interaction devices, and neuro-physiological feedback capacity, are some of the most beneficial improvements this mini-review suggests. Additionally, it would be desirable that such VR applications for AD be easily and affordably transferable to in-home and nursing home environments. PMID:26029101

  14. Hysteroscopic simulator for training and educational purposes.

    PubMed

    Lim, Fabian; Brown, Ian; McColl, Ryan; Seligman, Cory; Alsaraira, Amer

    2006-01-01

    Hysteroscopy is a widely used option for evaluating and treating women with infertility. The procedure utilizes an endoscope, inserted through the vagina and cervix, to examine the intra-uterine cavity via a monitor. The difficulty of hysteroscopy from the surgeon's perspective is the visual spatial perception of interpreting 3D images on a 2D monitor, and the associated psychomotor skills in overcoming the fulcrum effect. Despite the widespread use of this procedure, current qualified hysteroscopy surgeons have not been trained in the fundamentals through an organized curriculum. The emergence of virtual reality as an educational tool for this procedure, and for other endoscopic procedures, has undoubtedly raised interest. The ultimate objective is the inclusion of virtual reality training as a mandatory component of gynecological endoscopic training. Part of this process involves the design of a simulator encompassing the technical difficulties and complications associated with the procedure. The proposed research examines fundamental hysteroscopic factors as well as current training and accreditation norms, and proposes a hysteroscopic simulator design that is suitable for education and training.

  15. A Virtual Reality Visualization Tool for Neuron Tracing

    PubMed Central

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Angelucci, Alessandra; Pascucci, Valerio

    2017-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists. PMID:28866520

  16. A visual graphic/haptic rendering model for hysteroscopic procedures.

    PubMed

    Lim, Fabian; Brown, Ian; McColl, Ryan; Seligman, Cory; Alsaraira, Amer

    2006-03-01

    Hysteroscopy is a widely used option for evaluating and treating women with infertility. The procedure utilises an endoscope, inserted through the vagina and cervix, to examine the intra-uterine cavity via a monitor. The difficulty of hysteroscopy from the surgeon's perspective is the visual spatial perception of interpreting 3D images on a 2D monitor, and the associated psychomotor skills in overcoming the fulcrum effect. Despite the widespread use of this procedure, current qualified hysteroscopy surgeons have not been trained in the fundamentals through an organised curriculum. The emergence of virtual reality as an educational tool for this procedure, and for other endoscopic procedures, has undoubtedly raised interest. The ultimate objective is the inclusion of virtual reality training as a mandatory component of gynaecologic endoscopy training. Part of this process involves the design of a simulator encompassing the technical difficulties and complications associated with the procedure. The proposed research examines fundamental hysteroscopy factors and current training and accreditation, and proposes a hysteroscopic simulator design that is suitable for education and training.

  17. A Virtual Reality Visualization Tool for Neuron Tracing.

    PubMed

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Yarch, Jeff; Angelucci, Alessandra; Pascucci, Valerio

    2018-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.

  18. An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training.

    PubMed

    Loukas, Constantinos; Lahanas, Vasileios; Georgiou, Evangelos

    2013-12-01

    Despite the popular use of virtual and physical reality simulators in laparoscopic training, the educational potential of augmented reality (AR) has not received much attention. A major challenge is the robust tracking and three-dimensional (3D) pose estimation of the endoscopic instrument, which are essential for achieving interaction with the virtual world and for realistic rendering when the virtual scene is occluded by the instrument. In this paper we propose a method that addresses these issues, based solely on visual information obtained from the endoscopic camera. Two different tracking algorithms are combined for estimating the 3D pose of the surgical instrument with respect to the camera. The first tracker creates an adaptive model of a colour strip attached to the distal part of the tool (close to the tip). The second algorithm tracks the endoscopic shaft, using a combined Hough-Kalman approach. The 3D pose is estimated with perspective geometry, using appropriate measurements extracted by the two trackers. The method has been validated on several complex image sequences for its tracking efficiency, pose estimation accuracy and applicability in AR-based training. Using a standard endoscopic camera, the absolute average error of the tip position was 2.5 mm for working distances commonly found in laparoscopic training. The average error of the instrument's angle with respect to the camera plane was approximately 2°. The results are also supplemented by video segments of laparoscopic training tasks performed in a physical and an AR environment. The experiments yielded promising results regarding the potential of applying AR technologies for laparoscopic skills training, based on a computer vision framework. The issue of occlusion handling was adequately addressed. The estimated trajectory of the instruments may also be used for surgical gesture interpretation and assessment. Copyright © 2013 John Wiley & Sons, Ltd.
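
    The Kalman component of the shaft tracker described above can be illustrated with a generic constant-velocity filter smoothing 2D tip measurements. This is a textbook sketch under assumed process/measurement noise values, not the authors' combined Hough-Kalman implementation, and the Hough line detection and 3D pose-from-perspective steps are not shown.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter for a tracked image point."""

    def __init__(self, q=1e-3, r=1.0):
        self.x = np.zeros(4)                      # state: [px, py, vx, vy]
        self.P = np.eye(4) * 10.0                 # initial uncertainty
        self.F = np.eye(4)                        # constant-velocity dynamics
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.zeros((2, 4))                 # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q                    # process noise (assumption)
        self.R = np.eye(2) * r                    # measurement noise (assumption)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with measurement z = [px, py]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Fed a sequence of measurements along a straight line, the filter's position estimate converges to the true trajectory.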

  19. A Virtual Reality-Based Simulation of Abdominal Surgery

    DTIC Science & Technology

    1994-06-30

    SHORT TITLE: A Virtual Reality-Based Simulation of Abdominal Surgery. REPORTING PERIOD: October 31, 1993-June 30, 1994. ...Report - A Virtual Reality-Based Simulation of Abdominal Surgery, June 21, 1994. TECHNICAL REPORT SUMMARY: Virtual Reality is a marriage between...applications of this technology. Virtual reality systems can be used to teach surgical anatomy, diagnose surgical problems, plan operations, simulate and

  20. Amplified Head Rotation in Virtual Reality and the Effects on 3D Search, Training Transfer, and Spatial Orientation.

    PubMed

    Ragan, Eric D; Scerbo, Siroberto; Bacim, Felipe; Bowman, Doug A

    2017-08-01

    Many types of virtual reality (VR) systems allow users to use natural, physical head movements to view a 3D environment. In some situations, such as when using systems that lack a fully surrounding display or when opting for convenient low-effort interaction, view control can be enabled through a combination of physical and virtual turns to view the environment, but the reduced realism could potentially interfere with the ability to maintain spatial orientation. One solution to this problem is to amplify head rotations such that smaller physical turns are mapped to larger virtual turns, allowing trainees to view the entire surrounding environment with small head movements. This solution is attractive because it allows semi-natural physical view control rather than requiring complete physical rotations or a fully-surrounding display. However, the effects of amplified head rotations on spatial orientation and many practical tasks are not well understood. In this paper, we present an experiment that evaluates the influence of amplified head rotation on 3D search, spatial orientation, and cybersickness. In the study, we varied the amount of amplification and also varied the type of display used (head-mounted display or surround-screen CAVE) for the VR search task. By evaluating participants first with amplification and then without, we were also able to study training transfer effects. The findings demonstrate the feasibility of using amplified head rotation to view 360 degrees of virtual space, but noticeable problems were identified when using high amplification with a head-mounted display. In addition, participants were able to more easily maintain a sense of spatial orientation when using the CAVE version of the application, which suggests that visibility of the user's body and awareness of the CAVE's physical environment may have contributed to the ability to use the amplification technique while keeping track of orientation.
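
    The amplification mapping itself is simple: virtual yaw is physical yaw multiplied by the amplification factor, wrapped to the ±180° range, so that with a factor of 2 a ±90° physical turn covers the full 360° of virtual space. A minimal sketch (the wrap convention is an assumption; the paper does not specify one):

```python
def amplified_yaw(physical_yaw_deg, amplification=2.0):
    # map physical head yaw to virtual yaw, wrapped to [-180, 180)
    virtual = physical_yaw_deg * amplification
    return ((virtual + 180.0) % 360.0) - 180.0
```

For example, with amplification 2.0 a 45° physical turn yields a 90° virtual turn, and a 100° physical turn wraps around to -160°.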

  1. Augmented reality-guided artery-first pancreatico-duodenectomy.

    PubMed

    Marzano, Ettore; Piardi, Tullio; Soler, Luc; Diana, Michele; Mutter, Didier; Marescaux, Jacques; Pessaux, Patrick

    2013-11-01

    Augmented Reality (AR) in surgery consists of the fusion of synthetic computer-generated images (a 3D virtual model), obtained from the preoperative medical imaging work-up, with real-time patient images, with the aim of visualizing unapparent anatomical details. The potential of AR navigation as a tool to improve the safety of surgical dissection is presented in a case of pancreatico-duodenectomy (PD). A 77-year-old male patient underwent an AR-assisted PD. The 3D virtual anatomical model was obtained from a thoraco-abdominal CT scan using custom software (VR-RENDER®, IRCAD). The virtual model was superimposed onto the operative field using an Exoscope (VITOM®, Karl Storz, Tuttlingen, Germany) and different visible landmarks (inferior vena cava, left renal vein, aorta, superior mesenteric vein, inferior margin of the pancreas). A computer scientist manually registered virtual and real images in real time using a video mixer (MX 70; Panasonic, Secaucus, NJ). Dissection of the superior mesenteric artery and the hanging maneuver were performed under AR guidance along the hanging plane. AR allowed precise and safe recognition of all the important vascular structures. Operative time was 360 min. AR display and fine registration were performed within 6 min. The postoperative course was uneventful. The pathology was positive for ampullary adenocarcinoma; the final stage was pT1N0 (0/43 retrieved lymph nodes) with clear surgical margins. AR is a valuable navigation tool that can enhance the ability to achieve a safe surgical resection during PD.

  2. Virtual reality and interactive 3D as effective tools for medical training.

    PubMed

    Webb, George; Norcliffe, Alex; Cannings, Peter; Sharkey, Paul; Roberts, Dave

    2003-01-01

    CAVE-like displays allow a user to walk into a virtual environment and use natural movement to change the viewpoint of virtual objects, which they can manipulate with a hand-held device. This maps well to many surgical procedures, offering strong potential for training and planning. These devices may be networked together, allowing geographically remote users to share the interactive experience, which maps to the strong need for distance training and planning among surgeons. Our paper shows how the properties of a CAVE-like facility can be maximised in order to provide an ideal environment for medical training. The implementation of a large 3D eye is described. The resulting application is an eye that can be manipulated and examined by trainee medics under the guidance of a medical expert. The progression and effects of different ailments can be illustrated, and corrective procedures demonstrated.

  3. Augmented Reality Technology Using Microsoft HoloLens in Anatomic Pathology.

    PubMed

    Hanna, Matthew G; Ahmed, Ishtiaque; Nine, Jeffrey; Prajapati, Shyam; Pantanowitz, Liron

    2018-05-01

    Context: Augmented reality (AR) devices such as the Microsoft HoloLens have not been well used in the medical field. Objective: To test the HoloLens for clinical and nonclinical applications in pathology. Design: A Microsoft HoloLens was tested for virtual annotation during autopsy, viewing 3D gross and microscopic pathology specimens, navigating whole slide images, telepathology, as well as real-time pathology-radiology correlation. Results: Pathology residents performing an autopsy while wearing the HoloLens were remotely instructed with real-time diagrams, annotations, and voice instruction. 3D-scanned gross pathology specimens could be viewed as holograms and easily manipulated. Telepathology was supported during gross examination and at the time of intraoperative consultation, allowing users to remotely access a pathologist for guidance and to virtually annotate areas of interest on specimens in real time. The HoloLens permitted radiographs to be coregistered on gross specimens and thereby enhanced locating important pathologic findings. The HoloLens also allowed easy viewing and navigation of whole slide images, using an AR workstation, including multiple coregistered tissue sections, facilitating volumetric pathology evaluation. Conclusions: The HoloLens is a novel AR tool with multiple clinical and nonclinical applications in pathology. The device was comfortable to wear, easy to use, provided sufficient computing power, and supported high-resolution imaging. It was useful for autopsy and gross and microscopic examination, and is ideally suited for digital pathology. Unique applications include remote supervision and annotation, 3D image viewing and manipulation, telepathology in a mixed-reality environment, and real-time pathology-radiology correlation.

  4. Sculpting 3D worlds with music: advanced texturing techniques

    NASA Astrophysics Data System (ADS)

    Greuel, Christian; Bolas, Mark T.; Bolas, Niko; McDowall, Ian E.

    1996-04-01

    Sound within the virtual environment is often considered to be secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied in order to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer- generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D `music videos' and interactive landscapes for live performance.
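
    The analysis step described above — extracting energy and frequency content from the music and repackaging it as object parameters — can be sketched with a plain FFT. The mapping targets (scale, hue) are hypothetical stand-ins for the Soundsculpt Toolkit's behavior bindings, which the paper leaves to directors and designers.

```python
import numpy as np

def audio_features(frame, sample_rate):
    # RMS energy and dominant frequency of one mono audio frame
    rms = np.sqrt(np.mean(frame ** 2))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum)]
    return rms, dominant

def map_to_behavior(rms, dominant, max_freq=4000.0):
    # hypothetical mapping: energy drives object scale, pitch drives hue
    scale = 1.0 + 2.0 * min(rms, 1.0)
    hue = min(dominant / max_freq, 1.0)
    return {"scale": scale, "hue": hue}
```

On a pure 440 Hz tone the dominant frequency comes out exactly at 440 Hz and the RMS at 1/√2, which the mapping turns into a mid-range scale and hue.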

  5. Crushing virtual cigarettes reduces tobacco addiction and treatment discontinuation.

    PubMed

    Girard, Benoit; Turcotte, Vincent; Bouchard, Stéphane; Girard, Bruno

    2009-10-01

    Pilot studies revealed promising results regarding crushing virtual cigarettes to reduce tobacco addiction. In this study, 91 regular smokers were randomly assigned to two treatment conditions that differ only by the action performed in the virtual environment: crushing virtual cigarettes or grasping virtual balls. All participants also received minimal psychosocial support from nurses during each of 12 visits to the clinic. An affordable virtual reality system was used (eMagin HMD) with a virtual environment created by modifying a 3D game. Results revealed that crushing virtual cigarettes during 4 weekly sessions led to a statistically significant reduction in nicotine addiction (assessed with the Fagerström test), abstinence rate (confirmed with exhaled carbon monoxide), and drop-out rate from the 12-week psychosocial minimal-support treatment program. Increased retention in the program is discussed as a potential explanation for treatment success, and hypotheses are raised about self-efficacy, motivation, and learning.

  6. Mobile Applications and Multi-User Virtual Reality Simulations

    NASA Technical Reports Server (NTRS)

    Gordillo, Orlando Enrique

    2016-01-01

    This is my third internship with NASA and my second one at the Johnson Space Center. I work within the engineering directorate in ER7 (Software Robotics and Simulations Division) at a graphics lab called IGOAL. We are a very well-rounded lab because we have dedicated software developers and dedicated 3D artists, and when you combine the two, what you get is the ability to create many different things, such as interactive simulations, 3D models, animations, and mobile applications.

  7. Three-Dimensional Tactical Display and Method for Visualizing Data with a Probability of Uncertainty

    DTIC Science & Technology

    2009-08-03

    replacing the more complex and less intuitive displays presently provided in such contexts as commercial aircraft, marine vehicles, and air traffic...free space-virtual reality, 3-D image display system which is enabled by using a unique form of Aerogel as the primary display media. A preferred...generates and displays a real 3-D image in the Aerogel matrix. [0014] U.S. Patent No. 6,285,317, issued September 4, 2001, to Ong, discloses a

  9. 3D liver volume reconstructed for palpation training.

    PubMed

    Tibamoso, Gerardo; Perez-Gutierrez, Byron; Uribe-Quevedo, Alvaro

    2013-01-01

    Virtual Reality systems for medical procedures such as the palpation of different organs require fast, robust, accurate and reliable computational methods to provide realism during interaction with the 3D biological models. This paper presents the segmentation, reconstruction and palpation simulation of a healthy liver volume as a tool for training. The chosen method takes the mechanical characteristics and properties of the liver into account to simulate palpation interactions correctly, which makes it an appropriate complementary tool for helping medical students become familiar with liver anatomy.
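
    As a deliberately simplified stand-in for the liver mechanics the paper alludes to, the force side of a palpation haptic loop can be sketched with a linear-elastic reaction force. The stiffness value and the linearity are assumptions made here for illustration; real soft tissue is nonlinear and viscoelastic.

```python
def palpation_force(indentation_m, stiffness_n_per_m=300.0):
    # reaction force (N) for a probe indenting the surface by indentation_m;
    # zero when the probe is not in contact (negative indentation)
    return stiffness_n_per_m * max(indentation_m, 0.0)
```

A haptic device would call this every servo tick with the current penetration depth and render the returned force back to the user.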

  10. Augmented Reality to Preserve Hidden Vestiges in Historical Cities. a Case Study

    NASA Astrophysics Data System (ADS)

    Martínez, J. L.; Álvarez, S.; Finat, J.; Delgado, F. J.; Finat, J.

    2015-02-01

    Mobile devices provide increasingly sophisticated support for enhanced experiences and for understanding the remote past in an interactive way. Augmented reality technologies make it possible to develop mobile applications for indoor exploration of virtually reconstructed archaeological sites. In our work we have built a virtual reconstruction of a Roman villa from data arising from an urgent partial excavation, which was performed before a car park was built in the historical city of Valladolid (Spain). In its current state, the archaeological site is covered by an urban garden. Localization and tracking are performed using a combination of GPS and the inertial sensors of the mobile device. In this work we show how to perform interactive navigation around the 3D virtual model, presenting an interpretation of the way it once was. The user experience is enhanced by answering simple questions and by performing minor tasks and puzzles, which are presented with multimedia content linked to key features of the archaeological site.
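
    The GPS-plus-inertial tracking described above is commonly implemented as a complementary filter: integrate the fast-but-drifting gyro, then pull the estimate slowly toward the absolute GPS-derived heading. A minimal sketch for one heading axis (the blend factor is an assumption, and angle wrap-around is ignored for brevity):

```python
def fuse_heading(heading_deg, gyro_rate_dps, gps_heading_deg, dt_s, alpha=0.98):
    # predict with the gyro rate, then blend toward the absolute GPS heading;
    # alpha close to 1 trusts the gyro short-term and the GPS long-term
    predicted = heading_deg + gyro_rate_dps * dt_s
    return alpha * predicted + (1.0 - alpha) * gps_heading_deg
```

With a stationary device and a steady GPS heading, repeated updates converge the estimate onto the GPS value, removing accumulated gyro drift.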

  11. Segeberg 1600 - Reconstructing a Historic Town for Virtual Reality Visualisation as AN Immersive Experience

    NASA Astrophysics Data System (ADS)

    Deggim, S.; Kersten, T. P.; Tschirschwitz, F.; Hinrichsen, N.

    2017-11-01

    The 3D reconstruction of historic buildings and cities offers an opportunity to experience the history of relevant objects and their development over the centuries. Digital visualisations of such historic objects allow for a more natural view of history, as well as showing information that is not accessible in a real-world setting. New presentation forms, such as the virtual reality (VR) system HTC Vive, can be used to disseminate information in another dimension and to simplify access by changing the user's role from listener and viewer into an integrated part of an interactive situation. In general, this approach is a combination of education and entertainment, also known as "edutainment" or "gamification", a term used in the education sector to describe how motivation to learn is encouraged by adding a competitive element. It is thus a step away from the simple consumption of information towards experiencing information, and a more literal interpretation of "living history". In this contribution, we present the development of a 3D reconstruction of the two towns Segeberg and Gieschenhagen (today: Bad Segeberg) in Schleswig-Holstein, Germany, in the Early Modern Age around 1600. The historic landscape and its conversion from a reconstructed virtual town model into an interactive VR application are also described. The reconstruction is based on a recent digital terrain model as well as survey data of surviving buildings, historic visual information based on historic drawings, and written accounts from that era. All datasets are combined into a single walkable virtual world that spans approximately 3 km2.

  12. Capturing differences in dental training using a virtual reality simulator.

    PubMed

    Mirghani, I; Mushtaq, F; Allsop, M J; Al-Saud, L M; Tickhill, N; Potter, C; Keeling, A; Mon-Williams, M A; Manogue, M

    2018-02-01

    Virtual reality simulators are becoming increasingly popular in dental schools across the world. But to what extent do these systems reflect actual dental ability? Addressing this question of construct validity is a fundamental step that is necessary before these systems can be fully integrated into a dental school's curriculum. In this study, we examined the sensitivity of the Simodont (a haptic virtual reality dental simulator) to differences in dental training experience. Two hundred and eighty-nine participants, with 1 (n = 92), 3 (n = 79), 4 (n = 57) and 5 (n = 61) years of dental training, performed a series of tasks upon their first exposure to the simulator. We found statistically significant differences between novice (Year 1) and experienced dental trainees (operationalised as 3 or more years of training), but no differences between performance of experienced trainees with varying levels of experience. This work represents a crucial first step in understanding the value of haptic virtual reality simulators in dental education. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
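
    The novice-versus-experienced comparison reported above is the kind of analysis a Welch's t statistic supports when group sizes and variances differ, as they do here. The sketch below is a generic illustration on made-up scores, not the authors' analysis (the abstract does not state which test was used).

```python
import math

def welch_t(a, b):
    # Welch's t statistic for two independent samples with unequal variances
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
```

On clearly separated hypothetical score groups, the statistic is far from zero, which is the pattern the study reports between Year 1 trainees and those with 3 or more years of training.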

  13. The Selimiye Mosque of Edirne, Turkey - AN Immersive and Interactive Virtual Reality Experience Using Htc Vive

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Büyüksalih, G.; Tschirschwitz, F.; Kan, T.; Deggim, S.; Kaya, Y.; Baskaraca, A. P.

    2017-05-01

    Recent advances in contemporary Virtual Reality (VR) technologies are going to have a significant impact on everyday life. Through VR it is possible to virtually explore a computer-generated environment as a different reality, and to immerse oneself in the past or in a virtual museum without leaving the current real-life situation. For the ultimate VR experience, the user should see only the virtual world. Currently, the user must wear a VR headset which fits around the head and over the eyes to visually separate themselves from the physical world. Via the headset, images are fed to the eyes through two small lenses. Cultural heritage monuments are ideally suited both for thorough multi-dimensional geometric documentation and for realistic interactive visualisation in immersive VR applications. Additionally, the game industry offers tools for interactive visualisation of objects which motivate users to virtually visit objects and places. In this paper the generation of a virtual 3D model of the Selimiye mosque in the city of Edirne, Turkey, and its processing for data integration into the game engine Unity are presented. The project has been carried out as a co-operation between BİMTAŞ, a company of the Greater Municipality of Istanbul, Turkey, and the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg, Germany, to demonstrate an immersive and interactive visualisation using the new VR system HTC Vive. The workflow from data acquisition to VR visualisation, including the necessary programming for navigation, is described. Furthermore, the possible use of such a VR visualisation for a CH monument, including environments with multiple simultaneous users, is discussed in this contribution.

  14. The Use of Virtual Reality to Facilitate Mindfulness Skills Training in Dialectical Behavioral Therapy for Borderline Personality Disorder: A Case Study

    PubMed Central

    Nararro-Haro, Maria V.; Hoffman, Hunter G.; Garcia-Palacios, Azucena; Sampaio, Mariana; Alhalabi, Wadee; Hall, Karyn; Linehan, Marsha

    2016-01-01

    Borderline personality disorder (BPD) is a severe mental disorder characterized by a dysfunctional pattern of affective instability, impulsivity, and disturbed interpersonal relationships. Dialectical Behavior Therapy (DBT®) is the most effective treatment for Borderline Personality Disorder, but demand for DBT® far exceeds existing clinical resources. Most patients with BPD never receive DBT®. Incorporating computer technology into the DBT® could help increase dissemination. Immersive Virtual Reality technology (VR) is becoming widely available to mainstream consumers. This case study explored the feasibility/clinical potential of using immersive virtual reality technology to enhance DBT® mindfulness skills training of a 32 year old female diagnosed with BPD. Prior to using VR, the patient experienced difficulty practicing DBT® mindfulness due to her emotional reactivity, and difficulty concentrating. To help the patient focus her attention, and to facilitate DBT® mindfulness skills learning, the patient looked into virtual reality goggles, and had the illusion of slowly “floating down” a 3D computer-generated river while listening to DBT® mindfulness training audios. Urges to commit suicide, urges to self harm, urges to quit therapy, urges to use substances, and negative emotions were all reduced after each VR mindfulness session and VR mindfulness was well accepted/liked by the patient. Although case studies are scientifically inconclusive by nature, results from this feasibility study were encouraging. Future controlled studies are needed to quantify whether VR-enhanced mindfulness training has long term benefits e.g., increasing patient acceptance and/or improving therapeutic outcome. Computerizing some of the DBT® skills treatment modules would reduce cost and increase dissemination. PMID:27853437

  15. The Use of Virtual Reality to Facilitate Mindfulness Skills Training in Dialectical Behavioral Therapy for Borderline Personality Disorder: A Case Study.

    PubMed

    Nararro-Haro, Maria V; Hoffman, Hunter G; Garcia-Palacios, Azucena; Sampaio, Mariana; Alhalabi, Wadee; Hall, Karyn; Linehan, Marsha

    2016-01-01

Borderline personality disorder (BPD) is a severe mental disorder characterized by a dysfunctional pattern of affective instability, impulsivity, and disturbed interpersonal relationships. Dialectical Behavior Therapy (DBT®) is the most effective treatment for Borderline Personality Disorder, but demand for DBT® far exceeds existing clinical resources. Most patients with BPD never receive DBT®. Incorporating computer technology into DBT® could help increase dissemination. Immersive Virtual Reality technology (VR) is becoming widely available to mainstream consumers. This case study explored the feasibility/clinical potential of using immersive virtual reality technology to enhance DBT® mindfulness skills training of a 32-year-old female diagnosed with BPD. Prior to using VR, the patient experienced difficulty practicing DBT® mindfulness due to her emotional reactivity and difficulty concentrating. To help the patient focus her attention, and to facilitate DBT® mindfulness skills learning, the patient looked into virtual reality goggles and had the illusion of slowly "floating down" a 3D computer-generated river while listening to DBT® mindfulness training audios. Urges to commit suicide, urges to self-harm, urges to quit therapy, urges to use substances, and negative emotions were all reduced after each VR mindfulness session, and VR mindfulness was well accepted/liked by the patient. Although case studies are scientifically inconclusive by nature, results from this feasibility study were encouraging. Future controlled studies are needed to quantify whether VR-enhanced mindfulness training has long-term benefits, e.g., increasing patient acceptance and/or improving therapeutic outcome. Computerizing some of the DBT® skills treatment modules would reduce cost and increase dissemination.

  16. Virtual Reality as Innovative Approach to the Interior Designing

    NASA Astrophysics Data System (ADS)

    Kaleja, Pavol; Kozlovská, Mária

    2017-06-01

We can observe significant potential of information and communication technologies (ICT) in the field of interior design, driven by the development of software and hardware virtual reality tools. Using ICT tools offers a realistic perception of a proposal at its initial idea stage (the study). Real-time visualization, supported by hardware tools like the Oculus Rift and HTC Vive, provides free walkthrough and movement in a virtual interior, with the possibility of virtual designing. As ICT software tools for designing in virtual reality improve, we can achieve an ever more realistic virtual environment. This contribution presents a proposal for an innovative approach to interior design in virtual reality, using the latest ICT virtual reality software and hardware technologies.

  17. Virtual reality training for surgical trainees in laparoscopic surgery.

    PubMed

    Nagendran, Myura; Gurusamy, Kurinchi Selvan; Aggarwal, Rajesh; Loizidou, Marilena; Davidson, Brian R

    2013-08-27

Standard surgical training has traditionally been one of apprenticeship, where the surgical trainee learns to perform surgery under the supervision of a trained surgeon. This is time-consuming, costly, and of variable effectiveness. Training using a virtual reality simulator is an option to supplement standard training. Virtual reality training improves the technical skills of surgical trainees, such as decreasing the time needed for suturing and improving accuracy. The clinical impact of virtual reality training is not known. To assess the benefits (increased surgical proficiency and improved patient outcomes) and harms (potentially worse patient outcomes) of supplementary virtual reality training of surgical trainees with limited laparoscopic experience. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library, MEDLINE, EMBASE and Science Citation Index Expanded until July 2012. We included all randomised clinical trials comparing virtual reality training versus other forms of training including box-trainer training, no training, or standard laparoscopic training in surgical trainees with little laparoscopic experience. We also planned to include trials comparing different methods of virtual reality training. We included only trials that assessed the outcomes in people undergoing laparoscopic surgery. Two authors independently identified trials and collected data. We analysed the data with both the fixed-effect and the random-effects models using Review Manager 5. For each outcome we calculated the mean difference (MD) or standardised mean difference (SMD) with 95% confidence intervals based on intention-to-treat analysis. We included eight trials covering 109 surgical trainees with limited laparoscopic experience. Of the eight trials, six compared virtual reality versus no supplementary training. 
One trial compared virtual reality training versus box-trainer training and versus no supplementary training, and one trial compared virtual reality training versus box-trainer training. There were no trials that compared different forms of virtual reality training. All the trials were at high risk of bias. Operating time and operative performance were the only outcomes reported in the trials. The remaining outcomes such as mortality, morbidity, quality of life (the primary outcomes of this review) and hospital stay (a secondary outcome) were not reported. Virtual reality training versus no supplementary training: The operating time was significantly shorter in the virtual reality group than in the no supplementary training group (3 trials; 49 participants; MD -11.76 minutes; 95% CI -15.23 to -8.30). Two trials that could not be included in the meta-analysis also showed a reduction in operating time (statistically significant in one trial). The numerical values for operating time were not reported in these two trials. The operative performance was significantly better in the virtual reality group than the no supplementary training group using the fixed-effect model (2 trials; 33 participants; SMD 1.65; 95% CI 0.72 to 2.58). The results became non-significant when the random-effects model was used (2 trials; 33 participants; SMD 2.14; 95% CI -1.29 to 5.57). One trial could not be included in the meta-analysis as it did not report the numerical values. The authors stated that the operative performance of virtual reality group was significantly better than the control group. Virtual reality training versus box-trainer training: The only trial that reported operating time did not report the numerical values. In this trial, the operating time in the virtual reality group was significantly shorter than in the box-trainer group. Of the two trials that reported operative performance, only one trial reported the numerical values. 
The operative performance was significantly better in the virtual reality group than in the box-trainer group (1 trial; 19 participants; SMD 1.46; 95% CI 0.42 to 2.50). In the other trial that did not report the numerical values, the authors stated that the operative performance in the virtual reality group was significantly better than the box-trainer group. Virtual reality training appears to decrease the operating time and improve the operative performance of surgical trainees with limited laparoscopic experience when compared with no training or with box-trainer training. However, the impact of this decreased operating time and improvement in operative performance on patients and healthcare funders in terms of improved outcomes or decreased costs is not known. Further well-designed trials at low risk of bias and random errors are necessary. Such trials should assess the impact of virtual reality training on clinical outcomes.

  18. Mechanism of Kinect-based virtual reality training for motor functional recovery of upper limbs after subacute stroke.

    PubMed

    Bao, Xiao; Mao, Yurong; Lin, Qiang; Qiu, Yunhai; Chen, Shaozhen; Li, Le; Cates, Ryan S; Zhou, Shufeng; Huang, Dongfeng

    2013-11-05

    The Kinect-based virtual reality system for the Xbox 360 enables users to control and interact with the game console without the need to touch a game controller, and provides rehabilitation training for stroke patients with lower limb dysfunctions. However, the underlying mechanism remains unclear. In this study, 18 healthy subjects and five patients after subacute stroke were included. The five patients were scanned using functional MRI prior to training, 3 weeks after training and at a 12-week follow-up, and then compared with healthy subjects. The Fugl-Meyer Assessment and Wolf Motor Function Test scores of the hemiplegic upper limbs of stroke patients were significantly increased 3 weeks after training and at the 12-week follow-up. Functional MRI results showed that contralateral primary sensorimotor cortex was activated after Kinect-based virtual reality training in the stroke patients compared with the healthy subjects. Contralateral primary sensorimotor cortex, the bilateral supplementary motor area and the ipsilateral cerebellum were also activated during hand-clenching in all 18 healthy subjects. Our findings indicate that Kinect-based virtual reality training could promote the recovery of upper limb motor function in subacute stroke patients, and brain reorganization by Kinect-based virtual reality training may be linked to the contralateral sensorimotor cortex.

  19. Mechanism of Kinect-based virtual reality training for motor functional recovery of upper limbs after subacute stroke

    PubMed Central

    Bao, Xiao; Mao, Yurong; Lin, Qiang; Qiu, Yunhai; Chen, Shaozhen; Li, Le; Cates, Ryan S.; Zhou, Shufeng; Huang, Dongfeng

    2013-01-01

    The Kinect-based virtual reality system for the Xbox 360 enables users to control and interact with the game console without the need to touch a game controller, and provides rehabilitation training for stroke patients with lower limb dysfunctions. However, the underlying mechanism remains unclear. In this study, 18 healthy subjects and five patients after subacute stroke were included. The five patients were scanned using functional MRI prior to training, 3 weeks after training and at a 12-week follow-up, and then compared with healthy subjects. The Fugl-Meyer Assessment and Wolf Motor Function Test scores of the hemiplegic upper limbs of stroke patients were significantly increased 3 weeks after training and at the 12-week follow-up. Functional MRI results showed that contralateral primary sensorimotor cortex was activated after Kinect-based virtual reality training in the stroke patients compared with the healthy subjects. Contralateral primary sensorimotor cortex, the bilateral supplementary motor area and the ipsilateral cerebellum were also activated during hand-clenching in all 18 healthy subjects. Our findings indicate that Kinect-based virtual reality training could promote the recovery of upper limb motor function in subacute stroke patients, and brain reorganization by Kinect-based virtual reality training may be linked to the contralateral sensorimotor cortex. PMID:25206611

  20. An intelligent virtual human system for providing healthcare information and support.

    PubMed

    Rizzo, Albert A; Lange, Belinda; Buckwalter, John G; Forbell, Eric; Kim, Julia; Sagae, Kenji; Williams, Josh; Rothbaum, Barbara O; Difede, JoAnn; Reger, Greg; Parsons, Thomas; Kenny, Patrick

    2011-01-01

    Over the last 15 years, a virtual revolution has taken place in the use of Virtual Reality simulation technology for clinical purposes. Shifts in the social and scientific landscape have now set the stage for the next major movement in Clinical Virtual Reality with the "birth" of intelligent virtual humans. Seminal research and development has appeared in the creation of highly interactive, artificially intelligent and natural language capable virtual human agents that can engage real human users in a credible fashion. No longer at the level of a prop to add context or minimal faux interaction in a virtual world, virtual humans can be designed to perceive and act in a 3D virtual world, engage in spoken dialogues with real users and can be capable of exhibiting human-like emotional reactions. This paper will present an overview of the SimCoach project that aims to develop virtual human support agents to serve as online guides for promoting access to psychological healthcare information and for assisting military personnel and family members in breaking down barriers to initiating care. The SimCoach experience is being designed to attract and engage military Service Members, Veterans and their significant others who might not otherwise seek help with a live healthcare provider. It is expected that this experience will motivate users to take the first step--to empower themselves to seek advice and information regarding their healthcare and general personal welfare and encourage them to take the next step towards seeking more formal resources if needed.

  1. The virtues of virtual reality in exposure therapy.

    PubMed

    Gega, Lina

    2017-04-01

    Virtual reality can be more effective and less burdensome than real-life exposure. Optimal virtual reality delivery should incorporate in situ direct dialogues with a therapist, discourage safety behaviours, allow for a mismatch between virtual and real exposure tasks, and encourage self-directed real-life practice between and beyond virtual reality sessions. © The Royal College of Psychiatrists 2017.

  2. Virtual Reality in the Classroom.

    ERIC Educational Resources Information Center

    Pantelidis, Veronica S.

    1993-01-01

    Considers the concept of virtual reality; reviews its history; describes general uses of virtual reality, including entertainment, medicine, and design applications; discusses classroom uses of virtual reality, including a software program called Virtus WalkThrough for use with a computer monitor; and suggests future possibilities. (34 references)…

  3. Automatic visualization of 3D geometry contained in online databases

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; John, Nigel W.

    2003-04-01

In this paper, the application of the Virtual Reality Modeling Language (VRML) for efficient database visualization is analyzed. With the help of Java programming, three examples of automatic visualization from a database containing 3-D geometry are given. The first example is used to create basic geometries. The second example is used to create cylinders with a defined start point and end point. The third example is used to process data from an old copper mine complex in Cheshire, United Kingdom. Interactive 3-D visualization of all geometric data in an online database is achieved with JSP technology.
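The second example described in the abstract, cylinders defined by a start point and an end point, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' Java/JSP code: it assumes the endpoints arrive as coordinate triples from the database, and it emits the corresponding VRML97 `Transform` node. Because VRML's `Cylinder` is centred at the origin and aligned with the y-axis, the node must translate the cylinder to the segment midpoint and rotate the y-axis onto the segment direction.

```python
import math

def cylinder_vrml(start, end, radius=0.5):
    """Emit a VRML97 Transform node placing a cylinder between two 3-D points."""
    sx, sy, sz = start
    ex, ey, ez = end
    dx, dy, dz = ex - sx, ey - sy, ez - sz
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    mid = ((sx + ex) / 2, (sy + ey) / 2, (sz + ez) / 2)
    # Rotation axis = y-axis x direction = (dz, 0, -dx);
    # rotation angle = angle between the y-axis and the segment.
    ax, az = dz, -dx
    norm = math.sqrt(ax * ax + az * az)
    if norm < 1e-9:  # segment already parallel to the y-axis
        axis, angle = (1.0, 0.0, 0.0), 0.0 if dy >= 0 else math.pi
    else:
        axis = (ax / norm, 0.0, az / norm)
        angle = math.acos(dy / length)
    return (
        "Transform {\n"
        f"  translation {mid[0]:g} {mid[1]:g} {mid[2]:g}\n"
        f"  rotation {axis[0]:g} {axis[1]:g} {axis[2]:g} {angle:g}\n"
        "  children Shape {\n"
        "    appearance Appearance { material Material { } }\n"
        f"    geometry Cylinder {{ radius {radius:g} height {length:g} }}\n"
        "  }\n"
        "}\n"
    )
```

A servlet along the lines the paper describes would loop over database rows and concatenate such nodes into one VRML document served to the browser plug-in.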

  4. Development of a virtual reality training system for endoscope-assisted submandibular gland removal.

    PubMed

    Miki, Takehiro; Iwai, Toshinori; Kotani, Kazunori; Dang, Jianwu; Sawada, Hideyuki; Miyake, Minoru

    2016-11-01

Endoscope-assisted surgery has widely been adopted as a basic surgical procedure, with various training systems using virtual reality developed for this procedure. In the present study, a basic training system using virtual reality for the removal of submandibular glands under endoscope assistance was developed. The efficacy of the training system was verified in novice oral surgeons. A virtual reality training system was developed using existing haptic devices. Virtual reality models were constructed from computed tomography data to ensure anatomical accuracy. Novice oral surgeons were trained using the developed virtual reality training system. The developed virtual reality training system included models of the submandibular gland and surrounding connective tissues and blood vessels entering the submandibular gland. Cutting or abrasion of the connective tissue and manipulations, such as elevation of blood vessels, were reproduced by the virtual reality system. A training program using the developed system was devised. Novice oral surgeons were trained in accordance with the devised training program. Our virtual reality training system for endoscope-assisted removal of the submandibular gland is effective in the training of novice oral surgeons in endoscope-assisted surgery. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  5. Virtual Reality: Toward Fundamental Improvements in Simulation-Based Training.

    ERIC Educational Resources Information Center

    Thurman, Richard A.; Mattoon, Joseph S.

    1994-01-01

    Considers the role and effectiveness of virtual reality in simulation-based training. The theoretical and practical implications of verity, integration, and natural versus artificial interface are discussed; a three-dimensional classification scheme for virtual reality is described; and the relationship between virtual reality and other…

  6. Virtual Reality in Schools: The Ultimate Educational Technology.

    ERIC Educational Resources Information Center

    Reid, Robert D.; Sykes, Wylmarie

    1999-01-01

    Discusses the use of virtual reality as an educational tool. Highlights include examples of virtual reality in public schools that lead to a more active learning process, simulated environments, integrating virtual reality into any curriculum, benefits to teachers and students, and overcoming barriers to implementation. (LRW)

  7. The combined use of virtual reality exposure in the treatment of agoraphobia.

    PubMed

    Pitti, Carmen T; Peñate, Wenceslao; de la Fuente, Juan; Bethencourt, Juan M; Roca-Sánchez, María J; Acosta, Leopoldo; Villaverde, María L; Gracia, Ramón

    2015-01-01

    This study compares the differential efficacy of three groups of treatments for agoraphobia: paroxetine combined with cognitive-behavioral therapy, paroxetine combined with cognitive-behavioral therapy and virtual reality exposure, and a group with only paroxetine. 99 patients with agoraphobia were finally selected. Both combined treatment groups received 11 sessions of cognitive-behavioral therapy, and one of the groups was also exposed to 4 sessions of virtual reality treatment. Treatments were applied in individual sessions once a week for 3 months. The three treatment groups showed statistically significant improvements. In some measures, combined treatment groups showed greater improvements. The virtual reality exposure group showed greater improvement confronting phobic stimuli. Treatments combining psychopharmacological and psychological therapy showed greater efficacy. Although the use of new technologies led to greater improvement, treatment adherence problems still remain.

  8. The need for virtual reality simulators in dental education: A review.

    PubMed

    Roy, Elby; Bakr, Mahmoud M; George, Roy

    2017-04-01

Virtual reality simulators are becoming an essential part of modern education. The benefits of virtual reality in dentistry are constantly being assessed, as a method or an adjunct to improve fine motor skills and hand-eye coordination in pre-clinical settings and to overcome the monetary and intellectual challenges involved in such training. This article, while providing an overview of virtual reality dental simulators, also looks at the link between virtual reality simulation and current pedagogical knowledge.

  9. Walking training associated with virtual reality-based training increases walking speed of individuals with chronic stroke: systematic review with meta-analysis.

    PubMed

    Rodrigues-Baroni, Juliana M; Nascimento, Lucas R; Ada, Louise; Teixeira-Salmela, Luci F

    2014-01-01

To systematically review the available evidence on the efficacy of walking training associated with virtual reality-based training in patients with stroke. The specific questions were: Is walking training associated with virtual reality-based training effective in increasing walking speed after stroke? Is this type of intervention more effective in increasing walking speed than non-virtual reality-based walking interventions? A systematic review with meta-analysis of randomized clinical trials was conducted. Participants were adults with chronic stroke and the experimental intervention was walking training associated with virtual reality-based training to increase walking speed. The outcome data regarding walking speed were extracted from the eligible trials and were combined using a meta-analysis approach. Seven trials representing eight comparisons were included in this systematic review. Overall, the virtual reality-based training increased walking speed by 0.17 m/s (95% CI 0.08 to 0.26), compared with placebo/nothing or non-walking interventions. In addition, the virtual reality-based training increased walking speed by 0.15 m/s (95% CI 0.05 to 0.24), compared with non-virtual reality walking interventions. This review provided evidence that walking training associated with virtual reality-based training was effective in increasing walking speed after stroke, and resulted in better results than non-virtual reality interventions.
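The pooled walking-speed estimates in reviews like this one typically come from inverse-variance fixed-effect pooling: each trial's mean difference is weighted by the reciprocal of its squared standard error. The sketch below illustrates that standard calculation; the three trial values are made up for the example and are not data from this review.

```python
import math

def pooled_mean_difference(trials):
    """Fixed-effect inverse-variance pooling of per-trial mean differences.

    `trials` is a list of (mean_difference, standard_error) pairs, one per
    study. Each study is weighted by 1/SE^2; the pooled estimate is the
    weighted mean, reported with a 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for _, se in trials]
    total = sum(weights)
    pooled = sum(w * md for w, (md, _) in zip(weights, trials)) / total
    se_pooled = math.sqrt(1.0 / total)
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Illustrative (invented) walking-speed gains in m/s from three trials:
md, (lo, hi) = pooled_mean_difference([(0.20, 0.06), (0.12, 0.05), (0.18, 0.07)])
```

Note that the more precise second trial (SE 0.05) pulls the pooled estimate toward its smaller effect, which is exactly the behaviour inverse-variance weighting is meant to produce.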

  10. Walking training associated with virtual reality-based training increases walking speed of individuals with chronic stroke: systematic review with meta-analysis

    PubMed Central

    Rodrigues-Baroni, Juliana M.; Nascimento, Lucas R.; Ada, Louise; Teixeira-Salmela, Luci F.

    2014-01-01

OBJECTIVE: To systematically review the available evidence on the efficacy of walking training associated with virtual reality-based training in patients with stroke. The specific questions were: Is walking training associated with virtual reality-based training effective in increasing walking speed after stroke? Is this type of intervention more effective in increasing walking speed than non-virtual reality-based walking interventions? METHOD: A systematic review with meta-analysis of randomized clinical trials was conducted. Participants were adults with chronic stroke and the experimental intervention was walking training associated with virtual reality-based training to increase walking speed. The outcome data regarding walking speed were extracted from the eligible trials and were combined using a meta-analysis approach. RESULTS: Seven trials representing eight comparisons were included in this systematic review. Overall, the virtual reality-based training increased walking speed by 0.17 m/s (95% CI 0.08 to 0.26), compared with placebo/nothing or non-walking interventions. In addition, the virtual reality-based training increased walking speed by 0.15 m/s (95% CI 0.05 to 0.24), compared with non-virtual reality walking interventions. CONCLUSIONS: This review provided evidence that walking training associated with virtual reality-based training was effective in increasing walking speed after stroke, and resulted in better results than non-virtual reality interventions. PMID:25590442

  11. Towards the Enhancement of "MINOR" Archaeological Heritage

    NASA Astrophysics Data System (ADS)

    Morandi, S.; Tremari, M.; Mandelli, A.

    2017-02-01

The research is an analysis of the recording, reconstruction and visualisation of the 3D data of an 18th-century watermill, identified in an emergency archaeological excavation during the construction of a mini-hydroelectric plant on the bank of the Adda river in the municipality of Pizzighettone (Cremona, Lombardy, Italy). The work examines the use and potential of modern digital 3D modelling techniques applied to archaeological heritage, aimed at enhancing research, maintenance and presentation through interactive products. The use of three-dimensional models managed through AR (Augmented Reality) and VR (Virtual Reality) technologies with mobile devices offers several opportunities in the field of study and communication. It also improves on-site exploration of the landscape, enhancing "minor" archaeological sites, which are daily subjected to numerous emergency works, and facilitating the understanding of heritage sites.

  12. Virtual reality technique to assist measurement of degree of shaking of two minarets of an ancient building

    NASA Astrophysics Data System (ADS)

    Homainejad, Amir S.; Satari, Mehran

    2000-05-01

VR brings users closer to reality by computer, and a virtual environment (VE) is a simulated world that can take users to any point and direction of the object. VR and VE can be very useful if accurate and precise data are used, allowing users to work with a realistic model. Photogrammetry is a technique able to collect and provide accurate and precise data for building a 3D model in a computer. Data can be collected from various sensors and cameras, and methods of data collection vary based on the method of image acquisition. Indeed, VR involves real-time graphics, 3D models and displays, and it has applications in the entertainment industry, flight simulators and industrial design.

  13. An Interactive Augmented Reality Implementation of Hijaiyah Alphabet for Children Education

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Akbar, F.; Syahputra, M. F.; Budiman, M. A.; Hizriadi, A.

    2018-03-01

The Hijaiyah alphabet comprises the letters used in the Qur’an. An attractive and exciting learning process for the Hijaiyah alphabet is necessary for children. One alternative for creating such a learning process is to develop it into a mobile application using augmented reality technology. Augmented reality is a technology that combines two-dimensional or three-dimensional virtual objects with the actual three-dimensional environment and projects them in real time. The application aims to foster children’s interest in learning the Hijaiyah alphabet. It uses a smartphone and a marker as the medium, and was built using Unity and an augmented reality library, namely Vuforia, with Blender as the 3D object modeling software. The output of this research is a learning application for Hijaiyah letters using augmented reality. It is used as follows: first, place a marker that has been registered and printed; second, the smartphone camera tracks the marker. If the marker is invalid, the user repeats the tracking process; if the marker is valid and identified, the application projects the corresponding Hijaiyah letter in three-dimensional form. Lastly, the user can learn and understand the shape and pronunciation of the letter by touching the virtual button on the marker.
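The track-and-retry flow described in the abstract can be sketched as a simple loop. This is a language-agnostic illustration only: the real app implements this logic through Vuforia's tracking callbacks in Unity, and the function and marker names below are hypothetical stand-ins.

```python
def run_lesson(frames, known_markers):
    """Sketch of the app's control loop: keep tracking camera frames until a
    registered marker is found, then 'project' the matching 3-D letter.

    `frames` yields the marker id detected in each frame (None if nothing is
    seen); `known_markers` maps a marker id to a Hijaiyah letter name.
    """
    for marker_id in frames:
        if marker_id in known_markers:
            letter = known_markers[marker_id]
            return f"projecting 3-D model of '{letter}'"
        # invalid or missing marker: the user re-tracks (loop continues)
    return "no registered marker found"
```

For example, `run_lesson(iter([None, "m7", "m1"]), {"m1": "alif"})` skips the unseen and unregistered frames and projects the letter for marker `m1`.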

  14. Building interactive virtual environments for simulated training in medicine using VRML and Java/JavaScript.

    PubMed

    Korocsec, D; Holobar, A; Divjak, M; Zazula, D

    2005-12-01

Medicine is a difficult thing to learn. Experimenting with real patients should not be the only option; simulation deserves special attention here. Virtual Reality Modelling Language (VRML), as a tool for building virtual objects and scenes, has a good record of educational applications in medicine, especially for static and animated visualisations of body parts and organs. However, to create computer simulations resembling situations in real environments the required level of interactivity and dynamics is difficult to achieve. In the present paper we describe some approaches and techniques which we used to push the limits of the current VRML technology further toward dynamic 3D representation of virtual environments (VEs). Our demonstration is based on the implementation of a virtual baby model, whose vital signs can be controlled from an external Java application. The main contributions of this work are: (a) outline and evaluation of the three-level VRML/Java implementation of the dynamic virtual environment, (b) proposal for a modified VRML TimeSensor node, which greatly improves the overall control of system performance, and (c) architecture of the prototype distributed virtual environment for training in neonatal resuscitation comprising the interactive virtual newborn, active bedside monitor for vital signs and full 3D representation of the surgery room.
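The core idea of driving a VRML scene from an external application can be illustrated with a tiny event-routing sketch: the controller writes a vital sign into a field, and a TimeSensor-style clock tick routes the latest value into the scene's animation state. The class and method names are illustrative, not the authors' API; their system did this across a VRML/Java boundary rather than within one process.

```python
class VitalSignField:
    """Sketch of externally controlled scene state: an external controller
    writes a vital sign (e.g. heart rate), and a TimeSensor-style tick
    routes the pending value into what the virtual newborn displays."""

    def __init__(self, initial=120):
        self.pending = initial    # last value written by the controller
        self.displayed = initial  # value the scene currently animates

    def set_from_controller(self, value):
        """External-application side: stage a new value."""
        self.pending = value

    def on_tick(self):
        """Scene side: on each clock event, apply the staged value."""
        self.displayed = self.pending
        return self.displayed
```

Decoupling the write from the tick is what makes the scene's update rate independent of how often the controller sends data, which is the kind of control over system performance the modified TimeSensor node is said to improve.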

  15. Innovative virtual reality measurements for embryonic growth and development.

    PubMed

    Verwoerd-Dikkeboom, C M; Koning, A H J; Hop, W C; van der Spek, P J; Exalto, N; Steegers, E A P

    2010-06-01

    Innovative imaging techniques, using up-to-date ultrasonic equipment, necessitate specific biometry. The aim of our study was to test the possibility of detailed human embryonic biometry using a virtual reality (VR) technique. In a longitudinal study, three-dimensional (3D) measurements were performed from 6 to 14 weeks gestational age in 32 pregnancies (n = 16 spontaneous conception, n = 16 IVF/ICSI). A total of 125 3D volumes were analysed in the I-Space VR system, which allows binocular depth perception, providing a realistic 3D illusion. Crown-rump length (CRL), biparietal diameter (BPD), occipito-frontal diameter (OFD), head circumference (HC) and abdominal circumference (AC) were measured as well as arm length, shoulder width, elbow width, hip width and knee width. CRL, BPD, OFD and HC could be measured in more than 96% of patients, and AC in 78%. Shoulder width, elbow width, hip width and knee width could be measured in more than 95% of cases, and arm length in 82% of cases. Growth curves were constructed for all variables. Ear and foot measurements were only possible beyond 9 weeks gestation. This study provides a detailed, longitudinal description of normal human embryonic growth, facilitated by a VR system. Growth curves were created for embryonic biometry of the CRL, BPD, HC and AC early in pregnancy and also of several 'new' biometric measurements. Applying virtual embryoscopy will enable us to diagnose growth and/or developmental delay earlier and more accurately. This is especially important for pregnancies at risk of severe complications, such as recurrent late miscarriage and early growth restriction.
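Constructing a growth curve from longitudinal biometry, as described above, can be illustrated with a simple least-squares fit of a measurement against gestational age. The data below are synthetic and the straight-line model is a deliberate simplification for the example; the study built its curves from its own longitudinal measurements.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Synthetic crown-rump lengths (mm) at gestational ages (weeks):
weeks = [7, 8, 9, 10, 11, 12]
crl = [10, 17, 24, 31, 38, 45]
a, b = linear_fit(weeks, crl)  # slope b: growth rate in mm per week
```

Here the fitted slope recovers the 7 mm/week increase built into the synthetic data; with real measurements one would also plot residuals and likely prefer a curvilinear model for early pregnancy.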

  16. VR-Based Gamification of Communication Training and Oral Examination in a Second Language

    ERIC Educational Resources Information Center

    Reitz, Liesa; Sohny, Aline; Lochmann, Gerrit

    2016-01-01

    The authors present a novel way of oral language training by embedding the English as a foreign language (EFL) learning process into a generic 3D Cooperative Virtual Reality (VR) Game. Due to lack of time, resources and innovation, the language classroom is limited in its possibilities of promoting authentic communication. Therefore, the…

  17. Therapists' perception of benefits and costs of using virtual reality treatments.

    PubMed

    Segal, Robert; Bhatia, Maneet; Drapeau, Martin

    2011-01-01

    Research indicates that virtual reality is effective in the treatment of many psychological difficulties and is being used more frequently. However, little is known about therapists' perception of the benefits and costs related to the use of virtual therapy in treatment delivery. In the present study, 271 therapists completed an online questionnaire that assessed their perceptions about the potential benefits and costs of using virtual reality in psychotherapy. Results indicated that therapists perceived the potential benefits as outweighing the potential costs. Therapists' self-reported knowledge of virtual reality, theoretical orientation, and interest in using virtual reality were found to be associated with perceptual measures. These findings contribute to the current knowledge of the perception of virtual reality amongst psychotherapists.

  18. Using Virtual Reality to Provide Health Care Information to People With Intellectual Disabilities: Acceptability, Usability, and Potential Utility

    PubMed Central

    Conboy-Hill, Suzanne; Taylor, Dave

    2011-01-01

    Background People with intellectual disabilities have poor access to health care, which may be further compromised by a lack of accessible health information. To be effective, health information must be easily understood and remembered. People with intellectual disabilities learn better from multimodal information sources, and virtual reality offers a 3-dimensional (3D) computer-generated environment that can be used for providing information and learning. To date, research into virtual reality experiences for people with intellectual disabilities has been limited to skill-based training and leisure opportunities within the young to mid age ranges. Objective This study assessed the acceptability, usability, and potential utility of a virtual reality experience as a means of providing health care-related information to people with intellectual disabilities. We designed a prototype multimodal experience based on a hospital scenario and situated on an island in the Second Life 3D virtual world. We wanted to know how people of different ages and with varying levels of cognitive function would participate in the customized virtual environment, what they understood from being there, and what they remembered a week later. Methods The study drew on qualitative data. We used a participatory research approach that involved working alongside people with intellectual disabilities and their supporters in a community setting. Cognitive function was assessed, using the Matrix Analogies Test and the British Picture Vocabulary Scale, to describe the sample. Participants, supported by facilitators, were video recorded accessing and engaging with the virtual environment. We assessed recall 1 week later, using a specialized interview technique. Data were downloaded into NVivo 8 and analyzed using the framework analysis technique. Results Study participants were 20 people aged between 20 and 80 years with mild to severe intellectual disabilities. 
All participants were able to access the environment and voluntarily stayed there for between 23 and 57 minutes. With facilitator support, all participants moved the avatar themselves. Participants engaged with the scenario as if they were actually there, indicating cognitive presence. Some referred back to previous medical experiences, indicating the potential for experiential knowledge to become the foundation of new learning and retention of knowledge. When interviewed, all participants remembered some aspects of the environment. Conclusions A sample of adults with intellectual disabilities of all ages, and with varying levels of cognitive function, accessed and enjoyed a virtual-world environment that drew on a health care-related scenario, and remembered aspects of it a week later. The small sample size limits generalizability of findings, but the potential shown for experiential learning to aid retention of knowledge on which consent is based appears promising. Successfully delivering health care-related information in a non-National Health Service setting indicates potential for delivery in institutional, community, or home settings, thereby widening access to the information. PMID:22082765

  19. Improving Big Data Visual Analytics with Interactive Virtual Reality

    DTIC Science & Technology

    2015-05-22

    Techniques that help analysts gain a better understanding of data include scalable zooms, dynamic filtering, and annotation. Below, we describe some tasks that can be performed with these techniques.

  20. The Perceptions of CEIT Postgraduate Students Regarding Reality Concepts: Augmented, Virtual, Mixed and Mirror Reality

    ERIC Educational Resources Information Center

    Taçgin, Zeynep; Arslan, Ahmet

    2017-01-01

    The purpose of this study is to determine perception of postgraduate Computer Education and Instructional Technologies (CEIT) students regarding the concepts of Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), Augmented Virtuality (AV) and Mirror Reality; and to offer a table that includes differences and similarities between…

  1. Development of microgravity, full body functional reach envelope using 3-D computer graphic models and virtual reality technology

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1994-01-01

    In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurement of the maximum reach of occupants of a microgravity environment provides knowledge about maximum functional placement for tasking situations. Calculations for a full-body functional reach envelope for microgravity environments are therefore imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.
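
    As a toy illustration of deriving an envelope from measured data points (not the method actually used in the study), reach samples in body-centered coordinates can be summarized by the maximum radial distance per azimuthal sector. All names and sample values below are hypothetical:

```python
import math

def reach_envelope(points, n_bins=8):
    """Summarize 3D reach samples as the maximum radial distance from the
    body origin within azimuthal sectors (a coarse reach-envelope outline)."""
    bins = [0.0] * n_bins
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        # Map the azimuth angle into [0, 2*pi) and pick the sector index.
        sector = int((math.atan2(y, x) % (2 * math.pi)) / (2 * math.pi / n_bins))
        bins[sector] = max(bins[sector], r)
    return bins

# Hypothetical fingertip positions (meters, body-centered coordinates):
samples = [(0.7, 0.1, 0.2), (0.1, 0.6, 0.3), (-0.5, 0.2, 0.1)]
print(reach_envelope(samples))
```

    A real envelope would interpolate a closed surface through these extrema; the sector maxima merely sketch the idea of reducing many sampled reach points to a bounding description.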

  2. WWW creates new interactive 3D graphics and collaborative environments for medical research and education.

    PubMed

    Samothrakis, S; Arvanitis, T N; Plataniotis, A; McNeill, M D; Lister, P F

    1997-11-01

    Virtual Reality Modelling Language (VRML) is the start of a new era for medicine and the World Wide Web (WWW). Scientists can use VRML across the Internet to explore new three-dimensional (3D) worlds, share concepts and collaborate together in a virtual environment. VRML enables the generation of virtual environments through the use of geometric, spatial and colour data structures to represent 3D objects and scenes. In medicine, researchers often want to interact with scientific data, which in several instances may also be dynamic (e.g. MRI data). This data is often very large and is difficult to visualise. A 3D graphical representation can make the information contained in such large data sets more understandable and easier to interpret. Fast networks and satellites can reliably transfer large data sets from computer to computer. This has led to the adoption of remote teleworking in many applications, including medical applications. Radiology experts, for example, can view and inspect in near real-time a 3D data set acquired from a patient who is in another part of the world. Such technology is destined to improve the quality of life for many people. This paper introduces VRML (including some technical details) and discusses the advantages of VRML in application development.
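
    As an illustration of the kind of geometric and colour data structures VRML encodes, here is a small Python helper (hypothetical, not from the paper) that emits a valid VRML 2.0 (VRML97) scene containing one coloured sphere:

```python
def vrml_sphere(radius, rgb):
    """Emit a minimal VRML 2.0 (VRML97) scene containing one colored sphere."""
    r, g, b = rgb
    return (
        "#VRML V2.0 utf8\n"          # mandatory VRML97 file header
        "Shape {\n"
        "  appearance Appearance {\n"
        f"    material Material {{ diffuseColor {r} {g} {b} }}\n"
        "  }\n"
        f"  geometry Sphere {{ radius {radius} }}\n"
        "}\n"
    )

print(vrml_sphere(1.0, (0.8, 0.1, 0.1)))
```

    Saving this output to a `.wrl` file would let any VRML97-capable browser plugin of the era render the scene; real medical scenes would of course use meshes (e.g. IndexedFaceSet nodes) built from imaging data rather than primitive spheres.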

  3. 3D Virtual Reality for Teaching Astronomy

    NASA Astrophysics Data System (ADS)

    Speck, Angela; Ruzhitskaya, L.; Laffey, J.; Ding, N.

    2012-01-01

    We are developing 3D virtual learning environments (VLEs) as learning materials for an undergraduate astronomy course, which will utilize advances both in available technologies and in our understanding of the social nature of learning. These learning materials will be used to test whether such VLEs can indeed augment science learning so that it is more engaging, active, visual and effective. Our project focuses on the challenges and requirements of introductory college astronomy classes. Here we present our virtual world of the Jupiter system and how we plan to implement it to allow students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system, encouraging their imagination, curiosity, and motivation. The VLE can allow students to work individually or collaboratively. The 3D world also provides an opportunity for research in astronomy education to investigate the impact of social interaction, gaming features, and use of manipulatives offered by a learning tool on students’ motivation and learning outcomes. Use of this VLE is also a valuable source for exploration of how learners’ spatial awareness can be enhanced by working in a 3D environment. We will present the Jupiter-system environment along with a preliminary study of the efficacy and usability of our Jupiter 3D VLE.

  4. Improving Balance in TBI Using a Low Cost Customized Virtual Reality Rehabilitation Tool

    DTIC Science & Technology

    2015-10-01

    PRINCIPAL INVESTIGATOR(S): Denise Krch, PhD. CONTRACTING ORGANIZATION: Kessler Foundation, West Orange, NJ 07052 (email: dkrch@kesslerfoundation.org). REPORT DATE: October 2015. TYPE OF REPORT: ... The report notes significant contributions to the manualization of the Standard of Care and the Mystic Isle treatment protocols, as well as training of clinical staff.

  5. V-Man Generation for 3-D Real Time Animation. Chapter 5

    NASA Technical Reports Server (NTRS)

    Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang

    2007-01-01

    The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real-time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms drawn from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon their unique set of skills established during character creation. The key to the system is the automated creation of realistic V-Men without requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.

  6. Wireless physiological monitoring and ocular tracking: 3D calibration in a fully-immersive virtual health care environment.

    PubMed

    Zhang, Lelin; Chi, Yu Mike; Edelstein, Eve; Schulze, Jurgen; Gramann, Klaus; Velasquez, Alvaro; Cauwenberghs, Gert; Macagno, Eduardo

    2010-01-01

    Wireless physiological/neurological monitoring in virtual reality (VR) offers a unique opportunity for unobtrusively quantifying human responses to precisely controlled and readily modulated VR representations of health care environments. Here we present such a wireless, light-weight head-mounted system for measuring electrooculogram (EOG) and electroencephalogram (EEG) activity in human subjects interacting with and navigating in the Calit2 StarCAVE, a five-sided immersive 3-D visualization VR environment. The system can be easily expanded to include other measurements, such as cardiac activity and galvanic skin responses. We demonstrate the capacity of the system to track focus of gaze in 3-D and report a novel calibration procedure for estimating eye movements from responses to the presentation of a set of dynamic visual cues in the StarCAVE. We discuss cyber and clinical applications that include a 3-D cursor for visual navigation in VR interactive environments, and the monitoring of neurological and ocular dysfunction in vision/attention disorders.

  7. Simulators and virtual reality in surgical education.

    PubMed

    Chou, Betty; Handa, Victoria L

    2006-06-01

    This article explores the pros and cons of virtual reality simulators, their abilities to train and assess surgical skills, and their potential future applications. Computer-based virtual reality simulators and more conventional box trainers are compared and contrasted. The virtual reality simulator provides objective assessment of surgical skills and immediate feedback to further enhance training. With this ability to provide standardized, unbiased assessment of surgical skills, the virtual reality trainer has the potential to be a tool for selecting, instructing, certifying, and recertifying gynecologists.

  8. ME science as mobile learning based on virtual reality

    NASA Astrophysics Data System (ADS)

    Fradika, H. D.; Surjono, H. D.

    2018-04-01

    This article describes ME Science (Mobile Education Science), a mobile learning application for nuclear physics (Fisika Inti). ME Science is a product of research and development (R&D) built using the Alessi and Trollip model, which consists of three stages: (a) planning, including analysis of problems, goals, needs, and ideas for the product; (b) designing, including collecting materials, designing material content, creating the storyboard, and evaluating and reviewing the product; (c) developing, including development of the product, alpha testing, revision, validation, beta testing, and evaluation. This article covers ME Science only through the development stage. The result is a virtual reality-based mobile learning application that runs on Android smartphones. The application contains a brief description of the learning material, quizzes, video summaries of the material, and learning material presented in virtual reality.

  9. Implementation of Augmented Reality Technology in Sangiran Museum with Vuforia

    NASA Astrophysics Data System (ADS)

    Purnomo, F. A.; Santosa, P. I.; Hartanto, R.; Pratisto, E. H.; Purbayu, A.

    2018-03-01

    Archaeological objects are evidence of ancient life, with ages of up to millions of years. Ancient objects discovered at Sangiran are preserved by the Sangiran Museum and protected from potential damage. This research develops an Augmented Reality application for the museum that displays virtual information about the ancient objects on exhibit. The content includes information as text, audio, and animated 3D models representing the ancient objects. The study emphasizes the markerless 3D recognition process using the Vuforia Augmented Reality (AR) system, so that visitors can view exhibition objects from different viewpoints. Test results show that, by registering image targets at 25° angle intervals, markerless 3D keypoint features can be detected from different viewpoints. The device must meet minimum specifications of a 1.2 GHz dual-core processor, PowerVR SG5X GPU, 8 MP autofocus camera, and 1 GB of memory to run the application. The average detection success for museum exhibits was 40% for single-view 3D markerless targets, 86% for multiview targets (angles 0°-180°), and 100% for multiview targets (angles 0°-360°). Detection works at distances between 23 cm and 540 cm, with an average response time of 12 seconds.

  10. Virtual Environment Computer Simulations to Support Human Factors Engineering and Operations Analysis for the RLV Program

    NASA Technical Reports Server (NTRS)

    Lunsford, Myrtis Leigh

    1998-01-01

    The Army-NASA Virtual Innovations Laboratory (ANVIL) was recently created to provide virtual reality tools for performing Human Engineering and operations analysis for both NASA and the Army. The author's summer research project consisted of developing and refining these tools for NASA's Reusable Launch Vehicle (RLV) program. Several general simulations were developed for use by the ANVIL for the evaluation of the X34 Engine Changeout procedure. These simulations were developed with the software tool dVISE 4.0.0 produced by Division Inc. All software was run on an SGI Indigo2 High Impact. This paper describes the simulations, various problems encountered with the simulations, other summer activities, and possible work for the future. We first begin with a brief description of virtual reality systems.

  11. Will Anything Useful Come Out of Virtual Reality? Examination of a Naval Application

    DTIC Science & Technology

    1993-05-01

    The term virtual reality can encompass varying meanings, but some generally accepted attributes of a virtual environment are that it is immersive ... technology, but at present there are few practical applications which utilize the broad range of virtual reality technology. This paper will discuss an ... Keywords: operability, operator functions, virtual reality, man-machine interface, decision aids/decision making, decision support, ASW.

  12. A 3D virtual reality ophthalmoscopy trainer.

    PubMed

    Wilson, Andrew S; O'Connor, Jake; Taylor, Lewis; Carruthers, David

    2017-12-01

    Performing eye examinations is an important clinical skill that medical students often find difficult to become proficient in. This paper describes the development and evaluation of an innovative 3D virtual reality (VR) training application to support learning these skills. The VR ophthalmoscope was developed by a clinical team and a technologist using the Unity game engine, a smartphone and a virtual reality headset. It has a series of tasks that include performing systematic eye examinations, identifying common eye pathologies and a knowledge quiz. As part of their clinical training, 15 fourth-year medical students were surveyed for their views on this teaching approach. The Technology Acceptance Model was used to evaluate perceived usefulness and ease of use. Data were also collected on the usability of the app, together with the students' written comments about it. Users agreed that the teaching approach improved their understanding of ophthalmoscopy (n = 14), their ability to identify landmarks in the eye (n = 14) and their ability to recognise abnormalities (n = 15). They found the app easy to use (n = 15), the teaching approach informative (n = 13) and that it would increase students' confidence when performing these tasks in future (n = 15). The evaluation showed that a VR app can successfully simulate the processes involved in performing eye examinations. The app was highly rated for all elements of perceived usefulness, ease of use and usability. Medical students stated that they would like to be taught other medical skills in this way in future. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  13. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

    As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile, Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments, enabling intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production.
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.

  14. Effectiveness of the Virtual Reality System Toyra on Upper Limb Function in People with Tetraplegia: A Pilot Randomized Clinical Trial.

    PubMed

    Dimbwadyo-Terrer, I; Gil-Agudo, A; Segura-Fragoso, A; de los Reyes-Guzmán, A; Trincado-Alonso, F; Piazza, S; Polonio-López, B

    2016-01-01

    The aim of this study was to investigate the effects of a virtual reality program combined with conventional therapy on upper limb function in people with tetraplegia, and to provide data about patients' satisfaction with the virtual reality system. Thirty-one people with subacute complete cervical tetraplegia participated in the study. The experimental group received 15 sessions with the Toyra® virtual reality system over 5 weeks (30 minutes/day, 3 days/week) in addition to conventional therapy, while the control group received only conventional therapy. All patients were assessed at baseline, after the intervention, and at three-month follow-up with a battery of clinical, functional, and satisfaction scales. The control group showed significant improvements in the manual muscle test (p = 0.043, partial η² = 0.22) at the follow-up evaluation. Both groups demonstrated clinical, but nonsignificant, changes in arm function on 4 of the 5 scales used. All patients showed a high level of satisfaction with the virtual reality system. This study showed that virtual reality added to conventional therapy produces results in upper limb function similar to conventional therapy alone. Moreover, the gaming aspects incorporated into conventional rehabilitation appear to produce high motivation during execution of the assigned tasks. This trial is registered with EudraCT number 2015-002157-35.
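
    The partial η² reported above is a standard ANOVA effect-size measure: the effect's sum of squares divided by the sum of the effect and error sums of squares. A minimal sketch (the sums of squares below are illustrative values chosen to reproduce 0.22, not the trial's actual data):

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta squared: proportion of variance attributable to an effect,
    partialling out other effects in the model."""
    return ss_effect / (ss_effect + ss_error)

# Illustrative sums of squares (hypothetical, not the study's data):
eta_p2 = partial_eta_squared(ss_effect=11.0, ss_error=39.0)
print(round(eta_p2, 2))  # 0.22
```

    Values near 0.14 and above are conventionally read as large effects, which is consistent with the abstract describing the manual muscle test improvement as significant.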

  16. Cognitive training on stroke patients via virtual reality-based serious games.

    PubMed

    Gamito, Pedro; Oliveira, Jorge; Coelho, Carla; Morais, Diogo; Lopes, Paulo; Pacheco, José; Brito, Rodrigo; Soares, Fabio; Santos, Nuno; Barata, Ana Filipa

    2017-02-01

    Use of virtual reality environments in cognitive rehabilitation offers cost benefits and other advantages. In order to test the effectiveness of a virtual reality application for neuropsychological rehabilitation, a cognitive training program using virtual reality was applied to stroke patients. A virtual reality-based serious games application for cognitive training was developed, with attention and memory tasks consisting of daily life activities. Twenty stroke patients were randomly assigned to two conditions: exposure to the intervention, and waiting-list control. The results showed significant improvements in attention and memory functions in the intervention group, but not in the controls. Overall findings provide further support for the use of VR cognitive training applications in neuropsychological rehabilitation. Implications for rehabilitation: improvements in memory and attention functions following a virtual reality-based serious games intervention; training of daily-life activities using a virtual reality application; accessibility of training contents.

  17. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.
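
    The software-bus message passing among the three cooperating entities can be illustrated with a generic publish/subscribe sketch. This is not the VEOS or PCLIPS API; the topic and message names are hypothetical:

```python
from collections import defaultdict

class SoftwareBus:
    """Toy publish/subscribe bus: entities register handlers per topic and
    post messages that fan out to every subscriber of that topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = SoftwareBus()
log = []
# Two of the three cooperating entities (simulator and 3D display) listen
# for control messages produced by the user-interface expert:
bus.subscribe("factory/control", lambda m: log.append(("simulator", m)))
bus.subscribe("factory/control", lambda m: log.append(("display", m)))
bus.publish("factory/control", {"action": "move", "object": "log-3"})
print(log)
```

    The design point this illustrates is decoupling: the interface expert needs no knowledge of which entities consume its control messages, which is what lets the simulator be swapped for a link to the real factory.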

  18. The Potential of Using Virtual Reality Technology in Physical Activity Settings

    ERIC Educational Resources Information Center

    Pasco, Denis

    2013-01-01

    In recent years, virtual reality technology has been successfully used for learning purposes. The purposes of the article are to examine current research on the role of virtual reality in physical activity settings and discuss potential application of using virtual reality technology to enhance learning in physical education. The article starts…

  19. Education about Hallucinations Using an Internet Virtual Reality System: A Qualitative Survey

    ERIC Educational Resources Information Center

    Yellowlees, Peter M.; Cook, James N.

    2006-01-01

    Objective: The authors evaluate an Internet virtual reality technology as an education tool about the hallucinations of psychosis. Method: This is a pilot project using Second Life, an Internet-based virtual reality system, in which a virtual reality environment was constructed to simulate the auditory and visual hallucinations of two patients…

  20. Virtual reality measures in neuropsychological assessment: a meta-analytic review.

    PubMed

    Neguț, Alexandra; Matu, Silviu-Andrei; Sava, Florin Alin; David, Daniel

    2016-02-01

    Virtual reality-based assessment is a new paradigm for neuropsychological evaluation that might provide a more ecological assessment compared to paper-and-pencil or computerized neuropsychological testing. Previous research has focused on the use of virtual reality in neuropsychological assessment, but no meta-analysis has examined the sensitivity of virtual reality-based measures of cognitive processes across various populations. We found eighteen studies that compared cognitive performance between clinical groups and healthy controls on virtual reality measures. Based on a random-effects model, the results indicated a large effect size in favor of healthy controls (g = .95). For executive functions, memory and visuospatial analysis, subgroup analyses revealed moderate to large effect sizes, with superior performance in the case of healthy controls. Participants' mean age, type of clinical condition, type of exploration within virtual reality environments, and the presence of distractors were significant moderators. Our findings support the sensitivity of virtual reality-based measures in detecting cognitive impairment. They highlight the possibility of using virtual reality measures for neuropsychological assessment in research applications, as well as in clinical practice.
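
    The pooled effect size g = .95 is Hedges' g, a standardized mean difference with a small-sample bias correction. As a sketch, the per-study g that feeds such a meta-analysis can be computed from group summary statistics (the numbers below are hypothetical; the random-effects pooling step, which weights studies by inverse variance, is not shown):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with small-sample correction (Hedges' g)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # small-sample correction factor J
    return j * d

# Hypothetical cognitive-score summaries (healthy controls vs. clinical group):
g = hedges_g(m1=52.0, s1=10.0, n1=30, m2=43.0, s2=10.0, n2=30)
print(round(g, 2))
```

    By the usual conventions, g around 0.2 is small, 0.5 medium, and 0.8 or more large, which is why the review describes .95 as a large effect favoring healthy controls.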

  1. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module and an interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion of spilled oil from the sea bottom to the surface. Environmental factors such as ocean currents and wind fields are considered in this simulation. On this platform, the oil-spill process is abstracted as the movement of abundant oil particles. The results show that the oil particles blend well with the water and that the platform meets the requirements for real-time, interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
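
    The particle-based oil-drift abstraction can be illustrated with a minimal 2D sketch: each particle is advected by the current plus a fraction of the wind, with a random-walk term standing in for turbulent diffusion. The velocities, the 3% wind-drift factor and the diffusion strength are illustrative assumptions, not VV-Ocean's actual model:

```python
import random

def advect(particles, current, wind, dt, wind_factor=0.03, diff=0.5):
    """One explicit Euler step for a cloud of surface oil particles (m, m/s, s)."""
    out = []
    for x, y in particles:
        # Deterministic drift: current plus a small wind-driven component
        u = current[0] + wind_factor * wind[0]
        v = current[1] + wind_factor * wind[1]
        # Stochastic spreading approximating turbulent diffusion
        out.append((x + u * dt + random.gauss(0, diff),
                    y + v * dt + random.gauss(0, diff)))
    return out

random.seed(0)
cloud = [(0.0, 0.0)] * 100          # all particles released at the origin
for _ in range(10):                 # ten one-minute steps
    cloud = advect(cloud, current=(0.2, 0.0), wind=(5.0, 2.0), dt=60.0)
cx = sum(x for x, _ in cloud) / len(cloud)
print(round(cx, 1))                 # centroid drifts with the mean flow
```

    Rendering simply maps each particle to a sprite or blended texel each frame, which is why the approach stays interactive even for large particle counts.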

  2. Geovisualisation of relief in a virtual reality system on the basis of low-level aerial imagery

    NASA Astrophysics Data System (ADS)

    Halik, Łukasz; Smaczyński, Maciej

    2017-12-01

    The aim of the following paper was to present the geomatic process of transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAVs) into a digital terrain model (DTM) and implementing the model in a virtual reality (VR) system. The object of the study was a natural aggregate heap of irregular shape with height differences of up to 11 m. Based on the obtained photos, three point clouds (varying in level of detail) were generated for the 20,000 m² area. For further analyses, the researchers selected the point cloud with the best ratio of accuracy to output file size. This choice was made on the basis of seven control points of the heap surveyed in the field and the corresponding points in the generated 3D model. The differences of several centimetres obtained between the control points in the field and those from the model attest to the usefulness of the described workflow for creating large-scale DTMs for engineering purposes. Finally, the chosen model was implemented in the VR system, which enables lifelike exploration of 3D terrain relief in real time thanks to a first-person view (FPV) mode. In this mode, the user observes an object through a head-mounted display (HMD), experiencing the geovisualisation from the inside and virtually analysing the terrain as the direct animator of the observations.
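    The accuracy check against surveyed control points amounts to computing per-point 3-D distances and an RMSE. A minimal sketch with hypothetical coordinates follows (the study used seven control points; four made-up ones are shown here):

```python
import math

def control_point_errors(field_pts, model_pts):
    """Per-point 3-D distances and RMSE between surveyed control points
    and their counterparts in the reconstructed model (coordinates in m)."""
    dists = [math.dist(f, m) for f, m in zip(field_pts, model_pts)]
    rmse = math.sqrt(sum(d * d for d in dists) / len(dists))
    return dists, rmse

# Hypothetical field-surveyed (x, y, z) coordinates and matched model points
field = [(0, 0, 0), (10, 0, 1), (10, 10, 5), (0, 10, 11)]
model = [(0.02, -0.01, 0.03), (10.04, 0.02, 1.01),
         (9.97, 10.03, 5.05), (0.01, 10.02, 11.04)]
dists, rmse = control_point_errors(field, model)
```

    Several-centimetre distances, as in the paper, would show up here as values of a few hundredths of a metre.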

  3. Use of Virtual Reality Tools for Vestibular Disorders Rehabilitation: A Comprehensive Analysis.

    PubMed

    Bergeron, Mathieu; Lortie, Catherine L; Guitton, Matthieu J

    2015-01-01

    Classical peripheral vestibular disorders rehabilitation is a long and costly process. While virtual reality settings have repeatedly been suggested as possible tools to help the rehabilitation process, no systematic study has been conducted so far. We systematically reviewed the current literature to analyze the published protocols documenting the use of virtual reality settings for peripheral vestibular disorders rehabilitation. There is considerable diversity in the settings and protocols involving virtual reality for the treatment of this pathology, and evaluation of symptoms is often not standardized. However, our results unveil a clear effect of virtual reality-based rehabilitation on patients' symptoms, assessed by objective tools such as the DHI (mean decrease of 27 points), shifting the perceived handicap of symptoms from a moderate to a mild impact on life. Furthermore, we detected a relationship between the duration of exposure to virtual reality environments and the magnitude of the therapeutic effects, suggesting that virtual reality treatments should last at least 150 minutes of cumulative exposure to ensure positive outcomes. Virtual reality offers a pleasant and safe environment for the patient. Future studies should standardize evaluation tools, further document putative side effects, compare virtual reality to conventional physical therapy, and evaluate the economic costs/benefits of such strategies.

  4. Use of Virtual Reality Tools for Vestibular Disorders Rehabilitation: A Comprehensive Analysis

    PubMed Central

    Bergeron, Mathieu; Lortie, Catherine L.; Guitton, Matthieu J.

    2015-01-01

    Classical peripheral vestibular disorders rehabilitation is a long and costly process. While virtual reality settings have repeatedly been suggested as possible tools to help the rehabilitation process, no systematic study has been conducted so far. We systematically reviewed the current literature to analyze the published protocols documenting the use of virtual reality settings for peripheral vestibular disorders rehabilitation. There is considerable diversity in the settings and protocols involving virtual reality for the treatment of this pathology, and evaluation of symptoms is often not standardized. However, our results unveil a clear effect of virtual reality-based rehabilitation on patients' symptoms, assessed by objective tools such as the DHI (mean decrease of 27 points), shifting the perceived handicap of symptoms from a moderate to a mild impact on life. Furthermore, we detected a relationship between the duration of exposure to virtual reality environments and the magnitude of the therapeutic effects, suggesting that virtual reality treatments should last at least 150 minutes of cumulative exposure to ensure positive outcomes. Virtual reality offers a pleasant and safe environment for the patient. Future studies should standardize evaluation tools, further document putative side effects, compare virtual reality to conventional physical therapy, and evaluate the economic costs/benefits of such strategies. PMID:26556560

  5. Virtual reality training improves balance function.

    PubMed

    Mao, Yurong; Chen, Peiming; Li, Le; Huang, Dongfeng

    2014-09-01

    Virtual reality is a new technology that simulates a three-dimensional virtual world on a computer and enables the generation of visual, audio, and haptic feedback for the full immersion of users. Users can interact with and observe objects in three-dimensional visual space without limitation. At present, virtual reality training is widely used in rehabilitation therapy for balance dysfunction. This paper summarizes the literature suggesting that virtual reality training can improve balance dysfunction in patients with neurological diseases. When patients perform virtual reality training, the prefrontal and parietal cortical areas and other motor cortical networks are activated. These activations may be involved in the reconstruction of neurons in the cerebral cortex. Growing evidence from clinical studies reveals that virtual reality training improves the neurological function of patients with spinal cord injury, cerebral palsy and other neurological impairments. These findings suggest that virtual reality training can activate the cerebral cortex and improve the spatial orientation capacity of patients, thus facilitating the cortex's control of balance and improving motor function.

  6. Virtual reality training improves balance function

    PubMed Central

    Mao, Yurong; Chen, Peiming; Li, Le; Huang, Dongfeng

    2014-01-01

    Virtual reality is a new technology that simulates a three-dimensional virtual world on a computer and enables the generation of visual, audio, and haptic feedback for the full immersion of users. Users can interact with and observe objects in three-dimensional visual space without limitation. At present, virtual reality training is widely used in rehabilitation therapy for balance dysfunction. This paper summarizes the literature suggesting that virtual reality training can improve balance dysfunction in patients with neurological diseases. When patients perform virtual reality training, the prefrontal and parietal cortical areas and other motor cortical networks are activated. These activations may be involved in the reconstruction of neurons in the cerebral cortex. Growing evidence from clinical studies reveals that virtual reality training improves the neurological function of patients with spinal cord injury, cerebral palsy and other neurological impairments. These findings suggest that virtual reality training can activate the cerebral cortex and improve the spatial orientation capacity of patients, thus facilitating the cortex's control of balance and improving motor function. PMID:25368651

  7. Training software using virtual-reality technology and pre-calculated effective dose data.

    PubMed

    Ding, Aiping; Zhang, Di; Xu, X George

    2009-05-01

    This paper describes the development of a software package, called VR Dose Simulator, which aims to provide interactive radiation safety and ALARA training to radiation workers using virtual-reality (VR) simulations. Combined with a pre-calculated effective dose equivalent (EDE) database, a virtual radiation environment was constructed in the VR authoring software EON Studio, using 3-D models of a real nuclear power plant building. Models of avatars representing two workers were adopted, with the arms and legs of each avatar controlled in the software to simulate walking and other postures. Collision detection algorithms were developed for various parts of the 3-D power plant building and the avatars to confine the avatars to certain regions of the virtual environment. Ten different camera viewpoints were assigned to conveniently cover the entire virtual scenery from different viewing angles. A user can control the avatar to carry out radiological engineering tasks using two modes of avatar navigation, and can specify two types of radiation source: Cs and Co. The location of the avatar inside the virtual environment during the course of the avatar's movement is linked to the EDE database. The accumulated dose is calculated and displayed on the screen in real time. Based on the final accumulated dose and the completion status of all virtual tasks, a score is given to evaluate the performance of the user. The paper concludes that VR-based simulation technologies are interactive and engaging, and thus potentially useful in improving the quality of radiation safety training. The paper also summarizes several challenges: more streamlined data conversion, more realistic avatar movement and posture, more intuitive implementation of the data communication between EON Studio and VB.NET, and more versatile utilization of EDE data (such as a source near the body), all of which need to be addressed in future efforts to develop this type of software.
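    Real-time dose accumulation against a pre-calculated dose database can be sketched as a lookup along the avatar's path. Every name, unit and value below is hypothetical, illustrating the idea rather than the VR Dose Simulator's actual EDE data format.

```python
def accumulate_dose(path, dose_rate_map, dt, default_rate=0.0):
    """Integrate effective dose along an avatar's walking path.

    `dose_rate_map` maps a discretized (x, y) cell to a pre-calculated
    dose rate (mSv/h); `dt` is the time spent per step in seconds.
    """
    total = 0.0
    for x, y in path:
        cell = (round(x), round(y))          # snap position to the lookup grid
        rate = dose_rate_map.get(cell, default_rate)
        total += rate * dt / 3600.0          # mSv accumulated during this step
    return total

# Hypothetical grid of pre-computed dose rates, hotter near the source
rates = {(0, 0): 0.5, (1, 0): 2.0, (2, 0): 8.0}
walk = [(0.1, 0.0), (0.9, 0.1), (2.1, -0.2)]  # avatar positions over time
dose = accumulate_dose(walk, rates, dt=30.0)
```

    Keeping the dose field pre-computed is what makes the real-time display cheap: each frame costs one table lookup rather than a transport calculation.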

  8. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial, with others encoded as shapes, colors, sizes, etc.) in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D game engine, coded using C#, JavaScript, and the Unity scripting language. This visualization tool can be used through a standard web browser or through a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and XYZ positions encode various dimensions of the parameter space, and these mappings can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own independent vantage points or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.
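    Encoding ~ 7 - 8 data dimensions into a 3D glyph amounts to mapping catalog columns onto visual channels. The channel assignments and field names below are hypothetical, not the tool's actual scheme:

```python
def encode_point(row):
    """Map one catalog row to the visual attributes of a 3-D glyph.

    Position carries three dimensions; shape, color and size carry the rest
    (all column names and scalings here are illustrative).
    """
    shapes = ["sphere", "cube", "cone"]
    return {
        "position": (row["ra"], row["dec"], row["redshift"]),  # 3 spatial dims
        "shape": shapes[int(row["class"]) % len(shapes)],      # categorical dim
        "color": (min(row["mag"] / 25.0, 1.0), 0.2, 0.5),      # magnitude -> red
        "size": 0.5 + row["flux_err"],                         # uncertainty dim
    }

glyph = encode_point({"ra": 150.1, "dec": 2.2, "redshift": 0.7,
                      "class": 2, "mag": 21.5, "flux_err": 0.1})
```

    Making these mappings reassignable at run time, as the abstract describes, lets users probe different slices of the parameter space without reloading the data.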

  9. Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.

    PubMed

    Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J

    2011-11-01

    To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Virtual reality for improving balance in patients after stroke: A systematic review and meta-analysis.

    PubMed

    Li, Zhen; Han, Xiu-Guo; Sheng, Jing; Ma, Shao-Jun

    2016-05-01

    To evaluate the effectiveness of virtual reality interventions for improving balance in people after stroke. Systematic review and meta-analysis of randomized controlled trials. Studies were obtained by searching the following databases: MEDLINE, CINAHL, EMBASE, Web of Science and CENTRAL. Two reviewers assessed studies for inclusion, extracted data and assessed trial quality. Sixteen studies involving 428 participants were included. People who received virtual reality interventions showed marked improvements on the Berg Balance Scale (mean difference: 1.46, 95% confidence interval: 0.09 to 2.83, P<0.05, I²=0%) and the Timed Up and Go Test (mean difference: -1.62, 95% confidence interval: -3.07 to -0.16, P<0.05, I²=24%) compared with controls. This meta-analysis of randomized controlled trials supports the use of virtual reality to improve balance after stroke. © The Author(s) 2015.
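    Pooling per-trial mean differences reported with 95% confidence intervals, as in the Berg Balance Scale result above, can be sketched with inverse-variance weighting. This is a fixed-effect sketch (the review's own model may differ), and the three trials below are hypothetical.

```python
def pool_mean_differences(studies):
    """Fixed-effect inverse-variance pooling of mean differences,
    each study given as (md, ci_low, ci_high).  Illustrative only."""
    weights, weighted = [], []
    for md, lo, hi in studies:
        se = (hi - lo) / (2 * 1.96)       # recover the SE from the 95% CI
        w = 1 / se ** 2
        weights.append(w)
        weighted.append(w * md)
    pooled = sum(weighted) / sum(weights)
    se_pooled = (1 / sum(weights)) ** 0.5
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical Berg Balance Scale mean differences from three trials
md, ci = pool_mean_differences([(1.2, -0.4, 2.8), (2.0, 0.1, 3.9),
                                (1.1, -0.9, 3.1)])
```

    Recovering standard errors from symmetric confidence intervals is a standard trick when trials report only the interval, not the SE itself.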

  11. Data streaming in telepresence environments.

    PubMed

    Lamboray, Edouard; Würmlin, Stephan; Gross, Markus

    2005-01-01

    In this paper, we discuss data transmission in telepresence environments for collaborative virtual reality applications. We analyze data streams in the context of networked virtual environments and classify them according to their traffic characteristics. Special emphasis is put on geometry-enhanced (3D) video. We review architectures for real-time 3D video pipelines and derive theoretical bounds on the minimal system latency as a function of the transmission and processing delays. Furthermore, we discuss bandwidth issues of differential update coding for 3D video. In our telepresence system, the blue-c, we use a point-based 3D video technology which allows for differentially encoded 3D representations of human users. While we discuss the considerations which led to the design of our three-stage 3D video pipeline, we also elucidate some critical implementation details regarding the decoupling of acquisition, processing and rendering frame rates, and audio/video synchronization. Finally, we demonstrate the communication and networking features of the blue-c system in its full deployment. We show how the system can be controlled to cope with processing or networking bottlenecks by adapting multiple system components such as audio, application data, and 3D video.
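    A lower bound on end-to-end latency as a function of transmission and processing delays, in the spirit of the analysis above, can be sketched as follows. All stage delays and bandwidth figures are hypothetical, not the blue-c system's measured values.

```python
def minimal_latency_ms(stage_delays_ms, bytes_per_frame, bandwidth_bps):
    """Lower bound on end-to-end latency for one streamed 3D video frame:
    the sum of per-stage processing delays plus the transmission time of
    one differential update over the available bandwidth."""
    transmission_ms = bytes_per_frame * 8 / bandwidth_bps * 1000
    return sum(stage_delays_ms) + transmission_ms

# Hypothetical stage delays: acquisition, 3D reconstruction, encoding, rendering
latency = minimal_latency_ms([33.3, 15.0, 5.0, 16.7],
                             bytes_per_frame=120_000,
                             bandwidth_bps=100_000_000)
```

    Pipelining hides throughput costs but not latency: even with every stage running concurrently, one frame still traverses each stage in turn, so the sum of the stage delays remains the floor.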

  12. Virtual Realities and the Future of Text.

    ERIC Educational Resources Information Center

    Marcus, Stephen

    1992-01-01

    Discusses issues surrounding virtual reality and "virtual books." Suggests that those who are exploring the territory of virtual realities are already helping to expand and enrich expectations and visions for integrating technology into reading and writing. (RS)

  13. Mixed reality ventriculostomy simulation: experience in neurosurgical residency.

    PubMed

    Hooten, Kristopher G; Lister, J Richard; Lombard, Gwen; Lizdas, David E; Lampotang, Samsun; Rajon, Didier A; Bova, Frank; Murad, Gregory J A

    2014-12-01

    Medicine and surgery are turning toward simulation to compensate for the limited patient interaction available during residency training. Many simulators today use virtual reality with augmented haptic feedback and little to no physical elements. In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered during the procedure, with superimposed 3-D virtual elements for the neuroanatomical structures. To introduce the ventriculostomy simulator and its validation as a necessary training tool in neurosurgical residency. We tested the simulator with more than 260 residents. An algorithm combining time and accuracy was used to grade performance. Voluntary postperformance surveys were used to evaluate the experience. The results demonstrate that more experienced residents had statistically significantly better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. This mixed reality simulator provides a real-life experience and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard whereby incoming residents must demonstrate proficiency and skill on the simulator before their first interaction with a patient.

  14. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides a discussion of the technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of any means of synchronizing multiple devices, which limits their suitability for 3-D motion data capture. Moreover, the standard video format is interlaced, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  15. Using Virtual Reality Environment to Improve Joint Attention Associated with Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Cheng, Yufang; Huang, Ruowen

    2012-01-01

    The focus of this study is using a data glove to practice joint attention skills in a virtual reality environment for people with pervasive developmental disorder (PDD). The virtual reality environment provides a safe setting for people with PDD. In particular, when they make errors during practice in the virtual reality environment, there is no suffering or…

  16. Naval Applications of Virtual Reality,

    DTIC Science & Technology

    1993-01-01

    Virtual Reality Special Report, pp. 67-72. Subject terms: man-machine interface; virtual reality; decision support. By Mark Gembicki and David Rousseau. (The remainder of the scanned abstract is illegible.)

  17. Immersive and interactive virtual reality to improve learning and retention of neuroanatomy in medical students: a randomized controlled study.

    PubMed

    Ekstrand, Chelsea; Jamal, Ali; Nguyen, Ron; Kudryk, Annalise; Mann, Jennifer; Mendez, Ivar

    2018-02-23

    Spatial 3-dimensional understanding of the brain is essential to learning neuroanatomy, and 3-dimensional learning techniques have been proposed as tools to enhance neuroanatomy training. The aim of this study was to examine the impact of immersive virtual-reality neuroanatomy training and compare it to traditional paper-based methods. In this randomized controlled study, participants consisted of first- or second-year medical students from the University of Saskatchewan recruited via email and posters displayed throughout the medical school. Participants were randomly assigned to the virtual-reality group or the paper-based group and studied the spatial relations between neural structures for 12 minutes after performing a neuroanatomy baseline test, with both test and control questions. A postintervention test was administered immediately after the study period and 5-9 days later. Satisfaction measures were obtained. Of the 66 participants randomly assigned to the study groups, 64 were included in the final analysis, 31 in the virtual-reality group and 33 in the paper-based group. The 2 groups performed comparably on the baseline questions and showed significant performance improvement on the test questions following study. There were no significant differences between groups for the control questions, the postintervention test questions or the 7-day postintervention test questions. Satisfaction survey results indicated that neurophobia was decreased. Results from this study provide evidence that training in neuroanatomy in an immersive and interactive virtual-reality environment may be an effective neuroanatomy learning tool that warrants further study. They also suggest that integration of virtual-reality into neuroanatomy training may improve knowledge retention, increase study motivation and decrease neurophobia. Copyright 2018, Joule Inc. or its licensors.

  18. Immersive and interactive virtual reality to improve learning and retention of neuroanatomy in medical students: a randomized controlled study

    PubMed Central

    Ekstrand, Chelsea; Jamal, Ali; Nguyen, Ron; Kudryk, Annalise; Mann, Jennifer; Mendez, Ivar

    2018-01-01

    Background: Spatial 3-dimensional understanding of the brain is essential to learning neuroanatomy, and 3-dimensional learning techniques have been proposed as tools to enhance neuroanatomy training. The aim of this study was to examine the impact of immersive virtual-reality neuroanatomy training and compare it to traditional paper-based methods. Methods: In this randomized controlled study, participants consisted of first- or second-year medical students from the University of Saskatchewan recruited via email and posters displayed throughout the medical school. Participants were randomly assigned to the virtual-reality group or the paper-based group and studied the spatial relations between neural structures for 12 minutes after performing a neuroanatomy baseline test, with both test and control questions. A postintervention test was administered immediately after the study period and 5-9 days later. Satisfaction measures were obtained. Results: Of the 66 participants randomly assigned to the study groups, 64 were included in the final analysis, 31 in the virtual-reality group and 33 in the paper-based group. The 2 groups performed comparably on the baseline questions and showed significant performance improvement on the test questions following study. There were no significant differences between groups for the control questions, the postintervention test questions or the 7-day postintervention test questions. Satisfaction survey results indicated that neurophobia was decreased. Interpretation: Results from this study provide evidence that training in neuroanatomy in an immersive and interactive virtual-reality environment may be an effective neuroanatomy learning tool that warrants further study. They also suggest that integration of virtual-reality into neuroanatomy training may improve knowledge retention, increase study motivation and decrease neurophobia. PMID:29510979

  19. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world, to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. The presented human-machine interface, which includes virtual reality display and interaction techniques, can support radiologists in their work. Furthermore, virtual environments can even ease communication between specialists from different fields, or serve in educational and training applications.

  20. Virtual reality training for improving the skills needed for performing surgery of the ear, nose or throat.

    PubMed

    Piromchai, Patorn; Avery, Alex; Laopaiboon, Malinee; Kennedy, Gregor; O'Leary, Stephen

    2015-09-09

    Virtual reality simulation uses computer-generated imagery to present a simulated training environment for learners. This review seeks to examine whether there is evidence to support the introduction of virtual reality surgical simulation into ear, nose and throat surgical training programmes. 1. To assess whether surgeons undertaking virtual reality simulation-based training achieve surgical ('patient') outcomes that are at least as good as, or better than, those achieved through conventional training methods. 2. To assess whether there is evidence from either the operating theatre, or from controlled (simulation centre-based) environments, that virtual reality-based surgical training leads to surgical skills that are comparable to, or better than, those achieved through conventional training. The Cochrane Ear, Nose and Throat Disorders Group (CENTDG) Trials Search Co-ordinator searched the CENTDG Trials Register; Central Register of Controlled Trials (CENTRAL 2015, Issue 6); PubMed; EMBASE; ERIC; CINAHL; Web of Science; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 27 July 2015. We included all randomised controlled trials and controlled trials comparing virtual reality training and any other method of training in ear, nose or throat surgery. We used the standard methodological procedures expected by The Cochrane Collaboration. We evaluated both technical and non-technical aspects of skill competency. We included nine studies involving 210 participants. Of these, four studies (involving 61 residents) assessed technical skills in the operating theatre (primary outcomes). Five studies (comprising 149 residents and medical students) assessed technical skills in controlled environments (secondary outcomes). The majority of the trials were at high risk of bias. We assessed the GRADE quality of evidence for most outcomes across studies as 'low'.
Operating theatre environment (primary outcomes) In the operating theatre, there were no studies that examined two of three primary outcomes: real world patient outcomes and acquisition of non-technical skills. The third primary outcome (technical skills in the operating theatre) was evaluated in two studies comparing virtual reality endoscopic sinus surgery training with conventional training. In one study, psychomotor skill (which relates to operative technique or the physical co-ordination associated with instrument handling) was assessed on a 10-point scale. A second study evaluated the procedural outcome of time-on-task. The virtual reality group performance was significantly better, with a better psychomotor score (mean difference (MD) 1.66, 95% CI 0.52 to 2.81; 10-point scale) and a shorter time taken to complete the operation (MD -5.50 minutes, 95% CI -9.97 to -1.03). Controlled training environments (secondary outcomes) In a controlled environment five studies evaluated the technical skills of surgical trainees (one study) and medical students (three studies). One study was excluded from the analysis. Surgical trainees: One study (80 participants) evaluated the technical performance of surgical trainees during temporal bone surgery, where the outcome was the quality of the final dissection. There was no difference in the end-product scores between virtual reality and cadaveric temporal bone training. Medical students: Two other studies (40 participants) evaluated technical skills achieved by medical students in the temporal bone laboratory. Learners' knowledge of the flow of the operative procedure (procedural score) was better after virtual reality than conventional training (SMD 1.11, 95% CI 0.44 to 1.79). There was also a significant difference in end-product score between the virtual reality and conventional training groups (SMD 2.60, 95% CI 1.71 to 3.49). 
One study (17 participants) revealed that medical students acquired anatomical knowledge (on a scale of 0 to 10) better during virtual reality than during conventional training (MD 4.3, 95% CI 2.05 to 6.55). No studies in a controlled training environment assessed non-technical skills. There is limited evidence to support the inclusion of virtual reality surgical simulation into surgical training programmes, on the basis that it can allow trainees to develop technical skills that are at least as good as those achieved through conventional training. Further investigations are required to determine whether virtual reality training is associated with better real world outcomes for patients and the development of non-technical skills. Virtual reality simulation may be considered as an additional learning tool for medical students.

  1. Virtual Reality: An Overview.

    ERIC Educational Resources Information Center

    Franchi, Jorge

    1994-01-01

    Highlights of this overview of virtual reality include optics; interface devices; virtual worlds; potential applications, including medicine and archaeology; problems, including costs; current research and development; future possibilities; and a listing of vendors and suppliers of virtual reality products. (Contains 11 references.) (LRW)

  2. 3D movies for teaching seafloor bathymetry, plate tectonics, and ocean circulation in large undergraduate classes

    NASA Astrophysics Data System (ADS)

    Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.; Hamann, B.; Kellogg, L. H.; Kreylos, O.; Kronenberger, M.; Spero, H. J.; Streletz, G. J.; Weber, C.

    2015-12-01

Geologic problems and datasets are often 3D or 4D in nature, yet they are projected onto a 2D surface such as a piece of paper or a projection screen. Reducing the dimensionality of data forces readers to "fill in" the collapsed dimension in their minds, a cognitive challenge especially for new learners. Scientists and students can visualize and manipulate 3D datasets using the virtual reality software developed for the immersive, real-time interactive 3D environment at the KeckCAVES at UC Davis. The 3DVisualizer software (Billen et al., 2008) can also operate on a desktop machine to produce interactive 3D maps of earthquake epicenter locations and 3D bathymetric maps of the seafloor. With 3D projections of seafloor bathymetry and ocean circulation proxy datasets in a virtual reality environment, we can create visualizations of carbon isotope (δ13C) records for academic research and to aid in demonstrating thermohaline circulation in the classroom. Additionally, 3D visualization of seafloor bathymetry allows students to see features of the seafloor that most people cannot observe first-hand. To enhance lessons on mid-ocean ridges and ocean basin genesis, we have created movies of seafloor bathymetry for a large-enrollment undergraduate-level class, Introduction to Oceanography. In the past four quarters, students have enjoyed watching 3D movies, and in the fall quarter (2015) we will assess how well they enhance learning. The class will be split into two groups: one will learn about the Mid-Atlantic Ridge from diagrams and lecture, and the other with a supplemental 3D visualization. Both groups will be asked "what does the seafloor look like?" before and after the Mid-Atlantic Ridge lesson. Then the whole class will watch the 3D movie and respond to an additional question, "did the 3D visualization enhance your understanding of the Mid-Atlantic Ridge?", with the opportunity to further elaborate on the effectiveness of the visualization.

  3. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms', or 3-D surface numerical tables of coordinate values, capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' working as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees, and more, for the robot could quantify its 3-D sensing, would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and computationally combine ultrasound, microradiography, and 3-D mini-borescopes to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative remote-sensing 'eyes', as well as by contact and proximity force-measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.

  4. Psychological benefits of virtual reality for patients in rehabilitation therapy.

    PubMed

    Chen, Chih-Hung; Jeng, Ming-Chang; Fung, Chin-Ping; Doong, Ji-Liang; Chuang, Tien-Yow

    2009-05-01

Objective: To investigate the psychological benefits of virtual reality in rehabilitation, since whether virtual rehabilitation is beneficial has not been determined. Design: An experimental group underwent therapy with a virtual-reality-based exercise bike, and a control group underwent the same therapy without virtual-reality equipment. Setting: Hospital laboratory. Participants: 30 patients suffering from spinal-cord injury. Intervention: A designed rehabilitation therapy. Main outcome measures: Endurance, Borg's rating-of-perceived-exertion scale, the Activation-Deactivation Adjective Check List (AD-ACL), and the Simulator Sickness Questionnaire. Results: The differences between the experimental and control groups were significant for AD-ACL calmness and tension. Conclusion: A virtual-reality-based rehabilitation program can ease patients' tension and induce calm.

  5. Augmented Reality Guidance for the Resection of Missing Colorectal Liver Metastases: An Initial Experience.

    PubMed

    Ntourakis, Dimitrios; Memeo, Ricardo; Soler, Luc; Marescaux, Jacques; Mutter, Didier; Pessaux, Patrick

    2016-02-01

Modern chemotherapy shrinks colorectal cancer liver metastases (CRLM) to such an extent that they may disappear from radiological imaging. Disappearing CRLM rarely represent a complete pathological remission and carry an important risk of recurrence. Augmented reality (AR) consists in the fusion of real-time patient images with a computer-generated 3D virtual patient model created from pre-operative medical imaging. The aim of this prospective pilot study is to investigate the potential of AR navigation as a tool to help locate and surgically resect missing CRLM. A 3D virtual anatomical model was created from thoracoabdominal CT scans using customary software (VR RENDER(®), IRCAD). The virtual model was superimposed on the operative field using an exoscope (VITOM(®), Karl Storz, Tuttlingen, Germany). Virtual and real images were manually registered in real time using a video mixer, based on external anatomical landmarks, with an estimated accuracy of 5 mm. This modality was tested in three patients with four missing CRLM, 12 to 24 mm in size, undergoing laparotomy after pre-operative oxaliplatin-based chemotherapy. AR display and fine registration were performed within 6 min. AR helped detect all four missing CRLM and guided their resection. In all cases the planned security margin of 1 cm was clear, and resections were confirmed to be R0 by pathology. There was no major postoperative morbidity or mortality. No local recurrence occurred in the follow-up period of 6-22 months. This initial experience suggests that AR may be a helpful navigation tool for the resection of missing CRLM.

  6. STS-61 crew utilizing Virtual Reality in training for HST repair mission

    NASA Image and Video Library

    1993-06-11

Astronaut Jeffrey A. Hoffman, one of four crewmembers for STS-61 who will conduct scheduled spacewalks during the flight, wears a special helmet and gloves designed to assist in proper positioning near the telescope while on the end of the robot arm. Crewmembers are utilizing a new virtual reality training aid which assists in refining positioning patterns for Space Shuttle Endeavour's Remote Manipulator System (RMS) (36890); Astronaut Claude Nicollier looks at a computer display of the Shuttle's robot arm movements as Thomas D. Akers and Kathryn C. Thornton, mission specialists, look on. Nicollier will be responsible for maneuvering the astronauts while they stand in a foot restraint on the end of the RMS arm (36891, 36894); Hoffman wears a special helmet and gloves designed to assist in proper positioning near the telescope while on the end of the robot arm (36892); Nicollier looks at a computer display of the Shuttle's robot arm movements as Akers looks on (36893); While (l-r) Astronauts Kenneth Bowersox, Kathryn Thornton, Richard O. Covey and Thomas D. Akers watch, Nicollier moves the robot arm to desired locations in the Shuttle's payload bay using the Virtual Reality program (36895); Bowersox takes his turn maneuvering the RMS while mission specialist Hoffman, wearing the Virtual Reality helmet, follows his own progress on the end of the robot arm. Crewmembers participating during the training session are (l-r) Astronauts Akers, Hoffman, Bowersox, Nicollier, Covey, and Thornton. In the background, David Homan, an engineer in the JSC Engineering Directorate's Automation and Robotics Division, looks on (36896).

  7. A Study on the Effect of Virtual Reality 3D Exploratory Education on Students' Creativity and Leadership

    ERIC Educational Resources Information Center

    Lin, Mike Tz-Yauw; Wang, Jau-Shyong; Kuo, Hui-Ming; Luo, Yuzhou

    2017-01-01

Applying educational ideas and concepts at education sites, through good educational policies, to complete educational tasks is important for developing personal potential. Centered on students, the educational objective is to cultivate multiple talents for the future society. In such a rapidly changing era, limited knowledge is not enough to cope with…

  8. Collateral Damage

    DTIC Science & Technology

    1978-10-01

stated, they are, in reality, indexed to a single aspect of the weapon phenomena, e.g., damage levels for airblast-sensitive objects are indexed to...of Burst Propagation Related Airblast Representation Target Altitude Weather (snow/rain) Terrain Temperature Air Pressure Around Structures 1. d) 3-38...with opaque material. Simply closing a shutter can be quite effective in virtually eliminating all possibility of interior fire starts from a single

  9. The Effect of 3D Virtual Reality on Sequential Time Perception among Deaf and Hard-of-Hearing Children

    ERIC Educational Resources Information Center

    Eden, Sigal

    2008-01-01

    Over the years deaf and hard-of-hearing children have been reported as having difficulty with time conception and, in particular, the proper arrangement of events in a logical, temporal order. The research examined whether deaf and hard-of-hearing children perceive a temporal sequence differently under different representational modes. We compared…

  10. Improving the Sequential Time Perception of Teenagers with Mild to Moderate Mental Retardation with 3D Immersive Virtual Reality (IVR)

    ERIC Educational Resources Information Center

    Passig, David

    2009-01-01

    Children with mental retardation have pronounced difficulties in using cognitive strategies and comprehending abstract concepts--among them, the concept of sequential time (Van-Handel, Swaab, De-Vries, & Jongmans, 2007). The perception of sequential time is generally tested by using scenarios presenting a continuum of actions. The goal of this…

  11. Application of Virtual and Augmented reality to geoscientific teaching and research.

    NASA Astrophysics Data System (ADS)

    Hodgetts, David

    2017-04-01

The geological sciences are an ideal candidate for the application of Virtual Reality (VR) and Augmented Reality (AR). Digital data collection techniques such as laser scanning, digital photogrammetry and the increasing use of Unmanned Aerial Vehicle (UAV) or Small Unmanned Aircraft (SUA) technology allow us to collect large datasets efficiently and ever more affordably. This, linked with the recent resurgence in VR and AR technologies, makes these 3D digital datasets even more valuable. These advances in VR and AR have been further supported by rapid improvements in graphics card technology and by the development of high-performance software applications. Visualising data in VR is more complex than normal 3D rendering: consideration needs to be given to latency, frame rate and the comfort of the viewer to enable reasonably long immersion times. Each frame has to be rendered from two viewpoints (one for each eye), requiring twice the rendering of a normal monoscopic view. Any unnatural effects (e.g. incorrect lighting) can lead to an uncomfortable VR experience, so these have to be minimised. With large digital outcrop datasets comprising tens to hundreds of millions of triangles this is challenging but achievable. Apart from the obvious "wow factor" of VR, there are serious applications. Users of digital outcrop data often fail to appreciate the size of the features they are dealing with; this is not the case when using correctly scaled VR, where a true sense of scale can be achieved. VR also provides an excellent way of performing quality control on 3D models and interpretations, as errors are much more easily visible. VR models can then be used to create content for AR applications, closing the loop and taking interpretations back into the field.

  12. Virtual reality exposure therapy in anxiety disorders: a quantitative meta-analysis.

    PubMed

    Opriş, David; Pintea, Sebastian; García-Palacios, Azucena; Botella, Cristina; Szamosközi, Ştefan; David, Daniel

    2012-02-01

Virtual reality exposure therapy (VRET) is a promising intervention for the treatment of anxiety disorders. The main objective of this meta-analysis is to compare the efficacy of VRET, used in a behavioral or cognitive-behavioral framework, with that of the classical evidence-based treatments in anxiety disorders. A comprehensive search of the literature identified 23 studies (n = 608) that were included in the final analysis. The results show that in the case of anxiety disorders, (1) VRET does far better than the waitlist control; (2) the post-treatment results show similar efficacy between the behavioral and the cognitive-behavioral interventions incorporating a virtual reality exposure component and the classical evidence-based interventions with no virtual reality exposure component; (3) VRET has a powerful real-life impact, similar to that of the classical evidence-based treatments; (4) VRET has a good stability of results over time, similar to that of the classical evidence-based treatments; (5) there is a dose-response relationship for VRET; and (6) there is no difference in the dropout rate between virtual reality exposure and in vivo exposure. Implications are discussed. © 2011 Wiley Periodicals, Inc.
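The pooling step behind a quantitative meta-analysis such as this one is typically a weighted average of per-study effect sizes under a fixed- or random-effects model. The sketch below implements DerSimonian-Laird random-effects pooling; the effect sizes and standard errors are invented for illustration and are not the 23 studies analysed in the paper:

```python
import math

def dersimonian_laird(effects, ses):
    """Pool per-study effect sizes with DerSimonian-Laird random effects.

    Hedged sketch: inputs are hypothetical, not the paper's data.
    """
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    wr = [1 / (se**2 + tau2) for se in ses]            # random-effects weights
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    se_pooled = math.sqrt(1 / sum(wr))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Four hypothetical studies: effect sizes and their standard errors
pooled, ci = dersimonian_laird([0.9, 1.2, 0.6, 1.1], [0.25, 0.30, 0.20, 0.35])
print(f"pooled g = {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```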

  13. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX and HiRISE instruments have allowed the computation of Digital Elevation Models with resolutions from hundreds of meters up to 1 meter per pixel, and corresponding orthoimages with resolutions from a few hundred meters up to 25 centimeters per pixel. Integrating such huge datasets into a system allowing user-friendly manipulation, whether for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The current rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.

  14. A methodological, task-based approach to Procedure-Specific Simulations training.

    PubMed

    Setty, Yaki; Salzman, Oren

    2016-12-01

Procedure-Specific Simulations (PSS) are realistic 3D simulations that provide a platform to practice complete surgical procedures in a virtual-reality environment. While PSS have the potential to improve surgeons' proficiency, there are no existing standards or guidelines for developing PSS in a structured manner. We employed a unique platform inspired by game design to develop a three-dimensional virtual reality simulation of urethrovesical anastomosis during radical prostatectomy. 3D visualization is supported by stereo vision, providing a fully realistic view of the simulation. The software can be executed on any robotic surgery platform; specifically, we tested the simulation in a Windows environment on the RobotiX Mentor. Using the urethrovesical anastomosis simulation as a representative example, we present a task-based methodological approach to PSS training. The methodology provides tasks at increasing levels of difficulty, from a novice level of basic anatomy identification to an expert level that permits testing new surgical approaches. The modular methodology presented here can easily be extended to support more complex tasks. We foresee this methodology being used as a tool to integrate PSS as a complementary training process for surgical procedures.

  15. Training wheelchair navigation in immersive virtual environments for patients with spinal cord injury - end-user input to design an effective system.

    PubMed

    Nunnerley, Joanne; Gupta, Swati; Snell, Deborah; King, Marcus

    2017-05-01

A user-centred design was used to develop and test the feasibility of an immersive 3D virtual reality wheelchair training tool for people with spinal cord injury (SCI). A Wheelchair Training System was designed and modelled using the Oculus Rift headset and a Dynamic Control wheelchair joystick. The system was tested by clinicians and by expert wheelchair users with SCI. Data from focus groups and individual interviews were analysed using a general inductive approach to thematic analysis. Four themes emerged: Realistic System, which described the advantages of a realistic virtual environment; Wheelchair Training System, which described participants' thoughts on the wheelchair training applications; Overcoming Resistance to Technology, the obstacles to introducing technology within the clinical setting; and Working outside the Rehabilitation Bubble, which described the protective hospital environment. The Oculus Rift Wheelchair Training System has the potential to provide a virtual rehabilitation setting in which wheelchair users could learn valuable community wheelchair skills in a safe environment. Nausea appears to be a side effect of the system, which will need to be resolved before this can be a viable clinical tool. Implications for Rehabilitation: Immersive virtual reality shows promising benefit for wheelchair training in a rehabilitation setting. Early engagement with consumers can improve product development.

  16. Initial validation of a virtual-reality robotic simulator.

    PubMed

    Lendvay, Thomas S; Casale, Pasquale; Sweet, Robert; Peters, Craig

    2008-09-01

Robotic surgery is an accepted adjunct to minimally invasive surgery, but training is restricted to console time. Virtual-reality (VR) simulation has been shown to be effective for laparoscopic training, so we sought to validate a novel VR robotic simulator. The American Urological Association (AUA) Office of Education approved this study. Subjects enrolled in a robotics training course at the 2007 AUA annual meeting underwent skills training in a da Vinci dry-lab module and a virtual-reality robotics module which included a three-dimensional (3D) VR robotic simulator. Demographic and acceptability data were obtained, and performance metrics from the simulator were compared between experienced and nonexperienced roboticists for a ring transfer task. Fifteen subjects participated: four with previous robotic surgery experience and 11 without. Nine subjects were still in urology training, and nearly half of the group reported playing video games. The da Vinci system and the simulator were both deemed acceptable, with mean ratings of 5.23 and 4.69, respectively, on a Likert scale (0-6). Experienced subjects outperformed nonexperienced subjects on the simulator on three metrics: total task time (96 s versus 159 s, P < 0.02), economy of motion (1,301 mm versus 2,095 mm, P < 0.04), and time the telemanipulators spent outside the center of the platform's workspace (4 s versus 35 s, P < 0.02). This is the first demonstration of face and construct validity of a virtual-reality robotic simulator. Further studies assessing predictive validity are ultimately required to support the incorporation of VR robotic simulation into training curricula.
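Construct-validity comparisons like the task-time metric above (96 s versus 159 s, P < 0.02) can be checked with a simple nonparametric test. As an illustrative sketch only (the study's statistical method is not described here, and the task times below are invented), a two-sided permutation test on the difference of group means looks like this:

```python
import random
import statistics

def permutation_pvalue(a, b, n_resamples=20000, seed=0):
    """Two-sided permutation test on the difference of group means.

    Hedged sketch: repeatedly shuffles the pooled sample and counts how
    often a random split produces a mean difference at least as large
    as the observed one. Inputs below are hypothetical, not study data.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_resamples

# Hypothetical ring-transfer task times in seconds
experienced = [90, 101, 95, 98]
novice = [150, 162, 171, 148, 155, 160]
p = permutation_pvalue(experienced, novice)
print(f"p ~= {p:.3f}")
```

With groups this well separated, the permutation p-value comes out well below 0.05, mirroring the kind of significant difference the study reports.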

  17. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    NASA Technical Reports Server (NTRS)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
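The Jacobi-preconditioned conjugate gradient method proposed in the abstract is a standard iterative solver. The toy sketch below shows the algorithm on a small symmetric positive-definite system; a real soft-tissue model would involve thousands of unknowns and the parallel/FPGA implementation described above, so this is only a minimal reference loop:

```python
# Jacobi-preconditioned conjugate gradient on a small SPD system.
# Minimal sketch: the 3x3 stiffness-like matrix below is illustrative,
# not a soft-tissue FEM system.

def matvec(A, x):
    """Dense matrix-vector product using plain lists."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def pcg_jacobi(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for SPD A with Jacobi (diagonal) preconditioning."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual r = b - A x (x = 0)
    minv = [1.0 / A[i][i] for i in range(n)]  # preconditioner: diag(A)^-1
    z = [mi * ri for mi, ri in zip(minv, r)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        ap = matvec(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(minv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg_jacobi(A, b)
```

The same structure carries over to the reduced surface-domain system the abstract describes; only the matrix assembly and the parallel execution change.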

  18. An artificial reality environment for remote factory control and monitoring

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    Work has begun on the merger of two well known systems, VEOS (HITLab) and CLIPS (NASA). In the recent past, the University of Massachusetts Lowell developed a parallel version of NASA CLIPS, called P-CLIPS. This modification allows users to create smaller expert systems which are able to communicate with each other to jointly solve problems. With the merger of a VEOS message system, PCLIPS-V can now act as a group of entities working within VEOS. To display the 3D virtual world we have been using a graphics package called HOOPS, from Ithaca Software. The artificial reality environment we have set up contains actors and objects as found in our Lincoln Logs Factory of the Future project. The environment allows us to view and control the objects within the virtual world. All communication between the separate CLIPS expert systems is done through VEOS. A graphical renderer generates camera views on X-Windows devices; Head Mounted Devices are not required. This allows more people to make use of this technology. We are experimenting with different types of virtual vehicles to give the user a sense that he or she is actually moving around inside the factory looking ahead through windows and virtual monitors.

  19. Monocular display unit for 3D display with correct depth perception

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

Virtual-reality systems have been widely studied, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D display systems fall into two types by presentation method: those using special glasses and monitor systems requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a unit can provide a display area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. A conventional display can thus show only one screen and cannot enlarge it, for example to twice the area. To enlarge the display area, the authors have developed an enlarging method using a mirror. This extension method lets observers see a virtual image plane and doubles the screen area. In the developed display unit, we used an image-separating technique based on polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area, while a 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, a useful function for perceiving depth.

  20. Virtual Reality for Pediatric Sedation: A Randomized Controlled Trial Using Simulation.

    PubMed

    Zaveri, Pavan P; Davis, Aisha B; O'Connell, Karen J; Willner, Emily; Aronson Schinasi, Dana A; Ottolini, Mary

    2016-02-09

Team training for procedural sedation for pediatric residents has traditionally consisted of didactic presentations and simulated scenarios using high-fidelity mannequins. We assessed the effectiveness of a virtual reality module in teaching preparation for and management of sedation for procedures. After developing a virtual reality environment in Second Life® (Linden Lab, San Francisco, CA) in which providers perform procedural sedation and recover patients from it, we conducted a randomized controlled trial to assess the effectiveness of the virtual reality module versus a traditional web-based educational module. A 20-question pre- and post-test was administered to assess knowledge change. All subjects participated in a simulated pediatric procedural sedation scenario that was video recorded for review and assessed using a 32-point checklist. A brief survey elicited feedback on the virtual reality module and the simulation scenario. The median score on the assessment checklist was 75% for the intervention group and 70% for the control group (P = 0.32). For the knowledge tests, there was no statistically significant difference between the groups (P = 0.14). Users gave excellent reviews of the virtual reality module and reported that it added to their education. Pediatric residents performed similarly in simulation and on a knowledge test after a virtual reality module compared with a traditional web-based module on procedural sedation. Although users enjoyed the virtual reality experience, these results question the value virtual reality adds in improving trainee performance. Further inquiry is needed into how virtual reality can provide true value in simulation-based education.
