Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A
2016-11-01
To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinary students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined from hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessment in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion were recorded for each simulator. None of the motion metrics in the virtual reality simulator correlated with experience or with the basic laparoscopic skills score. All metrics in augmented reality (time, instrument path, and economy of movement) were significantly correlated with experience, except for the hand dominance metric. The basic laparoscopic skills score was correlated with all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas the basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct and concurrent validity for motion analysis metrics in an augmented reality system, whereas the virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.
NASA Astrophysics Data System (ADS)
Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.
2016-02-01
Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, simultaneously connected physical and virtual experiences have the potential to promote connections among ideas. This paper explores the effect of augmenting a virtual lab with physical controls on high school chemistry students' understanding of gas laws. We compared students using the augmented virtual lab to students using a similar sensor-based physical lab with teacher-led discussions. Results demonstrate that students in the augmented virtual lab condition made significant gains from pretest to posttest and outperformed traditional students on some but not all concepts. Results provide insight into incorporating mixed-reality technologies into authentic classroom settings.
Augmenting the Thermal Flux Experiment: A Mixed Reality Approach with the HoloLens
ERIC Educational Resources Information Center
Strzys, M. P.; Kapp, S.; Thees, M.; Kuhn, J.; Lukowicz, P.; Knierim, P.; Schmidt, A.
2017-01-01
In the field of Virtual Reality (VR) and Augmented Reality (AR), technologies have made huge progress during the last years and also reached the field of education. The virtuality continuum, ranging from pure virtuality on one side to the real world on the other, has been successfully covered by the use of immersive technologies like head-mounted…
Reality Check: Basics of Augmented, Virtual, and Mixed Reality.
Brigham, Tara J
2017-01-01
Augmented, virtual, and mixed reality applications all aim to enhance a user's current experience or reality. While variations of this technology are not new, within the last few years there has been a significant increase in the number of artificial reality devices or applications available to the general public. This column will explain the difference between augmented, virtual, and mixed reality and how each application might be useful in libraries. It will also provide an overview of the concerns surrounding these different reality applications and describe how and where they are currently being used.
NASA Astrophysics Data System (ADS)
Wozniak, Peter; Vauderwange, Oliver; Mandal, Avikarsha; Javahiraly, Nicolas; Curticapean, Dan
2016-09-01
Practical exercises are a crucial part of many curricula. Even simple exercises can improve the understanding of the underlying subject. Most experimental setups require special hardware. To carry out, e.g., a lens experiment, the students need access to an optical bench, various lenses, light sources, apertures, and a screen. In our previous publication we demonstrated the use of augmented reality visualization techniques to let the students prepare with a simulated experimental setup. Within the context of our intended blended learning concept we want to utilize augmented or virtual reality techniques for stationary laboratory exercises. Unlike applications running on mobile devices, stationary setups can be extended more easily with additional interfaces and thus allow for more complex interactions and simulations in virtual reality (VR) and augmented reality (AR). The most significant difference is the possibility to allow interactions beyond touching a screen. The LEAP Motion controller is a small, inexpensive device that tracks the user's hands and fingers in three dimensions. It is conceivable to let users interact with the simulation's virtual elements through their own hand positions, movements, and gestures. In this paper we evaluate possible applications of the LEAP Motion controller for simulated experiments in augmented and virtual reality. We pay particular attention to the device's strengths and weaknesses and point out useful and less useful application scenarios.
An Analysis of Engagement in a Combination Indoor/Outdoor Augmented Reality Educational Game
ERIC Educational Resources Information Center
Folkestad, James; O'Shea, Patrick
2011-01-01
This paper describes the results of a qualitative analysis of video captured during a dual indoor/outdoor Augmented Reality experience. Augmented Reality is the layering of virtual information on top of the physical world. This Augmented Reality experience asked students to interact with the San Diego Museum of Art and the Botanical Gardens in San…
Davidson, Dennisa; Evans, Lois
2018-03-01
To explore online study groups as augmentation tools in preparing for the Royal Australian and New Zealand College of Psychiatrists Observed Structured Clinical Examinations (OSCE) for fellowship. An online survey of New Zealand trainees was carried out to assess exam preparedness and openness to virtual study groups, and the results were analysed. Relevant material on virtual study groups for fellowship examinations was reviewed and used to inform a pilot virtual study group. Four New Zealand trainees took part in the pilot project, which looked at using a virtual platform to augment OSCE preparation. Of the 50 survey respondents, 36% felt adequately prepared for the OSCE, and 64% were interested in using a virtual platform to augment their study. Virtual study groups were noted to be especially important for rural trainees, none of whom felt able to form study groups for themselves. The pilot virtual study group was trialled successfully. All four trainees reported the experience as subjectively beneficial to their examination preparation. Virtual platforms hold promise as an augmentation strategy for exam preparation, especially for rural trainees, who are more geographically isolated and less likely to have peers preparing for the same examinations.
Innovative application of virtual display technique in virtual museum
NASA Astrophysics Data System (ADS)
Zhang, Jiankang
2017-09-01
A virtual museum displays and simulates the functions of a real museum on the Internet in the form of 3D virtual reality through interactive programs. Based on the Virtual Reality Modeling Language (VRML), building a virtual museum and achieving effective interaction with the offline museum depend on making full use of the 3D panorama technique, the virtual reality technique, and the augmented reality technique, and on innovatively applying dynamic environment modeling, real-time 3D graphics generation, system integration, and other key virtual reality techniques in the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography or virtual reality, is based on static images of reality. The virtual reality technique is a computer simulation system that can create, and let users experience, an interactive 3D dynamic visual world. Augmented reality, also known as mixed reality, simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience in reality. These technologies make the virtual museum possible. It will not only bring better experience and convenience to the public, but also help improve the influence and cultural functions of the real museum.
Transduction between worlds: using virtual and mixed reality for earth and planetary science
NASA Astrophysics Data System (ADS)
Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.
2017-12-01
Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale, users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances to connect data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. These innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect digital spaces of spatial analysis, simulation and geovisualization, with geographic spaces of data collection, fieldwork, interpretation and communication. In a way, we are talking about transduction between real and virtual worlds. Taking a mixed reality approach to this, we can link real and virtual worlds. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.
The Influences of the 2D Image-Based Augmented Reality and Virtual Reality on Student Learning
ERIC Educational Resources Information Center
Liou, Hsin-Hun; Yang, Stephen J. H.; Chen, Sherry Y.; Tarng, Wernhuar
2017-01-01
Virtual reality (VR) learning environments can provide students with concepts of the simulated phenomena, but users are not allowed to interact with real elements. Conversely, augmented reality (AR) learning environments blend real-world environments so AR could enhance the effects of computer simulation and promote students' realistic experience.…
AR Feels "Softer" than VR: Haptic Perception of Stiffness in Augmented versus Virtual Reality.
Gaffary, Yoren; Le Gouis, Benoit; Marchal, Maud; Argelaguet, Ferran; Arnaldi, Bruno; Lecuyer, Anatole
2017-11-01
Does it feel the same when you touch an object in Augmented Reality (AR) or in Virtual Reality (VR)? In this paper we study and compare the haptic perception of stiffness of a virtual object in two situations: (1) a purely virtual environment versus (2) a real and augmented environment. We designed an experimental setup based on a Microsoft HoloLens and a haptic force-feedback device that enables users to press a virtual piston and compare its stiffness successively in either Augmented Reality (the virtual piston is surrounded by several real objects, all located inside a cardboard box) or Virtual Reality (the same virtual piston is displayed in a fully virtual scene composed of the same other objects). We conducted a psychophysical experiment with 12 participants. Our results show a surprising bias in perception between the two conditions: the virtual piston is on average perceived as stiffer in the VR condition than in the AR condition. For instance, when the piston had the same stiffness in AR and VR, participants selected the VR piston as the stiffer one in 60% of cases. This suggests a psychological effect whereby objects in AR feel "softer" than in pure VR. Taken together, our results open new perspectives on perception in AR versus VR and pave the way for future studies aiming to characterize potential perceptual biases.
Frames of Reference in Mobile Augmented Reality Displays
ERIC Educational Resources Information Center
Mou, Weimin; Biocca, Frank; Owen, Charles B.; Tang, Arthur; Xiao, Fan; Lim, Lynette
2004-01-01
In 3 experiments, the authors investigated spatial updating in augmented reality environments. Participants learned locations of virtual objects on the physical floor. They were turned to appropriate facing directions while blindfolded before making pointing judgments (e.g., "Imagine you are facing X. Point to Y"). Experiments manipulated the…
a Framework for Low-Cost Multi-Platform VR and AR Site Experiences
NASA Astrophysics Data System (ADS)
Wallgrün, J. O.; Huang, J.; Zhao, J.; Masrur, A.; Oprean, D.; Klippel, A.
2017-11-01
Low-cost consumer-level immersive solutions have the potential to revolutionize education and research in many fields by providing virtual experiences of sites that are either inaccessible, too dangerous, or too expensive to visit, or by augmenting in-situ experiences using augmented and mixed reality methods. We present our approach for creating low-cost multi-platform virtual and augmented reality site experiences of real world places for education and research purposes, making extensive use of Structure-from-Motion methods as well as 360° photography and videography. We discuss several example projects, for the Mayan City of Cahal Pech, Iceland's Thrihnukar volcano, the Santa Marta informal settlement in Rio, and for the Penn State Campus, and we propose a framework for creating and maintaining such applications by combining declarative content specification methods with a central linked-data based spatio-temporal information system.
On Location Learning: Authentic Applied Science with Networked Augmented Realities
NASA Astrophysics Data System (ADS)
Rosenbaum, Eric; Klopfer, Eric; Perry, Judy
2007-02-01
The learning of science can be made more like the practice of science through authentic simulated experiences. We have created a networked handheld Augmented Reality environment that combines the authentic role-playing of Augmented Realities and the underlying models of Participatory Simulations. This game, known as Outbreak @ The Institute, is played across a university campus where players take on the roles of doctors, medical technicians, and public health experts to contain a disease outbreak. Players can interact with virtual characters and employ virtual diagnostic tests and medicines. They are challenged to identify the source and prevent the spread of an infectious disease that can spread among real and/or virtual characters according to an underlying model. In this paper, we report on data from three high school classes who played the game. We investigate students' perception of the authenticity of the game in terms of their personal embodiment in the game, their experience playing different roles, and their understanding of the dynamic model underlying the game.
An Augmented Virtuality Display for Improving UAV Usability
2005-01-01
cockpit. For a more universally-understood metaphor, we have turned to virtual environments of the type represented in video games. Many of the... people who have the need to fly UAVs (such as military personnel) have experience with playing video games. They are skilled in navigating virtual... Another aspect of tailoring the interface to those with video game experience is to use familiar controls. Microsoft has developed a popular and
Cabrilo, Ivan; Bijlenga, Philippe; Schaller, Karl
2014-09-01
Augmented reality technology has been used for intraoperative image guidance through the overlay of virtual images, from preoperative imaging studies, onto the real-world surgical field. Although setups based on augmented reality have been used for various neurosurgical pathologies, very few cases have been reported for the surgery of arteriovenous malformations (AVM). We present our experience with AVM surgery using a system designed for injection of virtual images into the operating microscope's eyepiece, and discuss why augmented reality may be less appealing in this form of surgery. Five patients (n = 5) underwent AVM resection assisted by augmented reality. Virtual three-dimensional models of patients' heads, skulls, AVM nidi, and feeder and drainage vessels were selectively segmented and injected into the microscope's eyepiece for intraoperative image guidance, and their usefulness was assessed in each case. Although the setup helped in performing tailored craniotomies, in guiding dissection, and in localizing drainage veins, it did not provide the surgeon with useful information concerning feeder arteries, due to the complexity of AVM angioarchitecture. The difficulty in intraoperatively conveying useful information on feeder vessels may make augmented reality a less engaging tool in this form of surgery, and might explain its underrepresentation in the literature. Integrating an AVM's hemodynamic characteristics into the augmented rendering could make it more suited to AVM surgery.
Embodied information behavior, mixed reality and big data
NASA Astrophysics Data System (ADS)
West, Ruth; Parola, Max J.; Jaycen, Amelia R.; Lueg, Christopher P.
2015-03-01
A renaissance in the development of virtual (VR), augmented (AR), and mixed reality (MR) technologies with a focus on consumer and industrial applications is underway. As data becomes ubiquitous in our lives, a need arises to revisit the role of our bodies, explicitly in relation to data or information. Our observation is that VR/AR/MR technology development is a vision of the future framed in terms of promissory narratives. These narratives develop alongside the underlying enabling technologies and create new use contexts for virtual experiences. It is a vision rooted in the combination of responsive, interactive, dynamic, sharable data streams, and augmentation of the physical senses for capabilities beyond those normally humanly possible. In parallel to the varied definitions of information and approaches to elucidating information behavior, a myriad of definitions and methods of measuring and understanding presence in virtual experiences exist. These and other ideas will be tested by designers, developers and technology adopters as the broader ecology of head-worn devices for virtual experiences evolves in order to reap the full potential and benefits of these emerging technologies.
Augmenting the thermal flux experiment: A mixed reality approach with the HoloLens
NASA Astrophysics Data System (ADS)
Strzys, M. P.; Kapp, S.; Thees, M.; Kuhn, J.; Lukowicz, P.; Knierim, P.; Schmidt, A.
2017-09-01
In the field of Virtual Reality (VR) and Augmented Reality (AR), technologies have made huge progress during the last years and also reached the field of education. The virtuality continuum, ranging from pure virtuality on one side to the real world on the other, has been successfully covered by the use of immersive technologies like head-mounted displays, which allow one to embed virtual objects into the real surroundings, leading to a Mixed Reality (MR) experience. In such an environment, digital and real objects do not only coexist, but moreover are also able to interact with each other in real time. These concepts can be used to merge human perception of reality with digitally visualized sensor data, thereby making the invisible visible. As a first example, in this paper we introduce alongside the basic idea of this column an MR experiment in thermodynamics for a laboratory course for freshman students in physics or other science and engineering subjects that uses physical data from mobile devices for analyzing and displaying physical phenomena to students.
Hybrid Reality Lab Capabilities - Video 2
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2016-01-01
Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, during teleoperation of remote assets (arms, rovers, robots, etc.), and in other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), the information used to create the virtual scenes can be old (i.e., visualized long after physical objects moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or a see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g., camera, window) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, individuals begin to interact with all the objects in the scene as if they were real-life objects.
The ability to physically touch and interact with digitally created objects that have the same shape, size, and location as their physical counterparts in a virtual reality environment can be a game changer when it comes to training, planning, engineering analysis, science, entertainment, etc. Our project is developing such capabilities for various types of environments. The video accompanying this abstract is a representation of an ISS Hybrid Reality experience. In the video you can see various Hybrid Reality elements that provide immersion beyond standard Virtual Reality or Augmented Reality.
ERIC Educational Resources Information Center
Montoya, Mauricio Hincapié; Díaz, Christian Andrés; Moreno, Gustavo Adolfo
2017-01-01
Nowadays, the use of technology to improve teaching and learning experiences in the classroom has been promoted. One of these technologies is augmented reality, which allows overlaying layers of virtual information on real scene with the aim of increasing the perception that user has of reality. Augmented reality has proved to offer several…
Virtually-augmented interfaces for tactical aircraft.
Haas, M W
1995-05-01
The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and non-virtual concepts and devices across the visual, auditory and haptic sensory modalities. A fusion interface is a multi-sensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion-interface concepts. One of the virtual concepts to be investigated in the Fusion Interfaces for Tactical Environments facility (FITE) is the application of EEG and other physiological measures for virtual control of functions within the flight environment. FITE is a specialized flight simulator which allows efficient concept development through the use of rapid prototyping followed by direct experience of new fusion concepts. The FITE facility also supports evaluation of fusion concepts by operational fighter pilots in a high fidelity simulated air combat environment. The facility was utilized by a multi-disciplinary team composed of operational pilots, human-factors engineers, electronics engineers, computer scientists, and experimental psychologists to prototype and evaluate the first multi-sensory, virtually-augmented cockpit. The cockpit employed LCD-based head-down displays, a helmet-mounted display, three-dimensionally localized audio displays, and a haptic display. This paper will endeavor to describe the FITE facility architecture, some of the characteristics of the FITE virtual display and control devices, and the potential application of EEG and other physiological measures within the FITE facility.
Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
Tian, Yuan; Guan, Tao; Wang, Cheng
2010-01-01
To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
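The occlusion handling step described in this abstract, redrawing all pixels of the tracked real object over the naively augmented image, can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it assumes the interactive segmentation and contour tracking have already produced a boolean mask of the occluding object, and that the virtual object's footprint is given as an alpha mask; the function and variable names are hypothetical.

```python
import numpy as np

def composite_with_occlusion(real_frame, virtual_layer, virtual_alpha, occluder_mask):
    # Start from the raw camera frame, draw the virtual object wherever its
    # alpha mask is set, then redraw the tracked real object's pixels on top
    # so the real object correctly hides the virtual one behind it.
    out = real_frame.copy()
    out[virtual_alpha] = virtual_layer[virtual_alpha]  # naive augmentation
    out[occluder_mask] = real_frame[occluder_mask]     # occlusion fix-up
    return out

# Toy 4x4 RGB frame: camera pixels have value 10, the virtual render 200.
real = np.full((4, 4, 3), 10, dtype=np.uint8)
virtual = np.full((4, 4, 3), 200, dtype=np.uint8)
alpha = np.zeros((4, 4), dtype=bool)
alpha[1:3, 1:3] = True       # virtual object's footprint
mask = np.zeros((4, 4), dtype=bool)
mask[2, :] = True            # tracked real occluder crossing the footprint

result = composite_with_occlusion(real, virtual, alpha, mask)
```

In `result`, the virtual object appears only where the occluder mask does not cover it, which is exactly the "correct relative position" property the method aims for; the hard part the paper addresses is keeping `occluder_mask` accurate in real time as the camera and object move.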
The NASA Augmented/Virtual Reality Lab: The State of the Art at KSC
NASA Technical Reports Server (NTRS)
Little, William
2017-01-01
The NASA Augmented Virtual Reality (AVR) Lab at Kennedy Space Center is dedicated to the investigation of Augmented Reality (AR) and Virtual Reality (VR) technologies, with the goal of determining potential uses of these technologies as human-computer interaction (HCI) devices in an aerospace engineering context. Begun in 2012, the AVR Lab has concentrated on commercially available AR and VR devices that are gaining in popularity and use in a number of fields such as gaming, training, and telepresence. We are working with such devices as the Microsoft Kinect, the Oculus Rift, the Leap Motion, the HTC Vive, motion capture systems, and the Microsoft HoloLens. The focus of our work has been on human interaction with the virtual environment, which in turn acts as a communications bridge to remote physical devices and environments which the operator cannot or should not control or experience directly. Particularly in reference to dealing with spacecraft and the oftentimes hazardous environments they inhabit, it is our hope that AR and VR technologies can be utilized to increase human safety and mission success by physically removing humans from those hazardous environments while virtually putting them right in the middle of those environments.
Mobile devices, Virtual Reality, Augmented Reality, and Digital Geoscience Education.
NASA Astrophysics Data System (ADS)
Crompton, H.; De Paor, D. G.; Whitmeyer, S. J.; Bentley, C.
2016-12-01
Mobile devices are playing an increasing role in geoscience education. Affordances include instructor-student communication and class management in large classrooms, virtual and augmented reality applications, digital mapping, and crowd-sourcing. Mobile technologies have spawned the subfield of mobile learning, or m-learning, which is defined as learning across multiple contexts, through social and content interactions. Geoscientists have traditionally engaged in non-digital mobile learning via fieldwork, but digital devices are greatly extending the possibilities, especially for non-traditional students. Smartphones and tablets are the most common devices, but smart glasses such as Pivothead enable live streaming of a first-person view (see for example, https://youtu.be/gWrDaYP5w58). Virtual reality headsets such as Google Cardboard create an immersive virtual field experience, and digital imagery such as GigaPan and Structure from Motion enables instructors and/or students to create virtual specimens and outcrops that are sharable across the globe. Whereas virtual reality (VR) replaces the real world with a virtual representation, augmented reality (AR) overlays digital data on the live scene visible to the user in real time. We have previously reported on our use of the AR application called FreshAiR for geoscientific "egg hunts." The popularity of Pokémon Go demonstrates the potential of AR for mobile learning in the geosciences.
Twitter-Augmented Journal Club: Educational Engagement and Experience So Far.
Udani, Ankeet D; Moyse, Daniel; Peery, Charles Andrew; Taekman, Jeffrey M
2016-04-15
Social media is a nascent medical educational technology. The benefits of Twitter include (1) easy adoption; (2) access to experts, peers, and patients across the globe; (3) 24/7 connectivity; (4) creation of virtual, education-based communities using hashtags; and (5) crowdsourcing information using retweets. We report on a novel Twitter-augmented journal club for anesthesia residents: its design, implementation, and impact. Our inaugural anesthesia Twitter-augmented journal club succeeded in engaging the anesthesia community and increasing residents' professional use of Twitter. Notably, our experience suggests that anesthesia residents are willing to use social media for their education.
Baus, Oliver; Bouchard, Stéphane
2014-01-01
This paper reviews the move from virtual reality exposure-based therapy to augmented reality exposure-based therapy (ARET). Unlike virtual reality (VR), which entails a complete virtual environment (VE), augmented reality (AR) limits itself to producing certain virtual elements and merging them into the view of the physical world. Although the general public may only have become aware of AR in the last few years, AR-type applications have been around since the beginning of the twentieth century. Since then, technological developments have enabled an ever-increasing level of seamless integration of virtual and physical elements into one view. Like VR, AR allows exposure to stimuli which, for various reasons, may not be suitable for real-life scenarios. As such, AR has proven itself to be a medium through which individuals suffering from specific phobia can be exposed “safely” to the object(s) of their fear, without the costs associated with programming complete VEs. Thus, ARET can offer an efficacious alternative to some less advantageous exposure-based therapies. Above and beyond presenting what has been accomplished in ARET, this paper covers some less well-known aspects of the history of AR, raises some ARET-related issues, and proposes potential avenues to be followed. These include the type of measures to be used to qualify the user’s experience in an augmented reality environment, the exclusion of certain AR-type functionalities from the definition of AR, as well as the potential use of ARET to treat non-small-animal phobias, such as social phobia. PMID:24624073
Berryman, Donna R
2012-01-01
Augmented reality is a technology that overlays digital information on objects or places in the real world for the purpose of enhancing the user experience. It is not virtual reality, that is, the technology that creates a totally digital, computer-generated environment. Augmented reality, with its ability to combine reality and digital information, is being studied and implemented in medicine, marketing, museums, fashion, and numerous other areas. This article presents an overview of augmented reality, discussing what it is, how it works, its current implementations, and its potential impact on libraries.
Evaluation of Wearable Haptic Systems for the Fingers in Augmented Reality Applications.
Maisto, Maurizio; Pacchierotti, Claudio; Chinello, Francesco; Salvietti, Gionata; De Luca, Alessandro; Prattichizzo, Domenico
2017-01-01
Although Augmented Reality (AR) has been around for almost five decades, only recently have we witnessed AR systems and applications entering our everyday lives. Representative examples of this technological revolution are the smartphone games "Pokémon GO" and "Ingress" or the Google Translate real-time sign interpretation app. Even though AR applications are already quite compelling and widespread, users are still not able to physically interact with the computer-generated reality. In this respect, wearable haptics can provide the compelling illusion of touching the superimposed virtual objects without constraining the motion or the workspace of the user. In this paper, we present the experimental evaluation of two wearable haptic interfaces for the fingers in three AR scenarios, enrolling 38 participants. In the first experiment, subjects were requested to write on a virtual board using a real chalk. The haptic devices provided the interaction forces between the chalk and the board. In the second experiment, subjects were asked to pick and place virtual and real objects. The haptic devices provided the interaction forces due to the weight of the virtual objects. In the third experiment, subjects were asked to balance a virtual sphere on a real cardboard. The haptic devices provided the interaction forces due to the weight of the virtual sphere rolling on the cardboard. Providing haptic feedback through the considered wearable device significantly improved performance in all the considered tasks. Moreover, subjects significantly preferred conditions providing wearable haptic feedback.
Augmented Reality to Preserve Hidden Vestiges in Historical Cities. a Case Study
NASA Astrophysics Data System (ADS)
Martínez, J. L.; Álvareza, S.; Finat, J.; Delgado, F. J.; Finat, J.
2015-02-01
Mobile devices provide increasingly sophisticated support for enhanced, interactive experiences of the remote past. Augmented reality technologies make it possible to develop mobile applications for indoor exploration of virtually reconstructed archaeological places. In our work we have built a virtual reconstruction of a Roman villa from data arising from an urgent partial excavation, performed ahead of the construction of a car park in the historical city of Valladolid (Spain). In its current state, the archaeological site is covered by an urban garden. Localization and tracking are performed using a combination of GPS and the inertial sensors of the mobile device. In this work we show how to perform an interactive navigation around the 3D virtual model, presenting an interpretation of the way the site once was. The user experience is enhanced by answering simple questions and performing minor tasks and puzzles, which are presented with multimedia content linked to key features of the archaeological site.
ERIC Educational Resources Information Center
Civelek, Turhan; Ucar, Erdem; Ustunel, Hakan; Aydin, Mehmet Kemal
2014-01-01
The current research aims to explore the effects of a haptic augmented simulation on students' achievement and their attitudes towards Physics in an immersive virtual reality environment (VRE). A quasi-experimental post-test design was employed utilizing experiment and control groups. The participants were 215 students from a K-12 school in…
Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts
NASA Astrophysics Data System (ADS)
Hong, Zhou; Wenhua, Lu
2017-01-01
Augmented reality technology is introduced into the maintenance field to overlay virtual maintenance-assistance information on real-world scenarios. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. An architecture for an augmented reality virtual maintenance guiding system is proposed, building on a definition of augmented reality and an analysis of the characteristics of augmented reality virtual maintenance. Key techniques involved, such as standardization and organization of maintenance data, 3D registration, modeling of maintenance guidance information, and virtual maintenance man-machine interaction, are elaborated, and solutions are given.
ERIC Educational Resources Information Center
Taçgin, Zeynep; Arslan, Ahmet
2017-01-01
The purpose of this study is to determine perception of postgraduate Computer Education and Instructional Technologies (CEIT) students regarding the concepts of Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), Augmented Virtuality (AV) and Mirror Reality; and to offer a table that includes differences and similarities between…
The Virtual Clinical Practicum: an innovative telehealth model for clinical nursing education.
Grady, Janet L
2011-01-01
The Virtual Clinical Practicum (VCP) is a clinical nursing education delivery strategy that uses video teleconferencing technology to address time, distance, and resource barriers. Technology-delivered education can augment the existing curriculum by increasing student access to clinical experts in specialty areas, thus supporting efficient use of faculty resources. This article describes the implementation of the VCP process and student perceptions of its effectiveness and usefulness. The VCP was shown to be a successful method of clinical nursing education, offering students exposure to clinical situations not available by other means. Opportunities for dialogue, critical reflection, and synthesis allowed students to experience the benefits of a traditional experience, enhanced through technology and tailored to the specific needs of the students. Respondents overwhelmingly recommended further use of the VCP to augment existing clinical nursing education methods.
The Architectonic Experience of Body and Space in Augmented Interiors
Pasqualini, Isabella; Blefari, Maria Laura; Tadi, Tej; Serino, Andrea; Blanke, Olaf
2018-01-01
The environment shapes our experience of space in constant interaction with the body. Architectonic interiors amplify the perception of space through the bodily senses; an effect also known as embodiment. The interaction of the bodily senses with the space surrounding the body can be tested experimentally through the manipulation of multisensory stimulation and measured via a range of behaviors related to bodily self-consciousness. Many studies have used Virtual Reality to show that visuotactile conflicts mediated via a virtual body or avatar can disrupt the unified subjective experience of the body and self. In the full-body illusion paradigm, participants feel as if the avatar was their body (ownership, self-identification) and they shift their center of awareness toward the position of the avatar (self-location). However, the influence of non-bodily spatial cues around the body on embodiment remains unclear, and data about the impact of architectonic space on human perception and self-conscious states are sparse. We placed participants into a Virtual Reality arena, where large and narrow virtual interiors were displayed with and without an avatar. We then applied synchronous or asynchronous visuotactile strokes to the back of the participants and avatar, or, to the front wall of the void interiors. During conditions of illusory self-identification with the avatar, participants reported sensations of containment, drift, and touch with the architectonic environment. The absence of the avatar suppressed such feelings, yet, in the large space, we found an effect of continuity between the physical and the virtual interior depending on the full-body illusion. We discuss subjective feelings evoked by architecture and compare the full-body illusion in augmented interiors to architectonic embodiment. A relevant outcome of this study is the potential to dissociate the egocentric, first-person view from the physical point of view through augmented architectonic space. 
PMID:29755378
Riva, Giuseppe; Baños, Rosa M.; Botella, Cristina; Mantovani, Fabrizia; Gaggioli, Andrea
2016-01-01
During life, many personal changes occur. These include changing house, school, work, and even friends and partners. However, the daily experience shows clearly that, in some situations, subjects are unable to change even if they want to. The recent advances in psychology and neuroscience are now providing a better view of personal change, the change affecting our assumptive world: (a) the focus of personal change is reducing the distance between self and reality (conflict); (b) this reduction is achieved through (1) an intense focus on the particular experience creating the conflict or (2) an internal or external reorganization of this experience; (c) personal change requires a progression through a series of different stages that however happen in discontinuous and non-linear ways; and (d) clinical psychology is often used to facilitate personal change when subjects are unable to move forward. Starting from these premises, the aim of this paper is to review the potential of virtuality for enhancing the processes of personal and clinical change. First, the paper focuses on the two leading virtual technologies – augmented reality (AR) and virtual reality (VR) – exploring their current uses in behavioral health and the outcomes of the 28 available systematic reviews and meta-analyses. Then the paper discusses the added value provided by VR and AR in transforming our external experience by focusing on the high level of personal efficacy and self-reflectiveness generated by their sense of presence and emotional engagement. Finally, it outlines the potential future use of virtuality for transforming our inner experience by structuring, altering, and/or replacing our bodily self-consciousness. The final outcome may be a new generation of transformative experiences that provide knowledge that is epistemically inaccessible to the individual until he or she has that experience, while at the same time transforming the individual’s worldview. PMID:27746747
Methods and systems relating to an augmented virtuality environment
Nielsen, Curtis W; Anderson, Matthew O; McKay, Mark D; Wadsworth, Derek C; Boyce, Jodie R; Hruska, Ryan C; Koudelka, John A; Whetten, Jonathan; Bruemmer, David J
2014-05-20
Systems and methods relating to an augmented virtuality system are disclosed. A method of operating an augmented virtuality system may comprise displaying imagery of a real-world environment in an operating picture. The method may further include displaying a plurality of virtual icons in the operating picture representing at least some assets of a plurality of assets positioned in the real-world environment. Additionally, the method may include displaying at least one virtual item in the operating picture representing data sensed by one or more of the assets of the plurality of assets and remotely controlling at least one asset of the plurality of assets by interacting with a virtual icon associated with the at least one asset.
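The operating-picture concept described above (real-world imagery, virtual icons representing assets, virtual items representing sensed data, and remote control of an asset by interacting with its icon) can be sketched as a minimal data model. All class, field, and asset names below are hypothetical illustrations, not the patented system's API:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """A real-world asset (e.g. a robot) mirrored by a virtual icon."""
    name: str
    position: tuple                      # (x, y) in operating-picture coordinates
    sensor_data: dict = field(default_factory=dict)

@dataclass
class OperatingPicture:
    """Operating picture: real-world imagery overlaid with virtual icons and items."""
    assets: dict = field(default_factory=dict)

    def add_asset(self, asset):
        self.assets[asset.name] = asset

    def virtual_items(self):
        # One virtual item per sensed value, as in the described system.
        return [(a.name, k, v)
                for a in self.assets.values()
                for k, v in a.sensor_data.items()]

    def command(self, asset_name, new_position):
        # Interacting with an icon remotely controls the underlying asset.
        self.assets[asset_name].position = new_position

pic = OperatingPicture()
pic.add_asset(Asset("rover1", (0, 0), {"radiation": 0.4}))
pic.command("rover1", (5, 3))
print(pic.assets["rover1"].position)   # (5, 3)
print(pic.virtual_items())             # [('rover1', 'radiation', 0.4)]
```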
ERIC Educational Resources Information Center
Duncan, Mike R.; Birrell, Bob; Williams, Toni
2005-01-01
Virtual Reality (VR) is primarily a visual technology. Elements such as haptics (touch feedback) and sound can augment an experience, but the visual cues are the prime driver of what an audience will experience from a VR presentation. At its inception in 2001 the Centre for Advanced Visualization (CFAV) at Niagara College of Arts and Technology…
ERIC Educational Resources Information Center
Chang, Hsin-Yi
2017-01-01
Two investigations were conducted in this study. In the first experiment, the effects of two types of interactivity with a computer simulation were compared: experimentation versus observation interactivity. Experimentation interactivity allows students to use simulations to conduct virtual experiments, whereas observation interactivity allows…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, Birchard P; Michel, Kelly D; Few, Douglas A
From stereophonic, positional sound to high-definition imagery that is crisp and clean, high fidelity computer graphics enhance our view, insight, and intuition regarding our environments and conditions. Contemporary 3-D modeling tools offer an open architecture framework that enables integration with other technologically innovative arenas. One innovation of great interest is Augmented Reality, the merging of virtual, digital environments with physical, real-world environments creating a mixed reality where relevant data and information augments the real or actual experience in real-time by spatial or semantic context. Pairing 3-D virtual immersive models with a dynamic platform such as semi-autonomous robotics or personnel odometry systems to create a mixed reality offers a new and innovative design information verification inspection capability, evaluation accuracy, and information gathering capability for nuclear facilities. Our paper discusses the integration of two innovative technologies, 3-D visualizations with inertial positioning systems, and the resulting augmented reality offered to the human inspector. The discussion in the paper includes an exploration of human and non-human (surrogate) inspections of a nuclear facility, integrated safeguards knowledge within a synchronized virtual model operated, or worn, by a human inspector, and the anticipated benefits to safeguards evaluations of facility operations.
Virtual interactive presence and augmented reality (VIPAR) for remote surgical assistance.
Shenai, Mahesh B; Dillavou, Marcus; Shum, Corey; Ross, Douglas; Tubbs, Richard S; Shih, Alan; Guthrie, Barton L
2011-03-01
Surgery is a highly technical field that combines continuous decision-making with the coordination of spatiovisual tasks. We designed a virtual interactive presence and augmented reality (VIPAR) platform that allows a remote surgeon to deliver real-time virtual assistance to a local surgeon, over a standard Internet connection. The VIPAR system consisted of a "local" and a "remote" station, each situated over a surgical field and a blue screen, respectively. Each station was equipped with a digital viewpiece, composed of 2 cameras for stereoscopic capture, and a high-definition viewer displaying a virtual field. The virtual field was created by digitally compositing selected elements within the remote field into the local field. The viewpieces were controlled by workstations mutually connected by the Internet, allowing virtual remote interaction in real time. Digital renderings derived from volumetric MRI were added to the virtual field to augment the surgeon's reality. For demonstration, a fixed-formalin cadaver head and neck were obtained, and a carotid endarterectomy (CEA) and pterional craniotomy were performed under the VIPAR system. The VIPAR system allowed for real-time, virtual interaction between a local (resident) and remote (attending) surgeon. In both carotid and pterional dissections, major anatomic structures were visualized and identified. Virtual interaction permitted remote instruction for the local surgeon, and MRI augmentation provided spatial guidance to both surgeons. Camera resolution, color contrast, time lag, and depth perception were identified as technical issues requiring further optimization. Virtual interactive presence and augmented reality provide a novel platform for remote surgical assistance, with multiple applications in surgical training and remote expert assistance.
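The compositing step described above (selecting elements of the remote field, captured against a blue screen, and merging them into the local field) is essentially chroma keying. A toy per-pixel sketch with invented RGB values; the actual VIPAR pipeline is not published at this level of detail, so the key colour, tolerance, and function names here are assumptions:

```python
def chroma_key_composite(local_px, remote_px, key=(0, 0, 255), tol=60):
    """Per-pixel compositing as in blue-screen capture: remote-station
    pixels that are close to the key colour (the blue screen) are kept
    from the local field; everything else from the remote field is
    merged in on top."""
    def near_key(px):
        return all(abs(c - k) <= tol for c, k in zip(px, key))
    return [l if near_key(r) else r for l, r in zip(local_px, remote_px)]

local  = [(120, 110, 100), (130, 115, 105)]   # local surgical field
remote = [(5, 10, 250), (200, 180, 170)]      # blue screen + remote hand
print(chroma_key_composite(local, remote))
# [(120, 110, 100), (200, 180, 170)]
```

The first remote pixel is within tolerance of the key colour, so the local field shows through; the second (the remote surgeon's hand) is composited into the virtual field.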
Virtual and Augmented Reality Systems for Renal Interventions: A Systematic Review.
Detmer, Felicitas J; Hettig, Julian; Schindele, Daniel; Schostak, Martin; Hansen, Christian
2017-01-01
Many virtual and augmented reality systems have been proposed to support renal interventions. This paper reviews such systems employed in the treatment of renal cell carcinoma and renal stones. A systematic literature search was performed. Inclusion criteria were virtual and augmented reality systems for radical or partial nephrectomy and renal stone treatment, excluding systems solely developed or evaluated for training purposes. In total, 52 research papers were identified and analyzed. Most of the identified literature (87%) deals with systems for renal cell carcinoma treatment. About 44% of the systems have already been employed in clinical practice, but only 20% in studies with ten or more patients. Main challenges remaining for future research include the consideration of organ movement and deformation, human factor issues, and the conduction of large clinical studies. Augmented and virtual reality systems have the potential to improve safety and outcomes of renal interventions. In the last ten years, many technical advances have led to more sophisticated systems, which are already applied in clinical practice. Further research is required to cope with current limitations of virtual and augmented reality assistance in clinical environments.
Zhu, Ming; Liu, Fei; Chai, Gang; Pan, Jun J.; Jiang, Taoran; Lin, Li; Xin, Yu; Zhang, Yan; Li, Qingfeng
2017-01-01
Augmented reality systems can combine virtual images with a real environment to ensure accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software. The real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the “integrated image” on semi-transparent glass. Via the registration process, the integrated image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes. Thus, this system exhibits significant potential in clinical applications. PMID:28198442
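A reported accuracy such as the 0.96 ± 0.51 mm position error above is typically computed as the mean and standard deviation of point-to-point distances between the overlaid virtual image and the tracked real landmarks. A minimal sketch of that calculation, with made-up fiducial coordinates (the paper does not publish its raw data):

```python
import math

def registration_error(virtual_pts, real_pts):
    """Mean and (population) standard deviation of point-to-point
    distances between the virtual overlay and the tracked real
    landmarks, in consistent units (here mm)."""
    dists = [math.dist(v, r) for v, r in zip(virtual_pts, real_pts)]
    mean = sum(dists) / len(dists)
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    return mean, std

# Hypothetical fiducial positions (mm): where the overlay placed each
# marker vs. where the tracker measured it.
virtual = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
real    = [(0.5, 0.0, 0.0), (10.0, 1.0, 0.0), (0.0, 10.0, 1.5)]
mean, std = registration_error(virtual, real)
print(f"{mean:.2f} +/- {std:.2f} mm")   # 1.00 +/- 0.41 mm
```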
NASA Astrophysics Data System (ADS)
De Mauro, Alessandro; Ardanza, Aitor; Monge, Esther; Molina Rueda, Francisco
2013-03-01
Several studies have shown that both virtual and augmented reality are technologies suitable for rehabilitation therapy due to the inherent ability of simulating real daily life activities while improving patient motivation. In this paper we will first present the state of the art in the use of virtual and augmented reality applications for rehabilitation of motor disorders and second we will focus on the analysis of the results of our project. In particular, requirements of patients with cerebrovascular accidents, spinal cord injuries and cerebral palsy to the use of virtual and augmented reality systems will be detailed.
A novel augmented reality system of image projection for image-guided neurosurgery.
Mahvash, Mehran; Besharati Tabrizi, Leila
2013-05-01
Augmented reality systems combine virtual images with a real environment. To design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) digital photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image onto the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented in a first evaluation with promising results. The virtual image could be projected onto the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was performed using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image onto the patient's head, skull, or brain surface in real time is an augmented reality system that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery.
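The manual registration step described above (adjusting the position and size of the projected virtual image until landmarks line up) amounts to fitting a scale and offset from virtual-image coordinates to phantom coordinates. A simplified two-landmark sketch, assuming no rotation; the coordinates and function names are invented for illustration:

```python
import math

def fit_scale_offset(v1, v2, p1, p2):
    """Fit s and (tx, ty) so that s * v + t maps two virtual-image
    landmarks onto their measured positions on the phantom (rotation
    assumed zero, as with a manually aligned, keystone-free projector)."""
    s = math.dist(p1, p2) / math.dist(v1, v2)
    t = (p1[0] - s * v1[0], p1[1] - s * v1[1])
    return s, t

def apply(s, t, pt):
    """Map a virtual-image point into phantom coordinates."""
    return (s * pt[0] + t[0], s * pt[1] + t[1])

# Hypothetical landmark coordinates (mm) in the virtual image vs. on
# the head phantom.
s, t = fit_scale_offset((0, 0), (100, 0), (10, 20), (210, 20))
print(apply(s, t, (50, 0)))   # (110.0, 20.0): midpoint lands midway
```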
Telemedicine with mobile devices and augmented reality for early postoperative care.
Ponce, Brent A; Brabston, Eugene W; Shin Zu; Watson, Shawna L; Baker, Dustin; Winn, Dennis; Guthrie, Barton L; Shenai, Mahesh B
2016-08-01
Advanced features are being added to telemedicine paradigms to enhance usability and usefulness. Virtual Interactive Presence (VIP) is a technology that allows a surgeon and patient to interact in a "merged reality" space, facilitating verbal, visual, and manual interaction. In this clinical study, a mobile VIP iOS application was introduced into routine post-operative orthopedic and neurosurgical care. Survey responses endorse the usefulness of this tool. The virtual interaction provides needed virtual follow-up in instances where in-person follow-up may be limited, and enhances the subjective patient experience.
Focus, locus, and sensus: the three dimensions of virtual experience.
Waterworth, E L; Waterworth, J A
2001-04-01
A model of virtual/physical experience is presented, which provides a three dimensional conceptual space for virtual and augmented reality (VR and AR) comprising the dimensions of focus, locus, and sensus. Focus is most closely related to what is generally termed presence in the VR literature. When in a virtual environment, presence is typically shared between the VR and the physical world. "Breaks in presence" are actually shifts of presence away from the VR and toward the external environment. But we can also have "breaks in presence" when attention moves toward absence--when an observer is not attending to stimuli present in the virtual environment, nor to stimuli present in the surrounding physical environment--when the observer is present in neither the virtual nor the physical world. We thus have two dimensions of presence: focus of attention (between presence and absence) and the locus of attention (the virtual vs. the physical world). A third dimension is the sensus of attention--the level of arousal determining whether the observer is highly conscious or relatively unconscious while interacting with the environment. After expanding on each of these three dimensions of experience in relation to VR, we present a couple of educational examples as illustrations, and also relate our model to a suggested spectrum of evaluation methods for virtual environments.
Perform light and optic experiments in Augmented Reality
NASA Astrophysics Data System (ADS)
Wozniak, Peter; Vauderwange, Oliver; Curticapean, Dan; Javahiraly, Nicolas; Israel, Kai
2015-10-01
In many scientific studies lens experiments are part of the curriculum. The experiments are meant to give students a basic understanding of the laws of optics and their applications. Most of the experiments need special hardware such as an optical bench, light sources, apertures, and different lens types. It is therefore not possible for students to conduct any of the experiments outside of the university's laboratory. Simple optical software simulators enabling students to virtually perform lens experiments already exist, but are mostly desktop or web-browser based. Augmented Reality (AR) is a special case of mediated and mixed reality concepts, where computers are used to add, subtract, or modify one's perception of reality. As a result of the success and widespread availability of handheld mobile devices such as tablet computers and smartphones, mobile augmented reality applications are easy to use. Augmented reality can be readily used to visualize a simulated optical bench. Students can interactively modify properties such as lens type, lens curvature, lens diameter, lens refractive index, and the positions of the instruments in space. Light rays can be visualized and promote an additional understanding of the laws of optics. An AR application like this is ideally suited to preparing for the actual laboratory sessions and/or recapping the teaching content. The authors will present their experience with handheld augmented reality applications and their possibilities for light and optics experiments without the need for specialized optical hardware.
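The core of such a virtual optical bench is the thin-lens equation, 1/f = 1/d_o + 1/d_i, relating focal length, object distance, and image distance. A minimal sketch of the computation an app like this would perform when the student moves a lens (function names are illustrative):

```python
def thin_lens_image_distance(f, d_o):
    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for the image
    distance d_i; units are arbitrary but must be consistent (e.g. cm)."""
    if d_o == f:
        return float("inf")   # rays emerge parallel: no real image forms
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(d_o, d_i):
    """Lateral magnification m = -d_i / d_o (negative => inverted image)."""
    return -d_i / d_o

d_i = thin_lens_image_distance(f=10.0, d_o=30.0)
print(round(d_i, 6))                       # 15.0
print(round(magnification(30.0, d_i), 6))  # -0.5
```

With the object three focal lengths away, the image forms at 1.5 focal lengths on the far side, inverted and half-size, which is what the AR ray visualization would show.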
Augmented Reality: A Brand New Challenge for the Assessment and Treatment of Psychological Disorders
Chicchi Giglioli, Irene Alice; Pallavicini, Federica; Pedroli, Elisa; Serino, Silvia; Riva, Giuseppe
2015-01-01
Augmented Reality is a new technological system that allows virtual content to be introduced into the real world so that both run in the same representation in real time, enhancing the user's sensory perception of reality. From another point of view, Augmented Reality can be defined as a set of techniques and tools that add information to the physical reality. To date, Augmented Reality has been used in many fields, such as medicine, entertainment, maintenance, architecture, education, and cognitive and motor rehabilitation, but very few studies and applications of AR exist in clinical psychology. In the treatment of psychological disorders, Augmented Reality has shown preliminary evidence of being a useful tool due to its adaptability to patient needs and therapeutic purposes, and its interactivity. Another relevant factor is the quality of the user's experience in the Augmented Reality system, determined by emotional engagement and sense of presence. This experience could increase the ecological validity of AR in the treatment of psychological disorders. This paper reviews recent studies on the use of Augmented Reality in the evaluation and treatment of psychological disorders, focusing on current uses of this technology and on the specific features that delineate Augmented Reality as a new technique useful for psychology. PMID:26339283
ERIC Educational Resources Information Center
Chang, Hsin-Yi; Wu, Hsin-Kai; Hsu, Ying-Shao
2013-01-01
virtual objects or information overlaying physical objects or environments, resulting in a mixed reality in which virtual objects and real environments coexist in a meaningful way to augment learning…
Mixed Reality with HoloLens: Where Virtual Reality Meets Augmented Reality in the Operating Room.
Tepper, Oren M; Rudy, Hayeem L; Lefkowitz, Aaron; Weimer, Katie A; Marks, Shelby M; Stern, Carrie S; Garfein, Evan S
2017-11-01
Virtual reality and augmented reality devices have recently been described in the surgical literature. The authors have previously explored various iterations of these devices, and although they show promise, it has become clear that virtual reality and/or augmented reality devices alone do not adequately meet the demands of surgeons. The solution may lie in a hybrid technology known as mixed reality, which merges many virtual reality and augmented reality features. Microsoft's HoloLens, the first commercially available mixed reality device, provides surgeons with intraoperative, hands-free access to complex data, the real environment, and bidirectional communication. This report describes the use of HoloLens in the operating room to improve decision-making and surgical workflow. The pace of mixed reality-related technological development will undoubtedly be rapid in the coming years, and plastic surgeons are ideally suited to both lead and benefit from this advance.
Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion
Harvie, Daniel S.; Smith, Ross T.; Hunter, Estin V.; Davis, Miles G.; Sterling, Michele; Moseley, G. Lorimer
2017-01-01
Background Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%-200%, the Motor Offset Visual Illusion (MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Results Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one.
The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain. PMID:28243537
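The offset manipulation described above amounts to scaling real head rotation by a visual gain before display; the following is a hedged sketch of that mapping (the function names are illustrative, not the authors' implementation).

```python
# Illustrative sketch of the MoOVi visual-gain manipulation: real neck
# rotation is multiplied by a gain before being shown in the headset,
# so a gain of 2.0 displays 100 degrees of virtual rotation for
# 50 degrees of real rotation.

def displayed_rotation(real_rotation_deg, visual_gain):
    """Virtual rotation shown in the headset for a given real rotation."""
    return real_rotation_deg * visual_gain

def kinaesthetic_discrepancy(real_rotation_deg, visual_gain):
    """Difference between what is shown and what the neck actually did;
    positive means the display suggests more movement than occurred."""
    return displayed_rotation(real_rotation_deg, visual_gain) - real_rotation_deg

# The study's conditions spanned gains of 0.5 to 2.0 at 50 deg of real rotation:
for gain in (0.5, 1.0, 1.5, 2.0):
    print(gain, displayed_rotation(50.0, gain), kinaesthetic_discrepancy(50.0, gain))
```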
Virtual reality and planetary exploration
NASA Technical Reports Server (NTRS)
Mcgreevy, Michael W.
1992-01-01
NASA-Ames is intensively developing virtual-reality (VR) capabilities that can extend and augment computer-generated and remote spatial environments. VR is envisioned not only as a basis for improving the human/machine interactions involved in planetary exploration, but also as a medium for more widespread sharing of the experience of exploration, thereby broadening the support base for lunar and planetary exploration endeavors. Imagery representative of Mars is being gathered for VR presentation at such terrestrial sites as Antarctica and Death Valley.
Al-Ardah, Aladdin; Alqahtani, Nasser; AlHelal, Abdulaziz; Goodacre, Brian; Swamidass, Rajesh; Garbacea, Antoanela; Lozada, Jaime
2018-05-02
This technique describes a novel approach for planning and augmenting a large bony defect using a titanium mesh (TiMe). A 3-dimensional (3D) surgical model was virtually created from cone beam computed tomography (CBCT) and a wax pattern of the final prosthetic outcome. The required bone volume (horizontal and vertical) was digitally augmented and then 3D printed to create a bone model. The 3D model was then used to contour the TiMe in accordance with the digital augmentation. With the contoured/preformed TiMe on the 3D printed model, a positioning jig was made to aid the placement of the TiMe as planned during surgery. Although this technique does not impact the final outcome of the augmentation procedure, it allows the clinician to virtually design the augmentation, preform and contour the TiMe, and create a positioning jig, reducing surgical time and error.
Encarnação, L Miguel; Bimber, Oliver
2002-01-01
Collaborative virtual environments for diagnosis and treatment planning are increasingly gaining importance in our global society. Virtual and Augmented Reality approaches promise to provide valuable means for the interactive data analysis involved, but the underlying technologies still create a cumbersome work environment that is inadequate for clinical employment. This paper addresses two of the shortcomings of such technology: intuitive interaction with multi-dimensional data in immersive and semi-immersive environments, as well as stereoscopic multi-user displays combining the advantages of Virtual and Augmented Reality technology.
Nifakos, Sokratis; Zary, Nabil
2014-01-01
Since antimicrobial resistance remains a global health issue, the research community has called for the development of effective educational interventions to address prescription behaviour. Examining the potential to shift the educational process from personal computers to mobile devices, in this paper we investigated a new method of integrating Virtual Patients into mobile devices with augmented reality technology, enriching the practitioner's education in prescription behaviour. Moreover, we also explored which information is critical during prescription-behaviour education, and we visualized this information in a real context with augmented reality technology, simultaneously with a running Virtual Patient scenario. Following this process, we set the educational frame of experiential knowledge in a mixed (virtual and real) environment.
Perceptual Issues of Augmented and Virtual Environments
2007-07-01
distinct preference for one eye over the other. This is typically, quickly, and easily found through sighting tests (Peli, 1990). This eye dominance...been researched extensively and for different purposes. The entertainment industry has also experimented with synthetic smell production, in the...form of accompanying smells to enhance the experience of films (Lefcowitz, 2001, Somerson, 2001). In the Aroma-Rama and the Smell-o-vision systems
Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion
Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo
2017-01-01
In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145
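The fusion idea described above can be illustrated with a minimal one-dimensional complementary filter: a high-rate inertial prediction is periodically corrected by a lower-rate visual pose. This is a hedged sketch of the general visual-inertial blending structure, not the paper's adaptive 6-DoF filter, and all names are our assumptions.

```python
def fuse(prev_angle, gyro_rate, dt, visual_angle=None, alpha=0.98):
    """Propagate orientation with the high-rate gyro, then blend in a
    visual measurement when one is available.

    alpha near 1.0 trusts the smooth inertial prediction (low jitter);
    a smaller alpha pulls harder toward vision (less drift, more jitter).
    """
    predicted = prev_angle + gyro_rate * dt   # inertial dead-reckoning
    if visual_angle is None:                  # no camera pose this step
        return predicted
    return alpha * predicted + (1.0 - alpha) * visual_angle

# 100 Hz gyro drifting at 10 deg/s; vision arrives every 10th sample
# reporting the true angle of 0.0, repeatedly pulling the estimate back.
angle = 0.0
for i in range(1, 101):
    vis = 0.0 if i % 10 == 0 else None
    angle = fuse(angle, 10.0, 0.01, vis)
```

An adaptive variant, as in the paper, would vary `alpha` with the detected motion situation to balance jitter against latency.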
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen
2002-02-01
In 2004, the European COLUMBUS Module is to be attached to the International Space Station. On the way to the successful planning, deployment, and operation of the module, computer-generated and animated models are being used to optimize performance. Under contract of the German Space Agency DLR, it has become IRF's task to provide a Projective Virtual Reality System: a virtual world built after the planned layout of the COLUMBUS module that lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise the possibility of distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Because the virtual world can be shared, cooperative operations can be practiced easily, and trainers and trainees can work together more effectively in the same virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online in order to interact realistically with the science-reference-model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs of the science-reference-model hardware, and the measured data is then fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data through metaphors and augmentations of the virtual world may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Currently, finishing touches are being put to the system. In November 2001 the virtual world shall become operational, so that besides the design and the key ideas, first experimental results can be presented.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2006-01-01
The visual requirements for augmented reality or virtual environment displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14°, 28°, and 47°) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47° are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.
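The trade-off behind partial binocular overlap can be made concrete with a small sketch (our own illustration, not from the report): two monocular channels share an overlapped central region, so shrinking the overlap widens the total horizontal field the same optics can cover.

```python
def total_horizontal_fov(monocular_fov_deg, overlap_deg):
    """Combined horizontal field of two eye channels, each spanning
    `monocular_fov_deg`, that share `overlap_deg` degrees in the middle."""
    if not 0.0 <= overlap_deg <= monocular_fov_deg:
        raise ValueError("overlap must lie within the monocular field")
    return 2.0 * monocular_fov_deg - overlap_deg

# Full overlap of two 47 deg channels leaves 47 deg total, while 50%
# overlap stretches the same optics to 70.5 deg:
full = total_horizontal_fov(47.0, 47.0)     # 47.0
partial = total_horizontal_fov(47.0, 23.5)  # 70.5
```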
Augmented Virtual Reality: How to Improve Education Systems
ERIC Educational Resources Information Center
Fernandez, Manuel
2017-01-01
This essay presents and discusses the developing role of virtual and augmented reality technologies in education. Addressing the challenges in adapting such technologies to focus on improving students' learning outcomes, the author discusses the inclusion of experiential modes as a vehicle for improving students' knowledge acquisition.…
ERIC Educational Resources Information Center
Brown, Abbie Howard
1999-01-01
Describes and discusses how simulation activities can be used in teacher education to augment the traditional field-experience approach, focusing on artificial intelligence, virtual reality, and intelligent tutoring systems. Includes an overview of simulation as a teaching and learning strategy and specific examples of high-technology simulations…
Ketelhut, Diane Jass; Niemi, Steven M
2007-01-01
This article examines several new and exciting communication technologies. Many of the technologies were developed by the entertainment industry; however, other industries are adopting and modifying them for their own needs. These new technologies allow people to collaborate across distance and time and to learn in simulated work contexts. The article explores the potential utility of these technologies for advancing laboratory animal care and use through better education and training. Descriptions include emerging technologies such as augmented reality and multi-user virtual environments, which offer new approaches with different capabilities. Augmented reality interfaces, characterized by the use of handheld computers to infuse the virtual world into the real one, result in deeply immersive simulations. In these simulations, users can access virtual resources and communicate with real and virtual participants. Multi-user virtual environments enable multiple participants to simultaneously access computer-based three-dimensional virtual spaces, called "worlds," and to interact with digital tools. They allow for authentic experiences that promote collaboration, mentoring, and communication. Because individuals may learn or train differently, it is advantageous to combine the capabilities of these technologies and applications with more traditional methods to increase the number of students who are served by using current methods alone. The use of these technologies in animal care and use programs can create detailed training and education environments that allow students to learn the procedures more effectively, teachers to assess their progress more objectively, and researchers to gain insights into animal care.
ERIC Educational Resources Information Center
Chen, Jingjing; Xu, Jianliang; Tang, Tao; Chen, Rongchao
2017-01-01
Interaction is critical for successful teaching and learning in a virtual learning environment (VLE). This paper presents a web-based interaction-aware VLE--WebIntera-classroom--which aims to augment learning interactions by increasing the learner-to-content and learner-to-instructor interactions. We design a ubiquitous interactive interface that…
ERIC Educational Resources Information Center
Orman, Evelyn K.; Price, Harry E.; Russell, Christine R.
2017-01-01
Acquiring nonverbal skills necessary to appropriately communicate and educate members of performing ensembles is essential for wind band conductors. Virtual reality learning environments (VRLEs) provide a unique setting for developing these proficiencies. For this feasibility study, we used an augmented immersive VRLE to enhance eye contact, torso…
Ntourakis, Dimitrios; Memeo, Ricardo; Soler, Luc; Marescaux, Jacques; Mutter, Didier; Pessaux, Patrick
2016-02-01
Modern chemotherapy achieves such shrinking of colorectal cancer liver metastases (CRLM) that they may disappear from radiological imaging. Disappearing CRLM rarely represent a complete pathological remission and carry an important risk of recurrence. Augmented reality (AR) consists in the fusion of real-time patient images with a computer-generated 3D virtual patient model created from pre-operative medical imaging. The aim of this prospective pilot study is to investigate the potential of AR navigation as a tool to help locate and surgically resect missing CRLM. A 3D virtual anatomical model was created from thoracoabdominal CT scans using custom software (VR RENDER(®), IRCAD). The virtual model was superimposed on the operative field using an Exoscope (VITOM(®), Karl Storz, Tüttlingen, Germany). Virtual and real images were manually registered in real time using a video mixer, based on external anatomical landmarks, with an estimated accuracy of 5 mm. This modality was tested in three patients, with four missing CRLM of 12 to 24 mm in size, undergoing laparotomy after pre-operative oxaliplatin-based chemotherapy. AR display and fine registration were performed within 6 min. AR helped detect all four missing CRLM and guided their resection. In all cases the planned security margin of 1 cm was clear, and resections were confirmed to be R0 by pathology. There was no postoperative major morbidity or mortality. No local recurrence occurred during the follow-up period of 6-22 months. This initial experience suggests that AR may be a helpful navigation tool for the resection of missing CRLM.
Ingress in Geography: Portals to Academic Success?
ERIC Educational Resources Information Center
Davis, Michael
2017-01-01
Niantic Labs has developed an augmented virtual reality mobile app game called Ingress in which agents must seek out and control locations for their designated factions. The app uses the Google Maps interface along with GPS to enhance a geocaching-like experience with elements of other classical games such as capture-the-flag. This study aims to…
Culbertson, Heather; Kuchenbecker, Katherine J
2017-01-01
Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
Optical augmented reality assisted navigation system for neurosurgery teaching and planning
NASA Astrophysics Data System (ADS)
Wu, Hui-Qun; Geng, Xing-Yun; Wang, Li; Zhang, Yuan-Peng; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng
2013-07-01
This paper proposes a convenient navigation system for neurosurgeons' pre-operative planning and teaching with the augmented reality (AR) technique, which maps three-dimensional reconstructed virtual anatomical structures onto a skull model. The system comprises two parts, a virtual reality system and a skull model scene. In our experiment, a 73-year-old right-handed man initially diagnosed with astrocytoma was selected as an example to verify our system. His imaging data from different modalities were registered, and the skull soft tissue, brain, and inside vessels as well as the tumor were reconstructed. The reconstructed models were then overlaid on the real scene. Our findings showed that the reconstructed tissues were augmented into the real scene and the registration results were in good alignment. The reconstructed brain tissue was well distributed in the skull cavity. The probe was used by a neurosurgeon to explore a surgical pathway that could be directed into the tumor while not injuring important vessels. In this way, the learning cost for students and the cost of educating patients about surgical risks are reduced. Therefore, this system could be a selective protocol for image-guided surgery (IGS), and it is promising for neurosurgeons' pre-operative planning and teaching.
Augmented virtuality for arthroscopic knee surgery.
Li, John M; Bardana, Davide D; Stewart, A James
2011-01-01
This paper describes a computer system to visualize the location and alignment of an arthroscope using augmented virtuality. A 3D computer model of the patient's joint (from CT) is shown, along with a model of the tracked arthroscopic probe and the projection of the camera image onto the virtual joint. A user study, using plastic bones instead of live patients, was made to determine the effectiveness of this navigated display; the study showed that the navigated display improves target localization in novice residents.
ERIC Educational Resources Information Center
Gavish, Nirit; Gutiérrez, Teresa; Webel, Sabine; Rodríguez, Jorge; Peveri, Matteo; Bockholt, Uli; Tecchia, Franco
2015-01-01
The current study evaluated the use of virtual reality (VR) and augmented reality (AR) platforms, developed within the scope of the SKILLS Integrated Project, for industrial maintenance and assembly (IMA) tasks training. VR and AR systems are now widely regarded as promising training platforms for complex and highly demanding IMA tasks. However,…
ERIC Educational Resources Information Center
Yang, Mau-Tsuen; Liao, Wan-Che
2014-01-01
The physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can be used to seamlessly merge virtual objects with real-world images to realize immersions. Additionally, computer vision (CV) technology can recognize free-hand gestures from live images to enable…
An optical tracking system for virtual reality
NASA Astrophysics Data System (ADS)
Hrimech, Hamid; Merienne, Frederic
2009-03-01
In this paper we present a low-cost 3D tracking system which we have developed and tested in order to move away from traditional 2D interaction techniques (keyboard and mouse) in an attempt to improve the user's experience when using a collaborative virtual environment (CVE). Such a tracking system is used to implement 3D interaction techniques that augment the user experience, promote the user's sense of transportation into the virtual world, and increase users' awareness of their partners. The tracking system is a passive optical tracking system using stereoscopy, a technique allowing the reconstruction of three-dimensional information from a pair of images. We have currently deployed our 3D tracking system on a collaborative research platform for investigating 3D interaction techniques in CVEs.
Distribution Locational Real-Time Pricing Based Smart Building Control and Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jun; Dai, Xiaoxiao; Zhang, Yingchen
This paper proposes a real-virtual parallel computing scheme for smart building operations aimed at augmenting overall social welfare. The University of Denver's campus power grid and Ritchie fitness center are used to demonstrate the proposed approach. An artificial virtual system is built in parallel to the real physical system to evaluate the overall social cost of building operation, based on a social-science-based working productivity model, a numerical-experiment-based building energy consumption model, and a power-system-based real-time pricing mechanism. Through interactive feedback exchanged between the real and virtual systems, enlarged social welfare, including monetary cost reduction and energy saving as well as working productivity improvements, can be achieved.
Helios: a tangible and augmented environment to learn optical phenomena in astronomy
NASA Astrophysics Data System (ADS)
Fleck, Stéphanie; Hachet, Martin
2015-10-01
France is among the few countries that have integrated astronomy into the primary school curriculum. However, over the past fifteen years, many studies have shown that children have difficulties understanding elementary astronomical phenomena such as the day/night alternation, the seasons, or the evolution of moon phases. To understand these phenomena, learners have to mentally construct 3D perceptions of celestial motions and to understand how light propagates, from an allocentric point of view. Therefore, grade 4-5 children (8 to 11 years old), who are still developing their spatial cognition, have many difficulties assimilating the geometric optics problems that are linked to astronomy. To make astronomy learning more efficient for young pupils, we have designed an Augmented Inquiry-Based Learning Environment (AIBLE): HELIOS. Because direct manipulations in astronomy are intrinsically impossible, we propose to manipulate the underlying model. With HELIOS, virtual replicas of the Sun, Moon, and Earth are directly controlled through tangible manipulations. This digital support combines the possibilities of Augmented Reality (AR) while maintaining intuitive interactions following the principles of the didactics of science. Light properties are taken into account, and the shadows of the Earth and Moon are directly produced by an omnidirectional light source associated with the virtual Sun. This AR environment provides users with experiences they would otherwise not be able to have in the physical world. Our main goal is that students can take active control of their learning, express and support their ideas, make predictions and hypotheses, and test them by conducting investigations.
NASA Astrophysics Data System (ADS)
Simonetto, E.; Froment, C.; Labergerie, E.; Ferré, G.; Séchet, B.; Chédorge, H.; Cali, J.; Polidori, L.
2013-07-01
Terrestrial Laser Scanning (TLS), 3-D modeling and Web visualization are the three key steps needed to store cultural heritage and grant free, wide access to it, as highlighted in many recent examples. The goal of this study is to set up 3-D Web resources for "virtually" visiting the exterior of the Abbaye de l'Epau, an old French abbey with both a rich history and delicate architecture. The virtuality is considered in two ways: flowing navigation around the abbey in a virtual reality environment, and a game activity using augmented reality. First, data acquisition consists of a GPS and tacheometry survey, terrestrial laser scanning, and photography. After data pre-processing, the meshed and textured 3-D model is generated using the 3-D Reshaper commercial software. The virtual reality visit and augmented reality animation are then created using the Unity software. This work shows the value of such tools in bringing out regional cultural heritage and making it attractive to the public.
NASA Astrophysics Data System (ADS)
Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry
2016-03-01
In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional steps to manually define and initialize the virtual models. Furthermore, overlaying virtual elements onto real-time image data can obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools, presented in an augmented virtuality environment, to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work on augmented virtuality techniques, and considerably better than standard-of-care ultrasound guidance.
Media-Augmented Exercise Machines
NASA Astrophysics Data System (ADS)
Krueger, T.
2002-01-01
Cardio-vascular exercise has been used to mitigate the muscle and cardiac atrophy associated with adaptation to micro-gravity environments. Several hours per day may be required. In confined spaces and on long-duration missions this kind of exercise is inevitably repetitive and rapidly becomes uninteresting. At the same time, there are pressures to accomplish as much as possible given the cost per hour for humans occupying orbiting or interplanetary spacecraft. Media augmentation provides a means to overlap activities in time by supplementing the exercise with social, recreational, training or collaborative activities, thereby reducing time pressures. In addition, the machine functions as an interface to a wide range of digital environments, allowing for spatial variety in an otherwise confined environment. We hypothesize that the adoption of media-augmented exercise machines will have a positive effect on psycho-social well-being on long-duration missions. By organizing and supplementing exercise machines, data acquisition hardware, computers and displays into an interacting system, this proposal increases functionality with limited additional mass. This paper reviews preliminary work on a project to augment exercise equipment in a manner that addresses these issues and at the same time opens possibilities for additional benefits. A testbed augmented exercise machine uses a specially built cycle trainer as both input to a virtual environment and an output device from it, using spatialized sound, visual displays, vibration transducers and variable resistance. The resulting interactivity increases the sense of engagement in the exercise and provides a rich experience of the digital environments. Activities in the virtual environment and accompanying physiological and psychological indicators may be correlated to track and evaluate the health of the crew.
NASA Astrophysics Data System (ADS)
Starodubtsev, Illya
2017-09-01
The paper describes the implementation of a gesture-based system for interacting with virtual objects. It discusses the common problems of interaction with virtual objects and the specific requirements such interfaces must meet in virtual and augmented reality.
Markerless client-server augmented reality system with natural features
NASA Astrophysics Data System (ADS)
Ning, Shuangning; Sang, Xinzhu; Chen, Duo
2017-10-01
A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The head-mounted display presents an image directly in front of the viewer's eyes. A front-facing camera captures video signals and sends them to the workstation, where the generated virtual scene is merged with the outside-world information received from the camera; the integrated video is then sent back to the helmet display system. The distinguishing feature and novelty is realizing augmented reality with natural features instead of markers, which addresses the limitations of markers: they are restricted to black and white, are unsuitable for varied environmental conditions, and in particular fail when partially occluded. Further, 3D stereoscopic perception of virtual animation models is achieved. A high-speed and stable native socket communication method is adopted for transmission of the key video-stream data, which reduces the computational burden on the system.
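The abstract does not specify the wire format of the socket transmission; a common pattern for streaming frame data is length-prefixed framing, sketched below as an assumption rather than the paper's actual protocol (the frame bytes are placeholders, not real video).

```python
# Minimal sketch of length-prefixed framing for video-stream data over a
# native socket, as a client-server AR pipeline might use (illustrative).
import socket
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian frame length

def send_frame(sock: socket.socket, frame: bytes) -> None:
    """Prefix the frame with its length so the receiver knows where it ends."""
    sock.sendall(HEADER.pack(len(frame)) + frame)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping because recv may return partial data."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    """Read the length header, then the frame body."""
    (length,) = HEADER.unpack(recv_exact(sock, HEADER.size))
    return recv_exact(sock, length)

# Loopback demonstration with a connected socket pair.
client, server = socket.socketpair()
send_frame(client, b"\x00\x01\x02" * 100)  # placeholder "frame"
received = recv_frame(server)
client.close(); server.close()
```

Framing matters because TCP is a byte stream: without the length prefix, the receiver cannot tell where one frame ends and the next begins.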
Augmented kinematic feedback from haptic virtual reality for dental skill acquisition.
Suebnukarn, Siriwan; Haddawy, Peter; Rhienmora, Phattanapon; Jittimanee, Pannapa; Viratket, Piyanuch
2010-12-01
We have developed a haptic virtual reality system for dental skill training. In this study we examined whether several kinds of kinematic information about movement provided by the system could supplement knowledge of results (KR) in dental skill acquisition. The kinematic variables examined involved force utilization (F) and mirror view (M). This created three experimental conditions that received augmented kinematic feedback (F, M, FM) and one control condition that did not (KR-only). Thirty-two dental students were randomly assigned to the four groups. Their task was to perform access opening on the upper first molar with the haptic virtual reality system. The acquisition session consisted of two days of ten practice trials, in which augmented kinematic feedback was provided to the appropriate experimental conditions after each trial. One week later, a retention test consisting of two trials without augmented feedback was completed. The results showed that the augmented kinematic feedback groups had larger mean performance scores than the KR-only group on Day 1 of the acquisition session and in the retention session (ANOVA, p<0.05). The apparent differences among feedback groups were not significant on Day 2 of the acquisition session (ANOVA, p>0.05). The trends across the acquisition and retention sessions suggest that augmented kinematic feedback can enhance performance early in skill acquisition and at retention.
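The group comparisons above rely on one-way ANOVA. A minimal pure-Python sketch of the F statistic follows; the performance scores are hypothetical, not the study's data.

```python
# Illustrative one-way ANOVA F statistic in pure Python, of the kind used
# to compare feedback conditions (scores below are made up).

def one_way_anova_f(groups):
    """Return the F statistic for a list of sample groups."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    k = len(groups)          # number of groups
    n = len(all_values)      # total observations
    # Between-group sum of squares (df = k - 1)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = n - k)
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    msb = ssb / (k - 1)
    msw = ssw / (n - k)
    return msb / msw

# Hypothetical performance scores for three feedback conditions.
f_stat = one_way_anova_f([[78, 82, 85], [70, 72, 75], [60, 65, 63]])
```

A large F means the variation between condition means is big relative to the variation within conditions; the p-value then comes from the F distribution with (k-1, n-k) degrees of freedom.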
Corrêa, Ana Grasielle Dionísio; de Assis, Gilda Aparecida; do Nascimento, Marilena; de Deus Lopes, Roseli
2017-04-01
Augmented Reality musical software (GenVirtual) is a technology that primarily allows users to develop music activities for rehabilitation. This study aimed to analyse the perceptions of health care professionals regarding the clinical utility of GenVirtual. A second objective was to identify improvements to the GenVirtual software and similar technologies. Music therapists, occupational therapists, physiotherapists and a speech and language therapist who assist people with physical and cognitive disabilities were enrolled in three focus groups. The quantitative and qualitative data were analysed through inductive thematic analysis. Three main themes were identified: the use of GenVirtual in health care areas; opportunities for realistic application of GenVirtual; and limitations in the use of GenVirtual. The units of meaning identified were: motor stimulation, cognitive stimulation, verbal learning, recreation activity, musicality, accessibility, motivation, sonic accuracy, interference of lighting, poor sound, children, and adults. This research suggests that GenVirtual is a complementary tool to conventional clinical practice with great potential for the motor and cognitive rehabilitation of children and adults. Implications for Rehabilitation: Health professionals' perceptions of the Augmented Reality musical game (GenVirtual) give valuable information as to the clinical utility of the software. GenVirtual was perceived as a tool that could enhance the motor and cognitive rehabilitation process. GenVirtual was viewed as a tool that could enhance clinical practice and communication among various agencies, but it was suggested that it should be used with caution to avoid confusion and the replacement of important services.
The virtual mirror: a new interaction paradigm for augmented reality environments.
Bichlmeier, Christoph; Heining, Sandro Michael; Feuerstein, Marco; Navab, Nassir
2009-09-01
Medical augmented reality (AR) has been widely discussed within the medical imaging and computer-aided surgery communities. Different systems for exemplary medical applications have been proposed, some of which have produced promising results. One major issue still hindering the regular use of AR technology in medical applications is the interaction between the physician and the superimposed 3-D virtual data. Classical interaction paradigms, for instance keyboard and mouse, are not adequate for interacting with visualized medical 3-D imaging data in an AR environment. This paper introduces the concept of a tangible, controllable Virtual Mirror for medical AR applications. This concept intuitively augments the surgeon's direct view with all desired views of volumetric medical imaging data registered with the operation site, without moving around the operating table or displacing the patient. We selected two medical procedures to demonstrate and evaluate the potential of the Virtual Mirror for the surgical workflow. Results confirm the intuitiveness of this new paradigm and its perceptive advantages for AR-based computer-aided interventions.
Augmented Reality for the Improvement of Remote Laboratories: An Augmented Remote Laboratory
ERIC Educational Resources Information Center
Andujar, J. M.; Mejias, A.; Marquez, M. A.
2011-01-01
Augmented reality (AR) provides huge opportunities for online teaching in science and engineering, as these disciplines place emphasis on practical training and are unsuited to completely non-classroom training. This paper proposes a new concept in virtual and remote laboratories: the augmented remote laboratory (ARL). ARL is being tested in the first…
Virtual reality and planetary exploration
NASA Technical Reports Server (NTRS)
Mcgreevy, Michael W.
1992-01-01
Exploring planetary environments is central to NASA's missions and goals. A new computing technology called Virtual Reality has much to offer in support of planetary exploration. This technology augments and extends human presence within computer-generated and remote spatial environments. Historically, NASA has been a leader in many of the fundamental concepts and technologies that comprise Virtual Reality. Indeed, Ames Research Center has a central role in the development of this rapidly emerging approach to using computers. This groundbreaking work has inspired researchers in academia, industry, and the military. Further, NASA's leadership in this technology has spun off new businesses, has caught the attention of the international business community, and has generated several years of positive international media coverage. In the future, Virtual Reality technology will enable greatly improved human-machine interactions for more productive planetary surface exploration. Perhaps more importantly, Virtual Reality technology will democratize the experience of planetary exploration and thereby broaden understanding of, and support for, this historic enterprise.
Kiryu, Tohru; So, Richard H Y
2007-09-25
Around three years ago, in the special issue on augmented and virtual reality in rehabilitation, the topic of simulator sickness was briefly discussed in relation to vestibular rehabilitation. Simulator sickness with virtual reality applications has also been referred to as visually induced motion sickness or cybersickness. Recently, studies on cybersickness have been reported in entertainment, training, gaming, and medical settings in several journals. Virtual stimuli can enlarge the sensation of presence, but they sometimes also evoke unpleasant sensations. In order to safely apply augmented and virtual reality to long-term rehabilitation treatment, the sensation of presence and cybersickness should be appropriately controlled. This issue presents the results of five studies conducted to evaluate visually induced effects and to speculate on the influences of virtual rehabilitation. In particular, the influence of visual and vestibular stimuli on cardiovascular responses is reported.
Teaching Basic Field Skills Using Screen-Based Virtual Reality Landscapes
NASA Astrophysics Data System (ADS)
Houghton, J.; Robinson, A.; Gordon, C.; Lloyd, G. E. E.; Morgan, D. J.
2016-12-01
We are using screen-based virtual reality landscapes, created with the Unity 3D game engine, to augment the training geoscience students receive in preparing for fieldwork. Students explore these landscapes as they would real ones, interacting with virtual outcrops to collect data, determine location, and map the geology. Skills for conducting field geological surveys - collecting, plotting and interpreting data; time management and decision making - are introduced interactively and intuitively. As with real landscapes, the virtual landscapes are open-ended terrains with embedded data. The game therefore does not structure students' interaction with the information; it is through experience that students learn the best methods to work successfully and efficiently. These virtual landscapes are not replacements for geological fieldwork but rather virtual spaces between classroom and field in which to train and reinforce essential skills. Importantly, they offer accessible parallel provision for students unable to visit, or fully partake in visiting, the field. The project has received positive feedback from both staff and students. Results show students find it easier to focus on learning these basic field skills in a classroom rather than a field setting, and they make the same mistakes as when learning in the field, validating the realistic nature of the virtual experience and providing an opportunity to learn from those mistakes. The approach also saves time, and therefore resources, in the field because basic skills are already embedded. 70% of students report increased confidence in mapping boundaries, and 80% have found the virtual training a useful experience. We are also developing landscapes based on real places with 3D photogrammetric outcrops, and a virtual urban landscape in which Engineering Geology students can conduct a site investigation.
This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all our virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.
Augmenting the access grid using augmented reality
NASA Astrophysics Data System (ADS)
Li, Ying
2012-01-01
The Access Grid (AG) targets an advanced collaboration environment with which multi-party groups of people at remote sites can collaborate over high-performance networks. However, the current AG still employs VIC (Video Conferencing Tool) to offer only plain video for remote communication, while most AG users expect to collaboratively refer to and manipulate 3D geometric models of grid services' results within the live video of an AG session. Augmented Reality (AR) techniques can overcome these deficiencies through their characteristic combination of the virtual and the real, real-time interaction, and 3D registration, so it is worthwhile for the AG to utilize AR to better support the advanced collaboration environment. This paper introduces an effort to augment the AG by adding support for AR capability, encapsulated in the node service infrastructure as the Augmented Reality Service (ARS). The ARS can merge 3D geometric models of grid services' results with the real video scene of the AG into one AR environment, and gives distributed AG users the opportunity to participate interactively and collaboratively in that AR environment with a better experience.
Hung, Andrew J; Shah, Swar H; Dalag, Leonard; Shin, Daniel; Gill, Inderbir S
2015-08-01
We developed a novel procedure-specific simulation platform for robotic partial nephrectomy. In this study we prospectively evaluated its face, content, construct, and concurrent validity. This hybrid platform features augmented reality and virtual reality. The augmented reality component consists of 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills, and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, n=15), intermediate (less than 100 robotic cases, n=13), or expert (100 or more robotic cases, n=14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). A post-study questionnaire was used to assess the realism of the simulation (face validity) and its usefulness for training (content validity). Concurrent validity was evaluated as the correlation between the virtual reality renorrhaphy task and live porcine robotic partial nephrectomy performance (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10), but only moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). On the virtual reality renorrhaphy task, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p<0.0001) (concurrent validity). This augmented reality simulation platform displayed face, content, and construct validity. Performance on the procedure-specific virtual reality task correlated highly with the porcine model (concurrent validity).
Future efforts will integrate procedure specific virtual reality tasks and their global assessment. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
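The Spearman rank correlation underlying the concurrent-validity analysis above can be sketched in a few lines. The sketch assumes no tied scores for brevity, and the scores below are hypothetical, not the study's data.

```python
# Sketch of Spearman's rank correlation (no-ties case): rank each variable,
# then compute the Pearson correlation of the ranks.

def ranks(values):
    """Return 1-based ranks, assuming no tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rho for equal-length samples without ties."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2  # mean of ranks 1..n
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    # Without ties, both rank vectors have the same variance.
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

vr_scores = [55, 72, 80, 91, 64]       # hypothetical VR renorrhaphy scores
porcine_scores = [50, 70, 78, 95, 60]  # hypothetical live performance
rho = spearman_rho(vr_scores, porcine_scores)
```

Because Spearman's rho depends only on rank order, it captures the monotone relationship between simulator and live performance without assuming the scores are on comparable scales.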
NASA Astrophysics Data System (ADS)
Barrile, V.; Bilotta, G.; Meduri, G. M.; De Carlo, D.; Nunnari, A.
2017-11-01
In this study, we investigated the potential of technologies such as laser scanning and ground-penetrating radar (GPR) for cultural heritage. For the processing stage, we compared the results obtained with various commercial software packages against algorithms we developed and implemented in Matlab. Moreover, Virtual Reality and Augmented Reality make it possible to integrate the real world with historical-artistic information, laser scanner and georadar (GPR) data, and virtual objects, virtually enriching it with multimedia elements and with graphic and textual information accessible through smartphones and tablets.
NASA Astrophysics Data System (ADS)
Ribeiro, Allan; Santos, Helen
With the advent of new information and communication technologies (ICTs), communicative interaction changes the way people are and act, and at the same time changes work activities related to education. Among the possibilities provided by advances in computational resources, virtual reality (VR) and augmented reality (AR) stand out as new forms of information visualization in computer applications. While VR allows user interaction with a virtual environment that is entirely computer generated, in AR virtual images are inserted into the real environment; both create new opportunities to support teaching and learning in formal and informal contexts. Such technologies are able to express representations of reality or of the imagination, such as systems at the nanoscale and in low dimensionality, making it imperative to explore, in the most diverse areas of knowledge, the potential offered by ICTs and emerging technologies. In this sense, this work presents virtual and augmented reality computer applications developed with the use of modeling and simulation in computational approaches to topics related to nanoscience and nanotechnology, articulated with innovative pedagogical practices.
Ben-Moussa, Maher; Rubo, Marius; Debracque, Coralie; Lange, Wolf-Gero
2017-01-01
The present paper explores the benefits and the capabilities of various emerging state-of-the-art interactive 3D and Internet of Things technologies and investigates how these technologies can be exploited to develop a more effective technology supported exposure therapy solution for social anxiety disorder. "DJINNI" is a conceptual design of an in vivo augmented reality (AR) exposure therapy mobile support system that exploits several capturing technologies and integrates the patient's state and situation by vision-based, audio-based, and physiology-based analysis as well as by indoor/outdoor localization techniques. DJINNI also comprises an innovative virtual reality exposure therapy system that is adaptive and customizable to the demands of the in vivo experience and therapeutic progress. DJINNI follows a gamification approach where rewards and achievements are utilized to motivate the patient to progress in her/his treatment. The current paper reviews the state of the art of technologies needed for such a solution and recommends how these technologies could be integrated in the development of an individually tailored and yet feasible and effective AR/virtual reality-based exposure therapy. Finally, the paper outlines how DJINNI could be part of classical cognitive behavioral treatment and how to validate such a setup.
Immersive realities: articulating the shift from VR to mobile AR through artistic practice
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy; Berry, Rodney; DeFanti, Thomas A.
2012-03-01
Our contemporary imaginings of technological engagement with digital environments have transitioned from flying through Virtual Reality to mobile interactions with the physical world through personal media devices. Experiences technologically mediated through social interactivity within physical environments are now preferred over isolated environments such as CAVEs or HMDs. Examples of this trend can be seen in early tele-collaborative artworks, which strove to use advanced networking to join multiple participants in shared virtual environments. Recent developments in mobile AR allow untethered access to such shared realities in places far removed from labs and home entertainment environments, and without the bulky and expensive technologies attached to our bodies that accompany most VR. This paper addresses the emerging trend favoring socially immersive artworks via mobile Augmented Reality over sensorially immersive Virtual Reality installations. With particular focus on AR as a mobile, locative technology, we discuss how concepts of immersion and interactivity are evolving with this new medium. Immersion in the context of mobile AR can be redefined to describe socially interactive experiences. Having distinctly different sensory, spatial and situational properties, mobile AR offers a new form for remixing elements of traditional virtual reality with physically based social experiences. This type of immersion offers a wide array of potential mobile AR art forms. We are beginning to see examples of how artists can use mobile AR to create socially immersive and interactive experiences.
Custom Titanium Ridge Augmentation Matrix (CTRAM): A Case Report.
Connors, Christopher A; Liacouras, Peter C; Grant, Gerald T
2016-01-01
This is a case report of a custom titanium ridge augmentation matrix (CTRAM). Using cone beam computed tomography (CBCT), a custom titanium space-maintaining device was developed. Alveolar ridges were virtually augmented, a matrix was virtually designed, and the CTRAM was additively manufactured with titanium (Ti6Al4V). Two cases are presented that resulted in sufficient increased horizontal bone volume with successful dental implant placement. The CTRAM design allows for preoperative planning for increasing alveolar ridge dimensions to support dental implants, reduces surgical time, and prevents the need for a second surgical site to gain sufficient alveolar ridge bone volume for dental implant therapy.
Three-Dimensional Sensor Common Operating Picture (3-D Sensor COP)
2017-01-01
created. Additionally, a 3-D model of the sensor itself can be created. Using these 3-D models, along with emerging virtual and augmented reality tools...
ARSC: Augmented Reality Student Card--An Augmented Reality Solution for the Education Field
ERIC Educational Resources Information Center
El Sayed, Neven A. M.; Zayed, Hala H.; Sharawy, Mohamed I.
2011-01-01
Augmented Reality (AR) is the technology of adding virtual objects to real scenes, enabling the addition of missing information to real life. As a lack of resources is a problem that AR can solve, this paper presents and explains the use of AR technology and introduces the Augmented Reality Student Card (ARSC) as an application of…
von Segesser, Ludwig Karl; Berdajs, Denis; Abdel-Sayed, Saad; Tozzi, Piergiorgio; Ferrari, Enrico; Maisano, Francesco
2016-01-01
Inadequate venous drainage during minimally invasive cardiac surgery becomes most evident when the blood trapped in the pulmonary circulation floods the surgical field. The present study was designed to assess the in vivo performance of new, thinner, virtually wall-less, venous cannulas designed for augmented venous drainage in comparison to traditional thin-wall cannulas. Remote cannulation was realized in 5 bovine experiments (74.0 ± 2.4 kg) with percutaneous venous access over the wire, serial dilation up to 18 F and insertion of either traditional 19 F thin wall, wire-wound cannulas, or through the same access channel, new, thinner, virtually wall-less, braided cannulas designed for augmented venous drainage. A standard minimal extracorporeal circuit set with a centrifugal pump and a hollow fiber membrane oxygenator, but no in-line reservoir was used. One hundred fifty pairs of pump-flow and required pump inlet pressure values were recorded with calibrated pressure transducers and a flowmeter calibrated by a volumetric tank and timer at increasing pump speed from 1500 RPM to 3500 RPM (500-RPM increments). Pump flow accounted for 1.73 ± 0.85 l/min for wall-less versus 1.17 ± 0.45 l/min for thin wall at 1500 RPM, 3.91 ± 0.86 versus 3.23 ± 0.66 at 2500 RPM, 5.82 ± 1.05 versus 4.96 ± 0.81 at 3500 RPM. Pump inlet pressure accounted for 9.6 ± 9.7 mm Hg versus 4.2 ± 18.8 mm Hg for 1500 RPM, -42.4 ± 26.7 versus -123 ± 51.1 at 2500 RPM, and -126.7 ± 55.3 versus -313 ± 116.7 for 3500 RPM. At the well-accepted pump inlet pressure of -80 mm Hg, the new, thinner, virtually wall-less, braided cannulas provide unmatched venous drainage in vivo. Early clinical analyses have confirmed these findings.
Cranial implant design using augmented reality immersive system.
Ai, Zhuming; Evenhouse, Ray; Leigh, Jason; Charbel, Fady; Rasmussen, Mary
2007-01-01
Software tools that utilize haptics for sculpting precise-fitting cranial implants are utilized in an augmented reality immersive system to create a virtual working environment for the modelers. The virtual environment is designed to mimic the traditional working environment as closely as possible, providing more functionality for the users. The implant design process uses patient CT data of a defective area. This volumetric data is displayed in an implant modeling tele-immersive augmented reality system where the modeler can build a patient-specific implant that precisely fits the defect. To mimic the traditional sculpting workspace, the implant modeling augmented reality system includes stereo vision, viewer-centered perspective, a sense of touch, and collaboration. To achieve optimized performance, this system includes a dual-processor PC, fast volume rendering with three-dimensional texture mapping, a fast haptic rendering algorithm, and a multi-threading architecture. The system replaces the expensive and time-consuming traditional sculpting steps such as physical sculpting, mold making, and defect stereolithography. This augmented reality system is part of a comprehensive tele-immersive system that includes a conference-room-sized system for tele-immersive small-group consultation and an inexpensive, easily deployable networked desktop virtual reality system for surgical consultation, evaluation and collaboration. This system has been used to design patient-specific cranial implants with a precise fit.
Virtual reality and hallucination: a technoetic perspective
NASA Astrophysics Data System (ADS)
Slattery, Diana R.
2008-02-01
Virtual Reality (VR), especially in a technologically focused discourse, is defined by a class of hardware and software, among them head-mounted displays (HMDs), navigation and pointing devices, and stereoscopic imaging. This presentation examines the experiential aspect of VR. Putting "virtual" in front of "reality" modifies the ontological status of a class of experience: that of "reality." Reality has also been modified [by artists, new media theorists, technologists and philosophers] as augmented, mixed, simulated, artificial, layered, and enhanced. Modifications of reality are closely tied to modifications of perception. Media theorist Roy Ascott creates a model of three "VRs": Verifiable Reality, Virtual Reality, and Vegetal (entheogenically induced) Reality. The ways in which we shift our perceptual assumptions, create and verify illusions, and enter "the willing suspension of disbelief" that allows us entry into imaginal worlds are central to the experience of VR worlds, whether those worlds are explicitly representational (robotic manipulations by VR) or explicitly imaginal (VR artistic creations). The early rhetoric surrounding VR was interwoven with psychedelics, a perception amplified by Timothy Leary's presence on the historic SIGGRAPH panel, and the Wall Street Journal's tag of VR as "electronic LSD." This paper discusses the philosophical, social-historical, and psychological-perceptual connections between these two domains.
Perspectives on future space robotics
NASA Technical Reports Server (NTRS)
Lavery, Dave
1994-01-01
Last year's flight of the German ROTEX robot flight experiment heralded the start of a new era for space robotics. ROTEX is the first of at least 10 new robotic systems and experiments that will fly before 2000. These robots will augment astronaut on-orbit capabilities and extend virtual human presence to lunar and planetary surfaces. The robotic systems to be flown in the next five years fall into three categories: extravehicular robotic (EVR) servicers, science payload servicers, and planetary surface rovers. A description of the work on these systems is presented.
AMI: Augmented Michelson Interferometer
NASA Astrophysics Data System (ADS)
Furió, David; Hachet, Martin; Guillet, Jean-Paul; Bousquet, Bruno; Fleck, Stéphanie; Reuter, Patrick; Canioni, Lionel
2015-10-01
Experiments in optics are essential for learning and understanding physical phenomena. The problem with these experiments is that they are generally time-consuming for both their construction and their maintenance, potentially dangerous through the use of laser sources, and often expensive due to high-technology optical components. We propose to simulate such experiments by way of hybrid systems that exploit both spatial augmented reality and tangible interaction. In particular, we focus on one of the most popular optical experiments: the Michelson interferometer. In our approach, we target a highly interactive system where students are able to interact in real time with the Augmented Michelson Interferometer (AMI) to observe, test hypotheses, and then enhance their comprehension. Compared to a fully digital simulation, we are investigating an approach that benefits from both physical and virtual elements, and where the students experiment by manipulating 3D-printed physical replicas of optical components (e.g. lenses and mirrors). Our objective is twofold. First, we want to ensure that students will learn with our simulator the same concepts and skills that they learn with traditional methods. Second, we hypothesize that such a system opens new opportunities to teach optics in a way that was not possible before, by manipulating concepts beyond the limits of observable physical phenomena. To reach this goal, we have built a complementary team composed of experts in the fields of optics, human-computer interaction, computer graphics, sensors and actuators, and education science.
NASA Astrophysics Data System (ADS)
Strzys, M. P.; Kapp, S.; Thees, M.; Klein, P.; Lukowicz, P.; Knierim, P.; Schmidt, A.; Kuhn, J.
2018-05-01
Fundamental concepts of thermodynamics rely on abstract physical quantities such as energy, heat and entropy, which play an important role in the process of interpreting thermal phenomena and statistical mechanics. However, these quantities are not accessible to human visual perception, and since heat sensation is purely qualitative and easy to deceive, an intuitive understanding is often lacking. Today immersive technologies like head-mounted displays of the newest generation, especially HoloLens, allow for high-quality augmented reality learning experiences, which can overcome this gap in human perception by presenting different representations of otherwise invisible quantities directly in the user's field of view on the experimental apparatus, which simultaneously avoids a split-attention effect. In a mixed reality (MR) scenario as presented in this paper—which we call a holo.lab—human perception can be extended to the thermal regime by presenting false-color representations of the temperature of objects as a virtual augmentation directly on the real object itself in real time. Direct feedback to the users' experimental actions in the form of different representations allows for immediate comparison to theoretical principles and predictions, and therefore is expected to intensify theory–experiment interactions and increase students' conceptual understanding. We tested this technology for an experiment on the thermal conduction of metals in the framework of undergraduate laboratories. A pilot study with treatment and control groups (N = 59) showed a small positive effect of MR on students' performance measured with a standardized concept test for thermodynamics, pointing to an improvement in the understanding of the underlying physical concepts. These findings indicate that complex experiments could benefit even more from augmentation. This motivates us to enrich further experiments with MR.
A spatially augmented reality sketching interface for architectural daylighting design.
Sheng, Yu; Yapo, Theodore C; Young, Christopher; Cutler, Barbara
2011-01-01
We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the interreflection between diffuse patches, and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation. © 2011 IEEE. Published by the IEEE Computer Society.
Explore and experience: mobile augmented reality for medical training.
Albrecht, Urs-Vito; Noll, Christoph; von Jan, Ute
2013-01-01
In medicine, especially in basic education, it may sometimes be inappropriate to integrate real patients into classes due to ethical issues that must be avoided. Nevertheless, the quality of medical education may suffer without the use of real cases. This is especially true of medical specialties such as legal medicine: survivors of a crime are already subjected to procedures that constitute a severe emotional burden and may cause additional distress even without the added presence of students. Using augmented reality-based applications may alleviate this ethical dilemma by giving students the possibility to practice the necessary skills based on virtual but nevertheless almost realistic cases. The app "mARble®" that is presented in this paper follows this approach. The currently available learning module for legal medicine gives users an opportunity to learn about various wound patterns by virtually overlaying them on their own skin and is applicable in different learning settings. Preliminary evaluation results covering learning efficiency and emotional components of the learning process are promising. Content modules for other medical specialties are currently under construction.
Virtual reality, augmented reality…I call it i-Reality.
Grossmann, Rafael J
2015-01-01
The new term improved reality (i-Reality) is suggested to include virtual reality (VR) and augmented reality (AR). It refers to a real world that includes improved, enhanced and digitally created features that would offer an advantage on a particular occasion (e.g., a medical act). I-Reality may help us bridge the gap between the high demand for medical providers and the low supply of them by improving the interaction between providers and patients.
Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung
2016-03-01
Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. Copyright © 2015 John Wiley & Sons, Ltd.
A brief review of augmented reality science learning
NASA Astrophysics Data System (ADS)
Gopalan, Valarmathie; Bakar, Juliana Aida Abu; Zulkifli, Abdul Nasir
2017-10-01
This paper reviews the literature concerning theories and models that could be applied to science motivation for upper secondary school learners (16-17 years old), in order to make the learning experience more engaging and useful. The embedment of AR in science could bring an awe-inspiring transformation in learners' viewpoints towards the respective subject matters. Augmented Reality is able to present real and virtual learning experiences with the addition of multiple media, without replacing the real environment. Due to this unique feature of AR, it attracts the attention of many researchers who implement AR in science learning. This impressive technology offers learners ultimate visualization and provides an astonishing and transparent learning experience by bringing to light the unseen perspectives of the learning content. This paper will attract the attention of researchers in the related field as well as academicians in the related disciplines. It aims to propose related theoretical guidance that could be applied to science motivation to transform learning in an effective way.
Environments for online maritime simulators with cloud computing capabilities
NASA Astrophysics Data System (ADS)
Raicu, Gabriel; Raicu, Alexandra
2016-12-01
This paper presents cloud computing environments, network principles, and methods for graphical development in realistic naval simulation, naval robotics, and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts coupled with the latest achievements in virtual and augmented reality will enhance the overall experience, leading to new developments and innovations. We have to deal with a multiprocessing situation using advanced technologies and distributed applications, a remote ship scenario, and the automation of ship operations.
Virtual and augmented reality in the treatment of phantom limb pain: A literature review.
Dunn, Justin; Yeo, Elizabeth; Moghaddampour, Parisah; Chau, Brian; Humbert, Sarah
2017-01-01
Phantom limb pain (PLP), the perception of discomfort in a limb no longer present, commonly occurs following amputation. A variety of interventions have been employed for PLP, including mirror therapy. Virtual Reality (VR) and augmented reality (AR) mirror therapy treatments have also been utilized and have the potential to provide an even greater immersive experience for the amputee. However, there is not currently a consensus on the efficacy of VR and AR therapy. The aim of this review is to evaluate and summarize the current research on the effect of immersive VR and AR in the treatment of PLP. A comprehensive literature search was conducted utilizing PubMed and Google Scholar in order to collect all available studies concerning the use of VR and/or AR in the treatment of PLP using the search terms "virtual reality," "augmented reality," and "phantom limb pain." Eight studies in total were evaluated, with six of those reporting quantitative data and the other two reporting qualitative findings. All studies located were of low-level evidence. Each noted improved pain with VR and AR treatment for phantom limb pain, through quantitative or qualitative reporting. Additionally, adverse effects were limited only to simulator sickness occurring in one trial for one patient. Despite the positive findings, all of the studies were confined purely to case studies and case report series. No studies of higher evidence have been conducted, thus considerably limiting the strength of the findings. As such, the current use of VR and AR for PLP management, while attractive due to the increasing levels of immersion, customizable environments, and decreasing cost, is yet to be fully proven and continues to need further research with higher quality studies to fully explore its benefits.
The effectiveness of virtual and augmented reality in health sciences and medical anatomy.
Moro, Christian; Štromberga, Zane; Raikos, Athanasios; Stirling, Allan
2017-11-01
Although cadavers constitute the gold standard for teaching anatomy to medical and health science students, there are substantial financial, ethical, and supervisory constraints on their use. In addition, although anatomy remains one of the fundamental areas of medical education, universities have decreased the hours allocated to teaching gross anatomy in favor of applied clinical work. The release of virtual (VR) and augmented reality (AR) devices allows learning to occur through hands-on immersive experiences. The aim of this research was to assess whether learning structural anatomy utilizing VR or AR is as effective as tablet-based (TB) applications, and whether these modes allowed enhanced student learning, engagement and performance. Participants (n = 59) were randomly allocated to one of the three learning modes: VR, AR, or TB and completed a lesson on skull anatomy, after which they completed an anatomical knowledge assessment. Student perceptions of each learning mode and any adverse effects experienced were recorded. No significant differences were found between mean assessment scores in VR, AR, or TB. During the lessons, however, VR participants were more likely to exhibit adverse effects such as headaches (25% in VR, P < 0.05), dizziness (40% in VR, P < 0.001), or blurred vision (35% in VR, P < 0.01). Both VR and AR are as valuable for teaching anatomy as tablet devices, but also promote intrinsic benefits such as increased learner immersion and engagement. These outcomes show great promise for the effective use of virtual and augmented reality as means to supplement lesson content in anatomical education. Anat Sci Educ 10: 549-559. © 2017 American Association of Anatomists.
Evaluating the use of augmented reality to support undergraduate student learning in geomorphology
NASA Astrophysics Data System (ADS)
Ockelford, A.; Bullard, J. E.; Burton, E.; Hackney, C. R.
2016-12-01
Augmented Reality (AR) supports the understanding of complex phenomena by providing unique visual and interactive experiences that combine real and virtual information and help communicate abstract problems to learners. With AR, designers can superimpose virtual graphics over real objects, allowing users to interact with digital content through physical manipulation. One of the most significant pedagogic features of AR is that it provides an essentially student-centred and flexible space in which students can learn. By actively engaging participants using a design-thinking approach, this technology has the potential to provide a more productive and engaging learning environment than real or virtual learning environments alone. AR is increasingly being used in support of undergraduate learning and public engagement activities across engineering, medical and humanities disciplines, but it is not widely used across the geosciences disciplines despite the obvious applicability. This paper presents preliminary results from a multi-institutional project which seeks to evaluate the benefits and challenges of using an augmented reality sandbox to support undergraduate learning in geomorphology. The sandbox enables users to create and visualise topography. As the sand is sculpted, contours are projected onto the miniature landscape. By hovering a hand over the box, users can make it 'rain' over the landscape, and the water 'flows' down into rivers and valleys. At the undergraduate level, the sandbox is an ideal focus for problem-solving exercises, for example exploring how geomorphology controls hydrological processes, how such processes can be altered, and the subsequent impacts of the changes for environmental risk. It is particularly valuable for students who favour a visual or kinesthetic learning style. Results presented in this paper discuss how the sandbox provides a complex interactive environment that encourages communication, collaboration and co-design.
Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer
2005-01-01
Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.
Teaching professionalism through virtual means.
McEvoy, Michelle; Butler, Bryan; MacCarrick, Geraldine
2012-02-01
Virtual patients are used across a variety of clinical disciplines for both teaching and assessment, but are they an appropriate environment in which to develop professional skills? This study aimed to evaluate students' perceived effectiveness of an online interactive virtual patient developed to augment a personal professional development curriculum, and to identify factors that would maximise the associated educational benefits. Student focus group discussions were conducted to explore students' views on the usefulness and acceptability of the virtual patient as an educational tool to teach professionalism, and to identify factors for improvement. A thematic content analysis was used to capture content and synthesise the range of opinions expressed. Overall there was a positive response to the virtual patient. The students recognised the need to teach and assess professionalism throughout their curriculum, and viewed the virtual patient as a potentially engaging and valuable addition to their curriculum. We identified factors for improvement to guide the development of future virtual patients. It is possible to improve approaches to teaching and learning professionalism by exploring students' views on innovative teaching developments designed to augment personal professional development curricula. © Blackwell Publishing Ltd 2012.
Virtual Technologies Trends in Education
ERIC Educational Resources Information Center
Martín-Gutiérrez, Jorge; Mora, Carlos Efrén; Añorbe-Díaz, Beatriz; González-Marrero, Antonio
2017-01-01
Virtual reality captures people's attention. This technology has been applied in many sectors such as medicine, industry, education, video games, or tourism. Perhaps its biggest area of interest has been leisure and entertainment. Regardless the sector, the introduction of virtual or augmented reality had several constraints: it was expensive, it…
Immersive Education, an Annotated Webliography
ERIC Educational Resources Information Center
Pricer, Wayne F.
2011-01-01
In this second installment of a two-part feature on immersive education a webliography will provide resources discussing the use of various types of computer simulations including: (a) augmented reality, (b) virtual reality programs, (c) gaming resources for teaching with technology, (d) virtual reality lab resources, (e) virtual reality standards…
Augmented reality in the surgery of cerebral aneurysms: a technical report.
Cabrilo, Ivan; Bijlenga, Philippe; Schaller, Karl
2014-06-01
Augmented reality is the overlay of computer-generated images on real-world structures. It has previously been used for image guidance during surgical procedures, but it has never been used in the surgery of cerebral aneurysms. To report our experience of cerebral aneurysm surgery aided by augmented reality. Twenty-eight patients with 39 unruptured aneurysms were operated on in a prospective manner with augmented reality. Preoperative 3-dimensional image data sets (angio-magnetic resonance imaging, angio-computed tomography, and 3-dimensional digital subtraction angiography) were used to create virtual segmentations of patients' vessels, aneurysms, aneurysm necks, skulls, and heads. These images were injected intraoperatively into the eyepiece of the operating microscope. An example case of an unruptured posterior communicating artery aneurysm clipping is illustrated in a video. The described operating procedure allowed continuous monitoring of the accuracy of patient registration with neuronavigation data and assisted in the performance of tailored surgical approaches and optimal clipping with minimized exposure. Augmented reality may add to the performance of a minimally invasive approach, although further studies need to be performed to evaluate whether certain groups of aneurysms are more likely to benefit from it. Further technological development is required to improve its user friendliness.
Utilizing virtual and augmented reality for educational and clinical enhancements in neurosurgery.
Pelargos, Panayiotis E; Nagasawa, Daniel T; Lagman, Carlito; Tenn, Stephen; Demos, Joanna V; Lee, Seung J; Bui, Timothy T; Barnette, Natalie E; Bhatt, Nikhilesh S; Ung, Nolan; Bari, Ausaf; Martin, Neil A; Yang, Isaac
2017-01-01
Neurosurgery has undergone a technological revolution over the past several decades, from trephination to image-guided navigation. Advancements in virtual reality (VR) and augmented reality (AR) represent some of the newest modalities being integrated into neurosurgical practice and resident education. In this review, we present a historical perspective of the development of VR and AR technologies, analyze its current uses, and discuss its emerging applications in the field of neurosurgery. Copyright © 2016 Elsevier Ltd. All rights reserved.
Augmented reality for breast imaging.
Rancati, Alberto; Angrigiani, Claudio; Nava, Maurizio B; Catanuto, Giuseppe; Rocco, Nicola; Ventrice, Fernando; Dorr, Julio
2018-06-01
Augmented reality (AR) enables the superimposition of virtual reality reconstructions onto clinical images of a real patient, in real time. This allows visualization of internal structures through overlying tissues, thereby providing a virtual transparency view of surgical anatomy. AR has been applied to neurosurgery, which utilizes a relatively fixed space, frames, and bony references; the application of AR facilitates the relationship between virtual and real data. Augmented breast imaging (ABI) is described. Breast MRI studies for breast implant patients with seroma were performed using a Siemens 3T system with a body coil and a four-channel bilateral phased-array breast coil as the transmitter and receiver, respectively. Gadolinium was injected as a contrast agent (0.1 mmol/kg at 2 mL/s) using a programmable power injector. DICOM-formatted image data from 10 MRI cases of breast implant seroma and 10 MRI cases with T1-2 N0 M0 breast cancer were imported and transformed into augmented reality images. ABI demonstrated stereoscopic depth perception, focal point convergence, 3D cursor use, and joystick fly-through. ABI can improve clinical outcomes, providing an enhanced view of the structures to work on. It should be further studied to determine its utility in clinical practice.
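The injection protocol above (0.1 mmol/kg at 2 mL/s) implies simple per-patient volume and duration figures; a quick sketch, where the 0.5 mmol/mL agent concentration and the 70 kg body weight are illustrative assumptions, not values stated in the abstract:

```python
# Back-of-the-envelope contrast injection numbers for the protocol in the
# abstract (0.1 mmol/kg at 2 mL/s). The agent concentration and patient
# weight below are assumptions for illustration, not study data.

DOSE_MMOL_PER_KG = 0.1   # from the abstract
RATE_ML_PER_S = 2.0      # from the abstract
CONC_MMOL_PER_ML = 0.5   # typical for many gadolinium agents (assumed)

def injection_plan(weight_kg):
    """Return (dose in mmol, volume in mL, injection duration in s)."""
    dose_mmol = DOSE_MMOL_PER_KG * weight_kg
    volume_ml = dose_mmol / CONC_MMOL_PER_ML
    duration_s = volume_ml / RATE_ML_PER_S
    return dose_mmol, volume_ml, duration_s

dose, vol, dur = injection_plan(70)  # hypothetical 70 kg patient
print(f"{dose:.1f} mmol -> {vol:.1f} mL over {dur:.1f} s")
# -> 7.0 mmol -> 14.0 mL over 7.0 s
```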
See-through 3D technology for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Lee, Seungjae; Li, Gang; Jang, Changwon; Hong, Jong-Young
2017-06-01
Augmented reality is recently attracting a lot of attention as one of the most spotlighted next-generation technologies. In order to move toward the realization of ideal augmented reality, we need to integrate 3D virtual information into the real world. This integration should not be noticed by users, blurring the boundary between the virtual and real worlds. Thus, the ultimate device for augmented reality can reconstruct and superimpose 3D virtual information on the real world so that they are not distinguishable, which is referred to as see-through 3D technology. Here, we introduce our previous research on combining see-through displays and 3D technologies using emerging optical combiners: holographic optical elements and index-matched optical elements. Holographic optical elements are volume gratings that have angular and wavelength selectivity. Index-matched optical elements are partially reflective elements using a compensation element for index matching. Using these optical combiners, we could implement see-through 3D displays based on typical methodologies including integral imaging, digital holographic displays, multi-layer displays, and retinal projection. Some of these methods are expected to be optimized and customized for head-mounted or wearable displays. We conclude with a demonstration and analysis of fundamental research for head-mounted see-through 3D displays.
The Evolving Virtual Library: Visions and Case Studies.
ERIC Educational Resources Information Center
Saunders, Laverna M., Ed.
This book addresses many of the practical issues involved in developing the virtual library. Seven presentations from the Eighth Annual Computers in Libraries Conference are included in this book in augmented form. The papers are supplemented by "The Evolving Virtual Library: An Overview" (Laverna M. Saunders and Maurice Mitchell), a…
Human responses to augmented virtual scaffolding models.
Hsiao, Hongwei; Simeonov, Peter; Dotson, Brian; Ammons, Douglas; Kau, Tsui-Ying; Chiou, Sharon
2005-08-15
This study investigated the effect of adding real planks, in virtual scaffolding models of elevation, on human performance in a surround-screen virtual reality (SSVR) system. Twenty-four construction workers and 24 inexperienced controls performed walking tasks on real and virtual planks at three virtual heights (0, 6 m, 12 m) and two scaffolding-platform-width conditions (30, 60 cm). Gait patterns, walking instability measurements and cardiovascular reactivity were assessed. The results showed differences in human responses to real vs. virtual planks in walking patterns, instability score and heart-rate inter-beat intervals; it appeared that adding real planks in the SSVR virtual scaffolding model enhanced the quality of SSVR as a human-environment interface research tool. In addition, there were significant differences in performance between construction workers and the control group. The inexperienced participants were more unstable as compared to construction workers. Both groups increased their stride length with repetitions of the task, indicating a possibly confidence- or habit-related learning effect. The practical implications of this study are in the adoption of augmented virtual models of elevated construction environments for injury prevention research, and the development of programmes for balance-control training to reduce the risk of falls at elevation before workers enter a construction job.
3D interactive augmented reality-enhanced digital learning systems for mobile devices
NASA Astrophysics Data System (ADS)
Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie
2013-03-01
With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology. Enhancement of UX can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3-D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components: the markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX of digital learning can be greatly improved with the adoption of the proposed IARL systems.
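The abstract names a velocity-based object tracking (VOT) algorithm without describing it; purely as a generic illustration of the idea, a constant-velocity predictor extrapolates a marker's next position from its last two observations (the function and values below are assumptions, not the paper's algorithm):

```python
# Generic constant-velocity prediction for 2D marker tracking; a common
# fallback when detection fails in a frame. Illustrative only, not the
# VOT algorithm from the paper.

def predict_next(p_prev, p_curr, dt_prev=1.0, dt_next=1.0):
    """Extrapolate position assuming constant velocity (dt in frames)."""
    vx = (p_curr[0] - p_prev[0]) / dt_prev
    vy = (p_curr[1] - p_prev[1]) / dt_prev
    return (p_curr[0] + vx * dt_next, p_curr[1] + vy * dt_next)

# Marker moved from (100, 50) to (104, 52) in one frame; predict one frame ahead.
print(predict_next((100, 50), (104, 52)))  # -> (108.0, 54.0)
```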
NASA Astrophysics Data System (ADS)
Cheok, Adrian David
This chapter details the Human Pacman system to illuminate entertainment computing, which ventures to embed the natural physical world seamlessly within a fantasy virtual playground by capitalizing on the infrastructure provided by mobile computing, wireless LAN, and ubiquitous computing. With Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile gaming that emphasizes collaboration and competition between players in a wide outdoor physical area that allows natural wide-area human physical movements. Pacmen and Ghosts are now real human players in the real world experiencing mixed computer-graphics fantasy-reality provided by the wearable computers they carry. Virtual cookies and actual tangible physical objects are incorporated into the game play to provide novel experiences of seamless transitions between the real and virtual worlds. This is an example of a new form of gaming that is anchored in physicality, mobility, social interaction, and ubiquitous computing.
Chen, Xiaojun; Xu, Lu; Wang, Yiping; Wang, Huixiang; Wang, Fang; Zeng, Xiangsen; Wang, Qiugen; Egger, Jan
2015-06-01
The surgical navigation system has experienced tremendous development over the past decades for minimizing the risks and improving the precision of surgery. Nowadays, Augmented Reality (AR)-based surgical navigation is a promising technology for clinical applications. In an AR system, virtual and actual reality are mixed, offering real-time, high-quality visualization of an extensive variety of information to the users (Moussa et al., 2012) [1]. For example, virtual anatomical structures such as soft tissues, blood vessels, and nerves can be integrated with the real-world scenario in real time. In this study, an AR-based surgical navigation system (AR-SNS) is developed using an optical see-through HMD (head-mounted display), aiming at improving the safety and reliability of surgery. With the use of this system, including the calibration of instruments, registration, and the calibration of the HMD, the 3D virtual critical anatomical structures in the head-mounted display are aligned with the actual structures of the patient in the real-world scenario during the intra-operative motion tracking process. The accuracy verification experiment demonstrated that the mean distance and angular errors were 0.809 ± 0.05 mm and 1.038° ± 0.05°, respectively, which is sufficient to meet clinical requirements. Copyright © 2015 Elsevier Inc. All rights reserved.
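The reported accuracy verification (a mean distance error over registered landmarks) can be illustrated with a minimal sketch, assuming paired 3D fiducial points measured in the virtual and tracked coordinate frames. The Kabsch least-squares rigid fit used here is a standard choice for such registration, offered as an assumption rather than the exact AR-SNS algorithm:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def mean_registration_error(P, Q):
    """Mean residual distance after rigidly aligning P onto Q."""
    R, t = kabsch(P, Q)
    return float(np.linalg.norm(P @ R.T + t - Q, axis=1).mean())
```

With perfectly corresponding points the residual is near zero; real tracked fiducials would leave a residual on the order of the 0.809 mm reported above.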
Augmented Reality for Close Quarters Combat
None
2018-01-16
Sandia National Laboratories has developed a state-of-the-art augmented reality training system for close-quarters combat (CQB). The system uses wearable augmented reality hardware to place the user in a real environment while engaging enemy combatants in virtual space (Boston Dynamics DI-Guy). The Umbra modeling and simulation environment is used to integrate and control the AR system.
Augmenting Your Own Reality: Student Authoring of Science-Based Augmented Reality Games
ERIC Educational Resources Information Center
Klopfer, Eric; Sheldon, Josh
2010-01-01
Augmented Reality (AR) simulations superimpose a virtual overlay of data and interactions onto a real-world context. The simulation engine at the heart of this technology is built to afford elements of game play that support explorations and learning in students' natural context--their own community and surroundings. In one of the more recent…
Learning Anatomy via Mobile Augmented Reality: Effects on Achievement and Cognitive Load
ERIC Educational Resources Information Center
Küçük, Sevda; Kapakin, Samet; Göktas, Yüksel
2016-01-01
Augmented reality (AR), a new generation of technology, has attracted the attention of educators in recent years. In this study, a MagicBook was developed for a neuroanatomy topic by using mobile augmented reality (mAR) technology. This technology integrates virtual learning objects into the real world and allows users to interact with the…
Plessas, Anastasios
2017-10-01
In preclinical dental education, the acquisition of clinical and technical skills, and the transfer of these skills to the clinic, are paramount. Phantom heads provide an efficient way to teach preclinical students dental procedures safely while considerably increasing their dexterity skills. Modern computerized phantom head training units incorporate features of virtual reality technology and the ability to offer concurrent augmented feedback. The aims of this review were to examine and evaluate the dental literature for evidence supporting their use and to discuss the role of augmented feedback versus the facilitator's instruction. Adjunctive training in these units seems to enhance students' learning and skill acquisition and to reduce the required faculty supervision time. However, the virtual augmented feedback cannot be used as the sole method of feedback, and the facilitator's input is still critical. Well-powered longitudinal randomized trials exploring the impact of these units on students' clinical performance and issues of cost-effectiveness are warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez Anez, Francisco
This paper presents two development projects (STARMATE and VIRMAN) focused on supporting training in maintenance. Both projects aim at specifying, designing, developing, and demonstrating prototypes allowing computer-guided maintenance of complex mechanical elements using Augmented and Virtual Reality techniques. VIRMAN is a Spanish development project. Its objective is to create a computer tool for the elaboration of maintenance training courses and for training delivery based on 3D virtual reality models of complex components. The training delivery includes 3D recorded displays of maintenance procedures with all complementary information needed to understand the intervention. Users are asked to perform the maintenance intervention while trying to follow the procedure, and can be evaluated on the level of knowledge achieved. Instructors can check the evaluation records left during the training sessions. VIRMAN is simple software supported by a regular computer and can be used in an Internet framework. STARMATE is a step forward in the area of virtual reality. STARMATE is a European Commission project in the 'Information Societies Technologies' framework. A consortium of five companies and one research institute shares their expertise in this new technology. STARMATE provides two main functionalities: (1) user assistance for achieving assembly/disassembly and following maintenance procedures, and (2) workforce training. The project relies on Augmented Reality techniques, a growing area of Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene, generated by a computer, augmenting the reality with additional information. The user interface consists of see-through goggles, headphones, a microphone, and an optical tracking system. All these devices are integrated in a helmet connected to two regular computers.
The user's hands are free for performing the maintenance intervention, and he can navigate the virtual world thanks to a voice recognition system and a virtual pointing device. The maintenance work is guided with audio instructions, and 2D and 3D information is displayed directly in the user's goggles. A position-tracking system allows 3D virtual models to be displayed in the positions of their real counterparts, independently of the user's location. The user can create his own virtual environment, placing the required information wherever he wants. The STARMATE system is applicable to a large variety of real work situations. (author)
Ranky, Richard G; Sivak, Mark L; Lewis, Jeffrey A; Gade, Venkata K; Deutsch, Judith E; Mavroidis, Constantinos
2014-06-05
Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges in implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may serve to address the requirements for efficacious cycling, specifically the recruitment of both extremities and exercising at a high intensity. In this paper, a mechatronic rehabilitation bicycling system with an interactive virtual environment, called the Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing the user with visual, auditory, and haptic feedback. This modular and adaptable system attaches to commercially available stationary bicycle systems and interfaces with a personal computer for simulation and data acquisition. The complete bicycle system includes: a) handlebars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off-the-shelf electronics to monitor heart rate; and d) customized software for rehabilitation. Bench testing of the handle and pedal systems is presented for calibration of the sensors detecting force and angle. The modular mechatronic kit for exercise bicycles was evaluated in bench tests and human tests. Bench tests performed on the sensorized handlebars and the instrumented pedals validated the measurement accuracy of these components. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the riders' lower extremities.
The VRACK system, a virtual reality mechatronic bicycle rehabilitation modular system, was designed to convert most stationary bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system successfully demonstrated that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders.
Creating a Vision Channel for Observing Deep-Seated Anatomy in Medical Augmented Reality
NASA Astrophysics Data System (ADS)
Wimmer, Felix; Bichlmeier, Christoph; Heining, Sandro M.; Navab, Nassir
The intent of medical Augmented Reality (AR) is to augment the surgeon's real view of the patient with the patient's interior anatomy, resulting from a suitable visualization of medical imaging data. This paper presents a fast, user-defined clipping technique for medical AR that allows cutting away any parts of the virtual anatomy, and of images of the real part of the AR scene, that hinder the surgeon's view onto the deep-seated region of interest. Modeled on cut-away techniques from scientific illustration and computer graphics, the method creates a fixed vision channel to the inside of the patient. It enables a clear view of the virtual anatomy in focus and moreover improves the perception of spatial depth.
Vision-based augmented reality system
NASA Astrophysics Data System (ADS)
Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan
2003-04-01
The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user; namely, users can interact with real-world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device that displays virtual objects as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, accurate calibration and registration are essential. A vision-based method is used to estimate the camera's external (extrinsic) parameters by tracking 4 known points with different colors. It achieves sufficient accuracy for non-critical applications such as gaming and annotation.
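Registration from 4 tracked reference points can be sketched with a direct linear transform (DLT) homography fit, the standard formulation for planar targets. This is an illustration of the general technique, not necessarily the paper's exact method:

```python
import numpy as np

def homography_dlt(src, dst):
    """Fit a 3x3 homography H (dst ~ H @ src) from 4+ 2D point pairs via DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear constraints on H's entries
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = flattened H
    return H / H[2, 2]

def apply_h(H, pts):
    """Map 2D points through H with perspective division."""
    ph = np.c_[pts, np.ones(len(pts))] @ H.T
    return ph[:, :2] / ph[:, 2:]
```

Given the camera intrinsics, such a homography can then be decomposed into the extrinsic rotation and translation of the planar target.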
archAR: an archaeological augmented reality experience
NASA Astrophysics Data System (ADS)
Wiley, Bridgette; Schulze, Jürgen P.
2015-03-01
We present an application for Android phones or tablets called "archAR" that uses augmented reality as an alternative, portable way of viewing archaeological information from UCSD's Levantine Archaeology Laboratory. archAR provides a unique experience of flying through an archaeological dig site in the Levantine area and exploring the artifacts uncovered there. Using a Google Nexus tablet and Qualcomm's Vuforia API, we use an image target as a map and overlay a three-dimensional model of the dig site onto it, augmenting reality such that we are able to interact with the plotted artifacts. The user can physically move the Android device around the image target and see the dig site model from any perspective. The user can also move the device closer to the model in order to "zoom" into the view of a particular section of the model and its associated artifacts. This is especially useful, as the dig site model and the collection of artifacts are very detailed. The artifacts are plotted as points, colored by type. The user can touch the virtual points to trigger a popup information window that contains details of the artifact, such as photographs, material descriptions, and more.
Social Virtual Worlds for Technology-Enhanced Learning on an Augmented Learning Platform
ERIC Educational Resources Information Center
Jin, Li; Wen, Zhigang; Gough, Norman
2010-01-01
Virtual worlds have been linked with e-learning applications to create virtual learning environments (VLEs) for the past decade. However, while they can support many educational activities that extend both traditional on-campus teaching and distance learning, they are used primarily for learning content generated and managed by instructors. With…
The Potential for Scientific Collaboration in Virtual Ecosystems
ERIC Educational Resources Information Center
Magerko, Brian
2010-01-01
This article explores the potential benefits of creating "virtual ecosystems" from real-world data. These ecosystems are intended to be realistic virtual representations of environments that may be costly or difficult to access in person. They can be constructed as 3D worlds rendered from stereo video data, augmented with scientific data, and then…
Real-Time View Correction for Mobile Devices.
Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc
2017-11-01
We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.
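The core geometric step in such view correction, warping a pixel from the source RGB-D view into a novel camera pose, can be sketched as follows. The pinhole model and the specific intrinsics are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def reproject(uv, depth, K, R, t):
    """Unproject pixel (u, v) with metric depth from a pinhole camera with
    intrinsics K, then project into a target camera offset by rigid (R, t)."""
    u, v = uv
    p = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth  # 3D point, source frame
    q = R @ p + t                                         # 3D point, target frame
    w = K @ q                                             # homogeneous image coords
    return w[:2] / w[2]
```

Pixels that no valid source point maps to (disocclusions) are exactly the regions the paper's temporally consistent inpainting has to fill.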
Lam, Chee Kiang; Sundaraj, Kenneth; Sulaiman, Mohd Nazri; Qamarruddin, Fazilawati A
2016-06-14
Computer-based surgical training is believed to be capable of providing a controlled virtual environment for medical professionals to conduct standardized training or new experimental procedures on virtual human body parts, which are generated and visualised three-dimensionally on a digital display unit. The main objective of this study was to conduct virtual phacoemulsification cataract surgery to compare the performance of users with different proficiency levels on a virtual reality platform equipped with a visual guidance system and a set of performance parameters. Ten experienced ophthalmologists and six medical residents were invited to perform the virtual surgery of the four main phacoemulsification cataract surgery procedures: 1) corneal incision (CI), 2) capsulorhexis (C), 3) phacoemulsification (P), and 4) intraocular lens implantation (IOL). Each participant was required to perform the complete phacoemulsification cataract surgery using the simulator for three consecutive trials (a standardized 30-min session). The performance of the participants during the three trials was supported using a visual guidance system and evaluated by referring to a set of parameters implemented in the performance evaluation system of the simulator. Subjects with greater experience obtained significantly higher scores in all four main procedures: CI1 (ρ = 0.038), CI2 (ρ = 0.041), C1 (ρ = 0.032), P2 (ρ = 0.035) and IOL1 (ρ = 0.011). It was also found that experience improved the completion times in all modules: CI4 (ρ = 0.026), C4 (ρ = 0.018), P6 (ρ = 0.028) and IOL4 (ρ = 0.029). Positive correlation was observed between experience and anti-tremor, C2 (ρ = 0.026), P3 (ρ = 0.015), P4 (ρ = 0.042) and IOL2 (ρ = 0.048), and similarly with anti-rupture, CI3 (ρ = 0.013), C3 (ρ = 0.027), P5 (ρ = 0.021) and IOL3 (ρ = 0.041). No significant difference was observed between the groups with regard to P1 (ρ = 0.077).
Statistical analysis of the results obtained from repetitive trials between the two groups of users reveals that augmented virtual reality (VR) simulators have the potential and capability to be used as a feasible proficiency assessment tool for the complete four main procedures of phacoemulsification cataract surgery (ρ < 0.05), indicating the construct validity of the modules simulated with augmented visual guidance and assessed through performance parameters.
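Construct-validity analyses like the one above correlate experience with ordinal performance scores, for which a rank correlation such as Spearman's is the usual tool. A dependency-free sketch (illustrative, not the study's analysis code):

```python
def _ranks(xs):
    """Rank values 1..n, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # average 1-based rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A rho near +1 would correspond to the reported pattern of more experienced users scoring higher.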
What is going on in augmented reality simulation in laparoscopic surgery?
Botden, Sanne M B I; Jakimowicz, Jack J
2009-08-01
To prevent unnecessary errors and adverse results of laparoscopic surgery, proper training is of paramount importance. A safe way to train surgeons in laparoscopic skills is simulation. For this purpose traditional box trainers are often used; however, they lack objective assessment of performance. Virtual reality laparoscopic simulators assess performance, but lack realistic haptic feedback. Augmented reality (AR) combines a virtual reality (VR) setting with real physical materials, instruments, and feedback. This article presents the current developments in augmented reality laparoscopic simulation. PubMed searches were performed to identify articles regarding surgical simulation and augmented reality. Identified companies manufacturing an AR laparoscopic simulator received the same questionnaire referring to the features of the simulator. Seven simulators that fitted the definition of augmented reality were identified during the literature search. Five of the approached manufacturers returned a completed questionnaire, and one of the described simulators appeared to be VR and was therefore not applicable to this review. Several augmented reality simulators have been developed over the past few years and they are improving rapidly. We recommend the development of AR laparoscopic simulators for component tasks of procedural training. AR simulators should be implemented in current laparoscopic training curricula, in particular for laparoscopic suturing training.
Rothbaum, Barbara Olasov; Price, Matthew; Jovanovic, Tanja; Norrholm, Seth D; Gerardi, Maryrose; Dunlop, Boadie; Davis, Michael; Bradley, Bekh; Duncan, Erica J; Rizzo, Albert; Ressler, Kerry J
2014-06-01
The authors examined the effectiveness of virtual reality exposure augmented with D-cycloserine or alprazolam, compared with placebo, in reducing posttraumatic stress disorder (PTSD) due to military trauma. After an introductory session, five sessions of virtual reality exposure were augmented with D-cycloserine (50 mg) or alprazolam (0.25 mg) in a double-blind, placebo-controlled randomized clinical trial for 156 Iraq and Afghanistan war veterans with PTSD. PTSD symptoms significantly improved from pre- to posttreatment across all conditions and were maintained at 3, 6, and 12 months. There were no overall differences in symptoms between D-cycloserine and placebo at any time. Alprazolam and placebo differed significantly on the Clinician-Administered PTSD Scale score at posttreatment and PTSD diagnosis at 3 months posttreatment; the alprazolam group showed a higher rate of PTSD (82.8%) than the placebo group (47.8%). Between-session extinction learning was a treatment-specific enhancer of outcome for the D-cycloserine group only. At posttreatment, the D-cycloserine group had the lowest cortisol reactivity and smallest startle response during virtual reality scenes. A six-session virtual reality treatment was associated with reduction in PTSD diagnoses and symptoms in Iraq and Afghanistan veterans, although there was no control condition for the virtual reality exposure. There was no advantage of D-cycloserine for PTSD symptoms in primary analyses. In secondary analyses, alprazolam impaired recovery and D-cycloserine enhanced virtual reality outcome in patients who demonstrated within-session learning. D-cycloserine augmentation reduced cortisol and startle reactivity more than did alprazolam or placebo, findings that are consistent with those in the animal literature.
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, used for presenting parallax content from different angles with a lenticular lens array, is proposed. Compared with previous AR implementations based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can obtain abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved stereo-camera-based method can realize AR glass-free 3D display, and both the virtual objects and the real scene exhibit realistic and pronounced stereo performance.
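Multi-view lenticular rendering ultimately interleaves the viewpoint images on the panel. A much-simplified column-interleaving sketch follows; real lenticular mappings work at the subpixel level and depend on the lens pitch and slant, which are not given in the abstract:

```python
import numpy as np

def interleave_views(views):
    """Cycle panel columns across N view images (views: array of shape (N, H, W)).
    A toy stand-in for subpixel lenticular interleaving."""
    n, h, w = views.shape
    panel = np.empty((h, w), dtype=views.dtype)
    for col in range(w):
        panel[:, col] = views[col % n, :, col]   # column col taken from view (col mod N)
    return panel
```

With 32 rendered viewpoints, the lens array then steers each interleaved column toward a different viewing angle.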
Kamel Boulos, Maged N; Lu, Zhihan; Guerrero, Paul; Jennett, Charlene; Steed, Anthony
2017-02-20
The latest generation of virtual and mixed reality hardware has rekindled interest in virtual reality GIS (VRGIS) and augmented reality GIS (ARGIS) applications in health, and opened up new and exciting opportunities and possibilities for using these technologies in the personal and public health arenas. From smart urban planning and emergency training to Pokémon Go, this article offers a snapshot of some of the most remarkable VRGIS and ARGIS solutions for tackling public and environmental health problems, and bringing about safer and healthier living options to individuals and communities. The article also covers the main technical foundations and issues underpinning these solutions.
Application of Virtual, Augmented, and Mixed Reality to Urology.
Hamacher, Alaric; Kim, Su Jin; Cho, Sung Tae; Pardeshi, Sunil; Lee, Seung Hyun; Eun, Sung-Jong; Whangbo, Taeg Keun
2016-09-01
Recent developments in virtual, augmented, and mixed reality have introduced a considerable number of new devices into the consumer market. This momentum is also affecting the medical and health care sector. Although many of the theoretical and practical foundations of virtual reality (VR) were already researched and experienced in the 1980s, the vastly improved features of displays, sensors, interactivity, and computing power currently available in devices offer a new field of applications to the medical sector and also to urology in particular. The purpose of this review article is to review the extent to which VR technology has already influenced certain aspects of medicine, the applications that are currently in use in urology, and the future development trends that could be expected.
Magic cards: a new augmented-reality approach.
Demuynck, Olivier; Menendez, José Manuel
2013-01-01
Augmented reality (AR) commonly uses markers for detection and tracking. Such multimedia applications associate each marker with a virtual 3D model stored in the memory of the camera-equipped device running the application. Users' interactions are limited, because extending the application requires knowing how to design and program 3D objects, which generally prevents users from developing their own entertainment AR applications. The Magic Cards application solves this problem by offering an easy way to create and manage an unlimited number of virtual objects encoded on special markers.
The Next Wave: Humans, Computers, and Redefining Reality
NASA Technical Reports Server (NTRS)
Little, William
2018-01-01
The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to "exploration into the growing computer fields of Extended Reality and the Natural User Interface"; it is "a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined, and examples of work being done in these fields in the AVR Lab are given. Current and future work in Computer Vision, Speech Recognition, and Artificial Intelligence is also outlined.
Pose tracking for augmented reality applications in outdoor archaeological sites
NASA Astrophysics Data System (ADS)
Younes, Georges; Asmar, Daniel; Elhajj, Imad; Al-Harithy, Howayda
2017-01-01
In recent years, agencies around the world have invested huge amounts of effort in digitizing many aspects of the world's cultural heritage. Of particular importance is the digitization of outdoor archaeological sites. In the spirit of valorizing this digital information, many groups have developed virtual or augmented reality (AR) computer applications themed around a particular archaeological object. Here, the problem of pose tracking in outdoor AR applications is addressed. Different positional systems are analyzed, resulting in the selection of a monocular camera-based user tracker. The limitations that challenge this technique, from map generation, scale, and anchoring to lighting conditions, are analyzed and systematically addressed. Finally, as a case study, our pose tracking system is implemented within an AR experience in the Byblos Roman theater in Lebanon.
Mixed reality ventriculostomy simulation: experience in neurosurgical residency.
Hooten, Kristopher G; Lister, J Richard; Lombard, Gwen; Lizdas, David E; Lampotang, Samsun; Rajon, Didier A; Bova, Frank; Murad, Gregory J A
2014-12-01
Medicine and surgery are turning toward simulation to improve on limited patient interaction during residency training. Many simulators today use virtual reality with augmented haptic feedback and little to no physical elements. In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered in the procedure, with superimposed 3-D virtual elements for the neuroanatomical structures. The objective was to introduce the ventriculostomy simulator and validate it as a necessary training tool in neurosurgical residency. We tested the simulator with more than 260 residents. An algorithm combining time and accuracy was used to grade performance. Voluntary post-performance surveys were used to evaluate the experience. Results demonstrate that more experienced residents had statistically significantly better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. This mixed reality simulator provides a real-life experience and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard whereby incoming residents must prove efficiency and skill on the simulator before their first interaction with a patient.
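The grading algorithm is described only as "combining time and accuracy". A hypothetical weighted composite of the two, with made-up weights and time cap, might look like:

```python
def composite_score(time_s, accuracy, t_max=300.0, w_acc=0.7):
    """Hypothetical grade in [0, 1]: higher accuracy and faster completion both
    raise the score. t_max and w_acc are illustrative parameters, not the
    simulator's actual values."""
    time_term = max(0.0, 1.0 - time_s / t_max)   # 1.0 for instant, 0.0 at/after t_max
    return w_acc * accuracy + (1.0 - w_acc) * time_term
```

Any monotone combination of the two factors would reproduce the reported ordering, with experienced residents (more accurate, faster) scoring higher.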
NASA Astrophysics Data System (ADS)
Gonizzi Barsanti, S.; Malatesta, S. G.; Lella, F.; Fanini, B.; Sala, F.; Dodero, E.; Petacco, L.
2018-05-01
Nowadays, one of the best ways to disseminate culture is the creation of virtual and augmented reality scenarios that supply museum visitors with a powerful, interactive tool for learning sometimes difficult concepts in an easy, entertaining way. 3D models derived from reality-based techniques are nowadays used to preserve, document, and restore historical artefacts. This digital content is also a powerful instrument for interactively communicating the artefacts' significance to non-specialists, making it easier to understand concepts that are sometimes complicated or unclear. Virtual and Augmented Reality are certainly valid tools for interacting with 3D models and a fundamental help in making culture more accessible to the wider public. These technologies can help museum curators adapt the cultural proposal and the information about the artefacts to different categories of visitors. They allow visitors to travel through space and time, and they have a great educational function, permitting information and concepts that could prove complicated to be explained in an easy and attractive way. The aim of this paper is to create a virtual scenario and an augmented reality app that recreate specific spaces in the Capitoline Museum in Rome as they were during Winckelmann's time, placing specific statues in their original 18th-century positions.
Virtual imaging in sports broadcasting: an overview
NASA Astrophysics Data System (ADS)
Tan, Yi
2003-04-01
Virtual imaging technology is being used to augment television broadcasts: virtual objects are seamlessly inserted into the video stream to appear as real entities to TV audiences. Virtual advertisements, the main application of this technology, are providing opportunities to improve the commercial value of television programming while enhancing the content and entertainment aspects of these programs. State-of-the-art technologies, such as image recognition, motion tracking, and chroma keying, are central to a virtual imaging system. This paper reviews the general framework, the key techniques, and the sports broadcasting applications of virtual imaging technology.
Nomura, Tsutomu; Mamada, Yasuhiro; Nakamura, Yoshiharu; Matsutani, Takeshi; Hagiwara, Nobutoshi; Fujita, Isturo; Mizuguchi, Yoshiaki; Fujikura, Terumichi; Miyashita, Masao; Uchida, Eiji
2015-11-01
Definitive assessment of laparoscopic skill improvement after virtual reality simulator training is best obtained during an actual operation. However, this is impossible with medical students. Therefore, we developed an alternative assessment technique using an augmented reality simulator. Nineteen medical students completed a 6-week training program using a virtual reality simulator (LapSim). The pretest and post-test were performed using an object-positioning module and cholecystectomy on an augmented reality simulator (ProMIS). The mean performance measures between pre- and post-training on the LapSim were compared with a paired t-test. In the object-positioning module, the execution time of the task (P < 0.001), left and right instrument path length (P = 0.001), and left and right instrument economy of movement (P < 0.001) were significantly better after than before the LapSim training. With respect to improvement in laparoscopic cholecystectomy using a gallbladder model, the execution time to identify, clip, and cut the cystic duct and cystic artery, as well as the execution time to dissect the gallbladder away from the liver bed, were both significantly shorter after than before the LapSim training (P = 0.01). Our training curriculum using a virtual reality simulator improved the operative skills of medical students, as objectively evaluated by assessment using an augmented reality simulator instead of an actual operation. We hope that these findings help to establish an effective training program for medical students. © 2015 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and Wiley Publishing Asia Pty Ltd.
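The pre/post comparison above uses a paired t-test on repeated measures from the same trainees. A minimal sketch of the paired t statistic (the study presumably used standard statistical software):

```python
import math

def paired_t(pre, post):
    """Paired t statistic: mean of per-subject differences over its standard error."""
    d = [a - b for a, b in zip(pre, post)]       # per-subject pre - post differences
    n = len(d)
    m = sum(d) / n
    s = math.sqrt(sum((x - m) ** 2 for x in d) / (n - 1))  # sample SD of differences
    return m / (s / math.sqrt(n))
```

A large positive t (e.g., for task times that shrink after training) is then compared against the t distribution with n - 1 degrees of freedom to obtain the reported P-values.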
Sun, Guo-Chen; Wang, Fei; Chen, Xiao-Lei; Yu, Xin-Guang; Ma, Xiao-Dong; Zhou, Ding-Biao; Zhu, Ru-Yuan; Xu, Bai-Nan
2016-12-01
The utility of virtual and augmented reality based on functional neuronavigation and intraoperative magnetic resonance imaging (MRI) for glioma surgery has not been previously investigated. The study population consisted of 79 glioma patients and 55 control subjects. Preoperatively, the lesion and related eloquent structures were visualized by diffusion tensor tractography and blood oxygen level-dependent functional MRI. Intraoperatively, microscope-based functional neuronavigation was used to integrate the reconstructed eloquent structure and the real head and brain, which enabled safe resection of the lesion. Intraoperative MRI was used to verify brain shift during the surgical process and provided quality control during surgery. The control group underwent surgery guided by anatomic neuronavigation. Virtual and augmented reality protocols based on functional neuronavigation and intraoperative MRI provided useful information for performing tailored and optimized surgery. Complete resection was achieved in 55 of 79 (69.6%) glioma patients and 20 of 55 (36.4%) control subjects, with average resection rates of 95.2% ± 8.5% and 84.9% ± 15.7%, respectively. Both the complete resection rate and average extent of resection differed significantly between the 2 groups (P < 0.01). Postoperatively, the rate of preservation of neural functions (motor, visual field, and language) was lower in controls than in glioma patients at 2 weeks and 3 months (P < 0.01). Combining virtual and augmented reality based on functional neuronavigation and intraoperative MRI can facilitate resection of gliomas involving eloquent areas. Copyright © 2016 Elsevier Inc. All rights reserved.
A second life for eHealth: prospects for the use of 3-D virtual worlds in clinical psychology.
Gorini, Alessandra; Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe
2008-08-05
The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology and telehealth in general have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed.
NASA Astrophysics Data System (ADS)
Dörner, Ralf; Lok, Benjamin; Broll, Wolfgang
Backed by a large consumer market, entertainment and education applications have spurred developments in the fields of real-time rendering and interactive computer graphics. Relying on Computer Graphics methodologies, Virtual Reality and Augmented Reality benefited indirectly from this; however, there is no large-scale demand for VR and AR in gaming and learning. What are the shortcomings of current VR/AR technology that prevent a widespread use in these application areas? What advances in VR/AR will be necessary? And what might future “VR-enhanced” gaming and learning look like? What role can and will Virtual Humans play? Concerning these questions, this article analyzes the current situation and provides an outlook on future developments. The focus is on social gaming and learning.
Percutaneous spinal fixation simulation with virtual reality and haptics.
Luciano, Cristian J; Banerjee, P Pat; Sorenson, Jeffery M; Foley, Kevin T; Ansari, Sameer A; Rizzi, Silvio; Germanwala, Anand V; Kranzler, Leonard; Chittiboina, Prashant; Roitberg, Ben Z
2013-01-01
In this study, we evaluated the use of a part-task simulator with 3-dimensional and haptic feedback as a training tool for percutaneous spinal needle placement. The goal was to evaluate learning effectiveness, in terms of entry-point/target-point accuracy of percutaneous spinal needle placement, on a high-performance augmented-reality and haptic technology workstation with the ability to control the duration of computer-simulated fluoroscopic exposure, thereby simulating an actual situation. Sixty-three fellows and residents performed needle placement on the simulator. A virtual needle was percutaneously inserted into a virtual patient's thoracic spine derived from an actual patient computed tomography data set. Ten of 126 needle placement attempts by 63 participants ended in failure, for a failure rate of 7.93%. Across all 126 needle insertions, the average error (15.69 vs 13.91), average fluoroscopy exposure (4.6 vs 3.92), and average individual performance score (32.39 vs 30.71) improved from the first to the second attempt. A 2-sample t test of performance accuracy rejected the null hypothesis of no improvement from the first to the second attempt in the test session (P = .04). The experiments thus showed evidence (P = .04) of improvement in performance accuracy from the first to the second percutaneous needle placement attempt. This result, combined with previous learning retention and/or face validity results of using the simulator for open thoracic pedicle screw placement and ventriculostomy catheter placement, supports the efficacy of augmented reality and haptics simulation as a learning tool.
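The 2-sample t test used in the study above can be sketched in a few lines of generic code. This is a standard Welch t statistic with made-up illustrative scores, not the authors' data or analysis script.

```python
import math

def welch_t(a, b):
    """Two-sample t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical accuracy scores for first vs second attempts (invented numbers).
first = [1.0, 2.0, 3.0]
second = [2.0, 3.0, 4.0]
t, df = welch_t(first, second)
```

The resulting t and df would then be compared against the t distribution to obtain a P value such as the .04 reported above.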
White Paper for Virtual Control Room
NASA Technical Reports Server (NTRS)
Little, William; Tully-Hanson, Benjamin
2015-01-01
The Virtual Control Room (VCR) Proof of Concept (PoC) project is the result of an award given by the Fourth Annual NASA T&I Labs Challenge Project Call. This paper will outline the work done over the award period to build and enhance the capabilities of the Augmented/Virtual Reality (AVR) Lab at NASA's Kennedy Space Center (KSC) to create the VCR.
Fundamental arthroscopic skill differentiation with virtual reality simulation.
Rose, Kelsey; Pedowitz, Robert
2015-02-01
The purpose of this study was to investigate the use and validity of virtual reality modules as part of the educational approach to mastering arthroscopy in a safe environment by assessing the ability to distinguish between experience levels. Additionally, the study aimed to evaluate whether experts have greater ambidexterity than novices. Three virtual reality modules (Swemac/Augmented Reality Systems, Linkoping, Sweden) were created to test fundamental arthroscopic skills. Thirty participants performed each exercise: 10 experts consisting of faculty, 10 intermediate participants consisting of orthopaedic residents, and 10 novices consisting of medical students. Steady and Telescope was designed to train centering and image stability. Steady and Probe was designed to train basic triangulation. Track a Moving Target was designed to train coordinated motions of the arthroscope and probe. Metrics reflecting speed, accuracy, and efficiency of motion were used to measure construct validity. Steady and Probe and Track a Moving Target both exhibited construct validity, with better performance by experts and intermediate participants than by novices (P < .05), whereas Steady and Telescope did not show validity. There was an overall trend toward better ambidexterity as a function of greater surgical experience, with experts consistently more proficient than novices throughout all 3 modules. This study represents a new way to assess basic arthroscopy skills using virtual reality modules developed through task deconstruction. Participants with the most arthroscopic experience performed better and were more consistent than novices on all 3 virtual reality modules. Greater arthroscopic experience correlates with more symmetry of ambidextrous performance. However, further adjustment of the modules may better simulate fundamental arthroscopic skills and discriminate between experience levels. Arthroscopy training is a critical element of orthopaedic surgery resident training.
Developing techniques to safely and effectively train these skills is critical for patient safety and resident education. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Fisher, J Brian; Porter, Susan M
2002-01-01
This paper describes an application of a display approach which uses chromakey techniques to composite real and computer-generated images allowing a user to see his hands and medical instruments collocated with the display of virtual objects during a medical training simulation. Haptic feedback is provided through the use of a PHANTOM force feedback device in addition to tactile augmentation, which allows the user to touch virtual objects by introducing corresponding real objects in the workspace. A simplified catheter introducer insertion simulation was developed to demonstrate the capabilities of this approach.
Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
NASA Astrophysics Data System (ADS)
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
2010-01-01
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigations. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated in real (physical) urban worlds. The augmented environment presented herein requires a video see-through head-mounted display (HMD) for visualization, whereas the user's movement in the real world is tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some software and hardware issues, which are discussed in the paper.
2014-01-01
Background Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges with implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may serve to address the requirements for efficacious cycling, specifically recruitment of both extremities and exercising at a high intensity. Methods In this paper a mechatronic rehabilitation bicycling system with an interactive virtual environment, called the Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing the user with visual, auditory and haptic feedback. This modular and adaptable system attaches to commercially-available stationary bicycle systems and interfaces with a personal computer for simulation and data acquisition processes. The complete bicycle system includes: a) handlebars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off-the-shelf electronics to monitor heart rate; and d) customized software for rehabilitation. Bench testing for the handle and pedal systems is presented for calibration of the sensors detecting force and angle. Results The modular mechatronic kit for exercise bicycles was evaluated in bench and human tests. Bench tests performed on the sensorized handlebars and the instrumented pedals validated the measurement accuracy of these components. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the rider’s lower extremities.
Conclusions The VRACK system, a virtual reality mechatronic bicycle rehabilitation modular system, was designed to convert most bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system was successful in demonstrating that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders. PMID:24902780
Lin, Wei-Shao; Harris, Bryan T; Phasuk, Kamolphob; Llop, Daniel R; Morton, Dean
2018-02-01
This clinical report describes a digital workflow using the virtual smile design approach augmented with a static 3-dimensional (3D) virtual patient with photorealistic appearance to restore maxillary central incisors by using computer-aided design and computer-aided manufacturing (CAD-CAM) monolithic lithium disilicate ceramic veneers. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Performance Of The IEEE 802.15.4 Protocol As The Marker Of Augmented Reality In Museum
NASA Astrophysics Data System (ADS)
Kurniawan Saputro, Adi; Sumpeno, Surya; Hariadi, Mochamad
2018-04-01
A museum is a place that preserves historic objects and serves as a center of historical education, introducing the nation’s culture. Applying technology in a museum as part of a smart city is a challenge. The Internet of Things (IoT) is an advance in information and communication technology (ICT) that can be applied in the museum. Current ICT development concerns not only transmission media; Augmented Reality technology is also being developed. Augmented Reality technology typically makes virtual objects appear in the real world using markers or images. In this study, the researchers used radio signals to make virtual objects appear in the real world, with the IEEE 802.15.4 protocol replacing the Augmented Reality marker. RSSI and triangulation provide microlocation for AR objects in place of markers. The results show that the performance of the wireless sensor network is adequate for data transmission in the museum: line-of-sight (LOS) tests at a distance of 15 meters with a 1000 ms delay found a 1.4% error rate, and non-line-of-sight (NLOS) tests a 2.3% error rate. It can therefore be concluded that IoT technology using wireless sensor network signals in place of augmented reality markers can be used in a museum.
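The RSSI-plus-triangulation idea can be sketched as follows: distances are estimated from RSSI with a log-distance path-loss model, and the position is solved by linearized trilateration from three anchor nodes. The path-loss parameters (`tx_power`, exponent `n`) and coordinates below are illustrative assumptions, not values from the study.

```python
import math

def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Invert the log-distance path-loss model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(anchors, dists):
    """Solve for (x, y) from 3 anchors by subtracting the first range equation."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Linear system A [x, y]^T = c obtained from the range differences.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = d1**2 - d2**2 + x2**2 + y2**2 - x1**2 - y1**2
    c2 = d1**2 - d3**2 + x3**2 + y3**2 - x1**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)

# Simulated readings: target at (3, 4) metres, three anchor nodes.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_d = [math.dist((3.0, 4.0), a) for a in anchors]
rssi = [-40.0 - 20 * math.log10(d) for d in true_d]  # forward model, no noise
est = trilaterate(anchors, [rssi_to_distance(r) for r in rssi])
```

With noise-free RSSI the estimate recovers (3, 4) exactly; in practice RSSI fluctuation is what produces error rates like those measured in the museum tests.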
Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications
NASA Astrophysics Data System (ADS)
Thanaborvornwiwat, N.; Patanukhom, K.
2018-04-01
Marker registration is a fundamental process for estimating camera poses in marker-based Augmented Reality (AR) systems. We developed an AR system that creates corresponding virtual objects on handwritten text markers. This paper presents a new registration method that is robust to low-content text markers, variation in camera pose, and variation in handwriting style. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for feature point extraction. Experiments show that extracting only five feature points per image provides the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared the performance of the proposed method with existing registration methods and found that the proposed method provides better accuracy and time efficiency.
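With only five feature points, an exhaustive search over all 5! = 120 correspondences is cheap. The toy version below (a hypothetical reconstruction of the idea, not the authors' implementation) matches two point sets by comparing inter-point distance patterns, which are invariant to rotation and translation, under every permutation.

```python
import itertools
import math

def best_match(pts_a, pts_b):
    """Return permutation m such that pts_b[m[i]] best corresponds to pts_a[i],
    scored by the discrepancy of pairwise distances."""
    idx = range(len(pts_a))
    def cost(m):
        return sum((math.dist(pts_a[i], pts_a[j])
                    - math.dist(pts_b[m[i]], pts_b[m[j]])) ** 2
                   for i, j in itertools.combinations(idx, 2))
    return min(itertools.permutations(idx), key=cost)

# Five marker feature points; the second view is rotated, translated, reordered.
a = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (5.0, 5.0), (2.0, 7.0)]
th = math.radians(30)
def xform(p):
    return (p[0] * math.cos(th) - p[1] * math.sin(th) + 1.0,
            p[0] * math.sin(th) + p[1] * math.cos(th) + 2.0)
order = [2, 0, 4, 1, 3]
b = [xform(a[i]) for i in order]
m = best_match(a, b)  # m[i] is the index in b matching a[i]
```

Once correspondences are known, the camera pose (a homography in the real system) can be estimated from the matched points.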
ERIC Educational Resources Information Center
Chen, Yu-Hsuan; Wang, Chang-Hwa
2018-01-01
Although research has indicated that augmented reality (AR)-facilitated instruction improves learning performance, further investigation of the usefulness of AR from a psychological perspective has been recommended. Researchers consider presence a major psychological effect when users are immersed in virtual reality environments. However, most…
Virtual Reconstruction of Lost Architectures: from the Tls Survey to AR Visualization
NASA Astrophysics Data System (ADS)
Quattrini, R.; Pierdicca, R.; Frontoni, E.; Barcaglioni, R.
2016-06-01
The exploitation of high-quality 3D models for the dissemination of archaeological heritage is a currently investigated topic, although Mobile Augmented Reality platforms for historical architecture that would allow low-cost pipelines for effective content are not yet available. The paper presents a virtual anastylosis, starting from historical sources and from a 3D model based on a TLS survey. Several efforts and outputs in augmented or immersive environments exploiting this reconstruction are discussed. The work demonstrates the feasibility of a 3D reconstruction approach for complex architectural shapes starting from point clouds, and its AR/VR exploitation, allowing superimposition with the archaeological evidence. The major contributions are the presentation and discussion of a pipeline from the virtual model to its simplification, showing several outcomes and comparing the supported data qualities and the advantages and disadvantages arising from MAR and VR limitations.
Kong, Seong-Ho; Haouchine, Nazim; Soares, Renato; Klymchenko, Andrey; Andreiuk, Bohdan; Marques, Bruno; Shabat, Galyna; Piechaud, Thierry; Diana, Michele; Cotin, Stéphane; Marescaux, Jacques
2017-07-01
Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed on real-time images, enabling transparency visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account inner structures' deformations. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of pseudo-tumors' location was evaluated with a CT scan in the deformed status (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the estimated tumor by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in the in vivo kidney and well visualized in near-infrared mode, enabling accurate automatic registration of the virtual model on the laparoscopic images.
Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of solid organs' surface to their inner structures including tumors with good accuracy and automatized robust tracking.
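The first step of registering a model to tracked fiducials is a rigid point-set fit. The sketch below solves the 2D least-squares rotation and translation in closed form; the authors' system goes further, propagating deformation through an FEM model, which is not reproduced here, and the fiducial coordinates are invented for illustration.

```python
import math

def rigid_fit_2d(src, dst):
    """Least-squares rotation angle and translation mapping src fiducials onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s = c = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= csx; ay -= csy; bx -= cdx; by -= cdy  # center both point sets
        s += ax * by - ay * bx   # cross terms -> sine component
        c += ax * bx + ay * by   # dot terms   -> cosine component
    theta = math.atan2(s, c)
    # Translation carries the rotated source centroid onto the target centroid.
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, (tx, ty)

# Six simulated fiducials displaced by a known rotation and translation.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0), (2.0, 2.0), (1.0, 3.0)]
th, t = 0.5, (2.0, -1.0)
dst = [(x * math.cos(th) - y * math.sin(th) + t[0],
        x * math.sin(th) + y * math.cos(th) + t[1]) for x, y in src]
theta, trans = rigid_fit_2d(src, dst)
```

Replacing the rigid model with an FEM mesh is what lets the surface fiducial motion drive the estimated displacement of inner structures such as tumors.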
NASA Astrophysics Data System (ADS)
Soler, Luc; Marescaux, Jacques
2006-04-01
Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics are among the most revolutionary. Our work aims at setting up new techniques for detection, 3D delineation, and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems making tumor resection or treatment easier with the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time long-distance connection between practitioners so they can share the same 3D reconstructed patient and interact on that patient, virtually before the intervention and for real during the surgical procedure thanks to a telesurgical robot. In preclinical studies, our first results obtained from a micro-CT scanner show that these technologies provide efficient and precise 3D modeling of anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility of improving the therapeutic choice thanks to better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality, which provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and check on the virtual patient clone an optimal procedure without errors, which will be replayed on the real patient by the robot under surgeon control. This medical dream is today about to become reality.
Augmented Reality Robot-assisted Radical Prostatectomy: Preliminary Experience.
Porpiglia, Francesco; Fiori, Cristian; Checcucci, Enrico; Amparore, Daniele; Bertolo, Riccardo
2018-05-01
To present our preliminary experience with augmented reality robot-assisted radical prostatectomy (AR-RARP). From June to August 2017, patients who were candidates for RARP were enrolled and underwent high-resolution multiparametric magnetic resonance imaging (1-mm slices) according to a dedicated protocol. The obtained three-dimensional (3D) reconstruction was integrated into the robotic console to perform AR-RARP. According to the staging at magnetic resonance imaging or reconstruction, in case of cT2 prostate cancer, intrafascial nerve sparing (NS) was performed: a mark was placed on the prostate capsule to indicate the virtual underlying intraprostatic lesion; in case of cT3, standard NS AR-RARP was scheduled with AR-guided biopsy at the level of suspected extracapsular extension (ECE). Prostate specimens were scanned to assess concordance with the 3D model. Sixteen patients underwent the intrafascial NS technique (cT2), whereas 14 underwent standard NS plus selective biopsy of suspected ECE (cT3). Final pathology confirmed clinical staging. The positive surgical margin rate was 30% (no positive surgical margins in pT2). In patients whose intraprostatic lesions were marked, final pathology confirmed lesion location. In patients with suspected ECE, AR-guided selective biopsies confirmed the ECE location, with 11 of 14 biopsies (78%) positive for prostate cancer. Scanning of the prostate specimens revealed good overlap: the mismatch between 3D reconstruction and scan ranged from 1 to 5 mm, and in 85% of the entire surface the mismatch was <3 mm. In our preliminary experience, AR-RARP seems to be safe and effective, and the accuracy of the 3D reconstruction is promising. This technology still has limitations: the virtual models are manually oriented and rigid. Future collaborations with bioengineers will allow these limitations to be overcome. Copyright © 2018 Elsevier Inc. All rights reserved.
Khor, Wee Sim; Baker, Benjamin; Amin, Kavit; Chan, Adrian; Patel, Ketan; Wong, Jason
2016-12-01
The continuing enhancement of the surgical environment in the digital age has led to a number of innovations being highlighted as potential disruptive technologies in the surgical workplace. Augmented reality (AR) and virtual reality (VR) are rapidly becoming increasingly available, accessible and importantly affordable, hence their application into healthcare to enhance the medical use of data is certain. Whether it relates to anatomy, intraoperative surgery, or post-operative rehabilitation, applications are already being investigated for their role in the surgeons armamentarium. Here we provide an introduction to the technology and the potential areas of development in the surgical arena.
Virtual Reality and Augmented Reality in Plastic Surgery: A Review.
Kim, Youngjun; Kim, Hannah; Kim, Yong Oock
2017-05-01
Recently, virtual reality (VR) and augmented reality (AR) have received increasing attention, with the development of VR/AR devices such as head-mounted displays, haptic devices, and AR glasses. Medicine is considered to be one of the most effective applications of VR/AR. In this article, we describe a systematic literature review conducted to investigate the state-of-the-art VR/AR technology relevant to plastic surgery. The 35 studies that were ultimately selected were categorized into 3 representative topics: VR/AR-based preoperative planning, navigation, and training. In addition, future trends of VR/AR technology associated with plastic surgery and related fields are discussed.
A telescope with augmented reality functions
NASA Astrophysics Data System (ADS)
Hou, Qichao; Cheng, Dewen; Wang, Qiwei; Wang, Yongtian
2016-10-01
This study introduces a telescope with virtual reality (VR) and augmented reality (AR) functions. In this telescope, information shown on a micro-display screen is combined with the reticle of the telescope through a beam splitter and is then received by the observer. The design and analysis of the telescope optical system with AR and VR capability are accomplished, and the opto-mechanical structure is designed. Finally, a proof-of-concept prototype is fabricated and demonstrated. The telescope has an exit pupil diameter of 6 mm at an eye relief of 19 mm, a 6° field of view, 5 to 8 times visual magnification, and a 30° field of view of the virtual image.
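Two elementary paraxial relations tie the quoted specifications together: the apparent (virtual-image) field of view is roughly the true field times the magnification, and the entrance-pupil (objective) diameter is the exit pupil times the magnification. A quick sanity check, treating the quoted 6° field as the true field at the low 5x setting (the relations are small-angle approximations, and the derived aperture values are not stated in the abstract):

```python
# Figures taken from the abstract; everything else is derived.
exit_pupil_mm = 6.0
true_fov_deg = 6.0
mag_low, mag_high = 5.0, 8.0

# Apparent field of the virtual image ~ true field x magnification.
apparent_fov_low = true_fov_deg * mag_low   # matches the quoted 30 deg
# Entrance pupil (objective aperture) = exit pupil x magnification.
aperture_low = exit_pupil_mm * mag_low
aperture_high = exit_pupil_mm * mag_high
```

At 5x the predicted 30° apparent field agrees with the 30° virtual-image field quoted in the abstract.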
NASA Astrophysics Data System (ADS)
Hua, Hong
2017-02-01
Head-mounted light field displays render a true 3D scene by sampling either the projections of the 3D scene at different depths or the directions of the light rays apparently emitted by the 3D scene and viewed from different eye positions. They are capable of rendering correct or nearly correct focus cues and addressing the very well-known vergence-accommodation mismatch problem in conventional virtual and augmented reality displays. In this talk, I will focus on reviewing recent advancements of head-mounted light field displays for VR and AR applications. I will demonstrate examples of HMD systems developed in my group.
Augmented reality (AR) and virtual reality (VR) applied in dentistry.
Huang, Ta-Ko; Yang, Chi-Hsun; Hsieh, Yu-Hsin; Wang, Jen-Chyan; Hung, Chun-Cheng
2018-04-01
The objective structured clinical examination (OSCE) is a reliable method for evaluating the preclinical performance of dental students, and an augmented reality simulator is well suited to such assessment. This literature review surveys recent developments in virtual reality (VR) and augmented reality (AR) in dentistry, from the early history of the field to current skill training. Where technology is lacking, surgery must rely on additional devices to increase the success rate and decrease the risk of the procedure. The development of tracking units has changed both surgical practice and education; clinical surgery is built on mature education. VR and AR simultaneously affect training lessons and navigation systems. More broadly, VR and AR are applied not only in dental training and surgery but are also improving many fields of everyday life. Copyright © 2018. Published by Elsevier Taiwan.
Enhancing a Multi-body Mechanism with Learning-Aided Cues in an Augmented Reality Environment
NASA Astrophysics Data System (ADS)
Singh Sidhu, Manjit
2013-06-01
Augmented Reality (AR) is a potential area of research for education, covering issues such as tracking and calibration and the realistic rendering of virtual objects. The ability to augment the real world with virtual information has opened the possibility of using AR technology in areas such as education and training. In the domain of Computer-Aided Learning (CAL), researchers have long been looking into enhancing the effectiveness of the teaching and learning process by providing cues that could assist learners to better comprehend the materials presented. Although a number of studies have examined the effectiveness of learning-aided cues, none has addressed this issue for AR-based learning solutions. This paper discusses the design and model of AR-based software that uses visual cues to enhance the learning process, and reports the perception results of the cues.
Augmented reality in medical education?
Kamphuis, Carolien; Barsom, Esther; Schijven, Marlies; Christoph, Noor
2014-09-01
Learning in the medical domain is to a large extent workplace learning and involves mastery of complex skills that require performance up to professional standards in the work environment. Since training in this real-life context is not always possible for reasons of safety, costs, or didactics, alternative ways are needed to achieve clinical excellence. Educational technology and more specifically augmented reality (AR) has the potential to offer a highly realistic situated learning experience supportive of complex medical learning and transfer. AR is a technology that adds virtual content to the physical real world, thereby augmenting the perception of reality. Three examples of dedicated AR learning environments for the medical domain are described. Five types of research questions are identified that may guide empirical research into the effects of these learning environments. Up to now, empirical research mainly appears to focus on the development, usability and initial implementation of AR for learning. Limited review results reflect the motivational value of AR, its potential for training psychomotor skills and the capacity to visualize the invisible, possibly leading to enhanced conceptual understanding of complex causality.
NASA Technical Reports Server (NTRS)
Gutensohn, Michael
2018-01-01
The task for this project was to design, develop, test, and deploy a facial recognition system for the Kennedy Space Center Augmented/Virtual Reality Lab. This system will serve as a means of user authentication as part of the NUI of the lab. The overarching goal is to create a seamless user interface that will allow the user to initiate and interact with AR and VR experiences without ever needing to use a mouse or keyboard at any step in the process.
Advanced Visual and Instruction Systems for Maintenance Support (AVIS-MS)
2006-12-01
Hayashi, "Augmentable Reality: Situated Communication through Physical and Digital Spaces," Proc. 2nd Int'l Symp. Wearable Computers, IEEE CS Press … H. Ohno, "An Optical See-through Display for Mutual Occlusion of Real and Virtual Environments," Proc. Int'l Symp. Augmented Reality 2000 (ISAR 2000)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Sandia National Laboratories has developed a state-of-the-art augmented reality training system for close-quarters combat (CQB). This system uses a wearable augmented reality system to place the user in a real environment while engaging enemy combatants in virtual space (Boston Dynamics DI-Guy). Umbra modeling and simulation environment is used to integrate and control the AR system.
The Design of Immersive English Learning Environment Using Augmented Reality
ERIC Educational Resources Information Center
Li, Kuo-Chen; Chen, Cheng-Ting; Cheng, Shein-Yung; Tsai, Chung-Wei
2016-01-01
The study uses augmented reality (AR) technology to integrate virtual objects into the real learning environment for language learning. The English AR classroom is constructed using the system prototyping method and evaluated by semi-structured in-depth interviews. According to the flow theory proposed by Csikszentmihalyi in 1975, along with the immersive…
Enhancing and Transforming Global Learning Communities with Augmented Reality
ERIC Educational Resources Information Center
Frydenberg, Mark; Andone, Diana
2018-01-01
Augmented and virtual reality applications bring new insights to real world objects and scenarios. This paper shares research results of the TalkTech project, an ongoing study investigating the impact of learning about new technologies as members of global communities. This study shares results of a collaborative learning project about augmented…
A Collaborative Augmented Campus Based on Location-Aware Mobile Technology
ERIC Educational Resources Information Center
De Lucia, A.; Francese, R.; Passero, I.; Tortora, G.
2012-01-01
Mobile devices are changing the way people work and communicate. Most of the innovative devices offer the opportunity to integrate augmented reality in mobile applications, permitting the combination of the real world with virtual information. This feature can be particularly useful to enhance informal and formal didactic actions based on student…
Understanding the Conics through Augmented Reality
ERIC Educational Resources Information Center
Salinas, Patricia; Pulido, Ricardo
2017-01-01
This paper discusses the production of a digital environment to foster the learning of conics through augmented reality. The name conic refers to curves obtained by the intersection of a plane with a right circular conical surface. The environment gives students the opportunity to interact with the cone and the plane as virtual objects in real…
Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye
2016-01-01
This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or be combined to create a complete system able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control (CNC) milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to the CNC machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis-aligned bounding box collision detection. The case study revealed that, given the situation, a semi-circle-shaped arrangement is desirable, while the pick-and-place system and the final generated G-code produced maximum deviations of 3.83 mm and 5.8 mm, respectively. PMID:27271840
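The axis-aligned bounding box (AABB) test used by the CNC module's collision detection can be sketched as follows. This is an illustrative re-implementation of the standard formulation, not the authors' code: two AABBs overlap exactly when their intervals overlap on every axis.

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Return True if two axis-aligned bounding boxes intersect.

    Each box is given by its minimum and maximum corner coordinates.
    The boxes overlap iff their intervals overlap on every axis."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))
```

Because each axis is checked independently, the test rejects non-colliding pairs cheaply, which is why AABBs are a common broad-phase filter before finer collision checks in virtual machining.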
A Second Life for eHealth: Prospects for the Use of 3-D Virtual Worlds in Clinical Psychology
Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe
2008-01-01
The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed. PMID:18678557
Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback
NASA Astrophysics Data System (ADS)
Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steve
2011-03-01
Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it was a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology and medicine as well as other tasks that require hand-eye coordination. One of the most unique characteristics of HUVR is that a user can place their hands inside of the virtual environment without occluding the 3D image. Built using open-source software and consumer level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable and scalable.
3D augmented reality with integral imaging display
NASA Astrophysics Data System (ADS)
Shen, Xin; Hua, Hong; Javidi, Bahram
2016-06-01
In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.
Magnetosensitive e-skins with directional perception for augmented reality
Cañón Bermúdez, Gilbert Santiago; Karnaushenko, Dmitriy D.; Karnaushenko, Daniil; Lebanov, Ana; Bischoff, Lothar; Kaltenbrunner, Martin; Fassbender, Jürgen; Schmidt, Oliver G.; Makarov, Denys
2018-01-01
Electronic skins equipped with artificial receptors are able to extend our perception beyond the modalities that have naturally evolved. These synthetic receptors offer complementary information on our surroundings and endow us with novel means of manipulating physical or even virtual objects. We realize highly compliant magnetosensitive skins with directional perception that enable magnetic cognition, body position tracking, and touchless object manipulation. Transfer printing of eight high-performance spin valve sensors arranged into two Wheatstone bridges onto 1.7-μm-thick polyimide foils ensures mechanical imperceptibility. This represents a new class of interactive devices that extract information from their surroundings through magnetic tags. We demonstrate this concept in augmented reality systems with virtual knob-turning functions and the operation of virtual dialing pads, based on the interaction with magnetic fields. This technology will enable a cornucopia of applications from navigation, motion tracking in robotics, regenerative medicine, and sports and gaming to interaction in supplemented reality. PMID:29376121
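As a rough illustration of how two orthogonal bridge outputs yield directional perception: if the bridges' signals vary approximately as the cosine and sine of the in-plane field angle, the direction can be recovered with a two-argument arctangent. This is a simplified sketch assuming ideal, calibrated bridge signals; the function name is ours, not from the paper.

```python
import math

def field_angle(v_bridge_x, v_bridge_y):
    """Recover the in-plane magnetic field direction (radians) from two
    orthogonal Wheatstone-bridge outputs assumed to vary as cos/sin of
    the field angle. atan2 resolves the full -pi..pi range."""
    return math.atan2(v_bridge_y, v_bridge_x)
```

Mapping this angle to a rotation of an on-screen widget is, in essence, the virtual knob-turning interaction demonstrated in the paper.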
Technological advances in robotic-assisted laparoscopic surgery.
Tan, Gerald Y; Goel, Raj K; Kaouk, Jihad H; Tewari, Ashutosh K
2009-05-01
In this article, the authors describe the evolution of urologic robotic systems and the current state-of-the-art features and existing limitations of the da Vinci S HD System (Intuitive Surgical, Inc.). They then review promising innovations in scaling down the footprint of robotic platforms, the early experience with mobile miniaturized in vivo robots, advances in endoscopic navigation systems using augmented reality technologies and tracking devices, the emergence of technologies for robotic natural orifice transluminal endoscopic surgery and single-port surgery, advances in flexible robotics and haptics, the development of new virtual reality simulator training platforms compatible with the existing da Vinci system, and recent experiences with remote robotic surgery and telestration.
NASA Astrophysics Data System (ADS)
Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid
2017-10-01
The number of breast cancer patients requiring breast biopsy has increased over the past years, and augmented reality guided core biopsy of the breast has become a method of choice for researchers. However, existing cancer visualization is limited to superimposing the 3D imaging data. In this paper, we introduce an augmented reality visualization framework that enables breast cancer biopsy image guidance using an X-ray vision technique on a mobile display. The framework consists of 4 phases: it first acquires images from CT/MRI and processes the medical images into 3D slices; second, it reconstructs these 3D grayscale slices into a 3D breast tumor model using a 3D model reconstruction technique. In the visualization processing, the virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework provides an improved visualization experience because the augmented reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance toward accurate biopsy targets.
Registration using natural features for augmented reality systems.
Yuan, M L; Ong, S K; Nee, A Y C
2006-01-01
Registration is one of the most difficult problems in augmented reality (AR) systems. In this paper, a simple registration method using natural features based on the projective reconstruction technique is proposed. This method consists of two steps: embedding and rendering. Embedding involves specifying four points to build the world coordinate system on which a virtual object will be superimposed. In rendering, the Kanade-Lucas-Tomasi (KLT) feature tracker is used to track the natural feature correspondences in the live video. The natural features that have been tracked are used to estimate the corresponding projective matrix in the image sequence. Next, the projective reconstruction technique is used to transfer the four specified points to compute the registration matrix for augmentation. This paper also proposes a robust method for estimating the projective matrix, where the natural features that have been tracked are normalized (translation and scaling) and used as the input data. The estimated projective matrix is used as an initial estimate for a nonlinear optimization method that minimizes the actual residual errors based on the Levenberg-Marquardt (LM) minimization method, thus making the results more robust and stable. The proposed registration method has three major advantages: 1) It is simple, as no predefined fiducials or markers are needed for either indoor or outdoor AR applications. 2) It is robust, because it remains effective as long as at least six natural features are tracked during the entire augmentation, and the existence of the corresponding projective matrices in the live video is guaranteed. Meanwhile, the robust method of estimating the projective matrix can obtain stable results even when there are some outliers during the tracking process. 3) Virtual objects can still be superimposed on the specified areas, even if parts of those areas are occluded during the entire process.
Some indoor and outdoor experiments have been conducted to validate the performance of this proposed method.
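The linear estimation of the projective matrix from normalized point correspondences can be sketched with a normalized direct linear transform (DLT). This is a generic reconstruction of the standard technique the abstract alludes to, not the authors' code; in their pipeline a Levenberg-Marquardt refinement would follow this initial linear estimate.

```python
import numpy as np

def normalization_transform(pts):
    """3x3 similarity that translates points to their centroid and scales
    them so the mean distance from the origin is sqrt(2) (Hartley-style
    normalization, i.e. the translation-and-scaling step in the abstract)."""
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
    return np.array([[scale, 0.0, -scale * centroid[0]],
                     [0.0, scale, -scale * centroid[1]],
                     [0.0, 0.0, 1.0]])

def estimate_homography(src, dst):
    """Linear (DLT) estimate of the 3x3 projective matrix H mapping
    src -> dst from >= 4 point correspondences."""
    T_src = normalization_transform(src)
    T_dst = normalization_transform(dst)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        xs, ys, _ = T_src @ [x, y, 1.0]   # normalized source point
        us, vs, _ = T_dst @ [u, v, 1.0]   # normalized target point
        rows.append([-xs, -ys, -1, 0, 0, 0, us * xs, us * ys, us])
        rows.append([0, 0, 0, -xs, -ys, -1, vs * xs, vs * ys, vs])
    # The homography vector spans the null space of the design matrix:
    # take the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(rows))
    H_norm = vt[-1].reshape(3, 3)
    H = np.linalg.inv(T_dst) @ H_norm @ T_src  # undo the normalizations
    return H / H[2, 2]
```

With exact correspondences the linear solution is already accurate; with noisy tracked features it serves as the starting point for the nonlinear LM refinement described above.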
NASA Astrophysics Data System (ADS)
Aubert, A. H.; Schnepel, O.; Kraft, P.; Houska, T.; Plesca, I.; Orlowski, N.; Breuer, L.
2015-11-01
This paper addresses education and communication in hydrology and the geosciences. Many approaches can be used, such as the well-known seminars, modelling exercises, and practical field work, but outdoor learning in our discipline is a must, and this paper focuses on the recent development of a new outdoor learning tool at the landscape scale. To facilitate improved teaching and hands-on experience, we designed the Studienlandschaft Schwingbachtal. Equipped with field instrumentation, education trails, and geocaches, we have now implemented an augmented reality app, adding virtual teaching objects onto the real landscape. The app development is detailed, to serve as a methodology for people wishing to implement such a tool. The resulting application, the Schwingbachtal App, is described as an example. We conclude that such an app is useful for communication and education purposes, making learning pleasant and offering personalized options.
An Interactive Augmented Reality Implementation of Hijaiyah Alphabet for Children Education
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Akbar, F.; Syahputra, M. F.; Budiman, M. A.; Hizriadi, A.
2018-03-01
The Hijaiyah alphabet comprises the letters used in the Qur’an. An attractive and exciting learning process for the Hijaiyah alphabet is necessary for children. One alternative for creating such a learning process is to develop it into a mobile application using augmented reality technology. Augmented reality is a technology that combines two-dimensional or three-dimensional virtual objects with the actual three-dimensional environment and projects them in real time. The application aims to foster children's interest in learning the Hijaiyah alphabet. It uses a smartphone and a marker as the medium and was built using Unity and an augmented reality library, namely Vuforia, with Blender as the 3D object modeling software. The output of this research is a learning application for the Hijaiyah letters using augmented reality. It is used as follows: first, place a marker that has been registered and printed; second, the smartphone camera tracks the marker. If the marker is invalid, the user repeats the tracking process. If the marker is valid and identified, the objects of the Hijaiyah alphabet are projected onto it in three-dimensional form. Lastly, the user can learn the shape and pronunciation of each Hijaiyah letter by touching the virtual button on the marker.
Augmented reality in dentistry: a current perspective.
Kwon, Ho-Beom; Park, Young-Seok; Han, Jung-Suk
2018-02-21
Augmented reality technology offers virtual information in addition to that of the real environment and thus opens new possibilities in various fields. Medical applications of augmented reality are generally concentrated on surgery, including neurosurgery, laparoscopic surgery, and plastic surgery. Augmented reality technology is also widely used in medical education and training. In dentistry, oral and maxillofacial surgery is the primary area of use, where dental implant placement and orthognathic surgery are the most frequent applications. Recent technological advancements are enabling new applications in restorative dentistry, orthodontics, and endodontics. This review briefly summarizes the history, definitions, features, and components of augmented reality technology and discusses its applications and future perspectives in dentistry.
Virtual Environments: Issues and Opportunities for Researching Inclusive Educational Practices
NASA Astrophysics Data System (ADS)
Sheehy, Kieron
This chapter argues that virtual environments offer new research areas for those concerned with inclusive education. Further, it proposes that they also present opportunities for developing increasingly inclusive research processes. The chapter considers how researchers might approach researching some of these affordances. It discusses the relationship between specific features of inclusive pedagogy, derived from an international systematic literature review, and the affordances of different forms of virtual characters and environments. Examples are drawn from research in Second LifeTM (SL), virtual tutors, and augmented reality. In doing this, the chapter challenges a simplistic notion of isolated physical and virtual worlds and, in the context of inclusion, of a separation between the practice of research and the research topic itself. There are a growing number of virtual worlds in which identified educational activities are taking place, or whose activities are being noted for their educational merit. These encompass non-themed worlds such as SL and Active Worlds, game-based worlds such as World of Warcraft and Runescape, and even Club Penguin, a themed virtual world where younger players interact through a variety of penguin-themed environments and activities. It has been argued that these spaces, outside traditional education, are able to offer pedagogical insights (Twining 2009), i.e. that these global virtual communities are useful as creative educational environments (Delwiche 2006; Sheehy 2009). This chapter explores how researchers might use these spaces to investigate and create inclusive educational experiences for learners. In order to do this, the chapter considers three interrelated questions: What is inclusive education? How might inclusive education influence virtual world research? And what might inclusive education look like in virtual worlds?
Display technologies for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Lee, Seungjae; Jang, Changwon; Hong, Jong-Young; Li, Gang
2018-02-01
By virtue of rapid progress in optics, sensors, and computer science, we are witnessing commercial products and prototypes for augmented reality (AR) penetrating the consumer markets. AR is in the spotlight because it is expected to provide a much more immersive and realistic experience than ordinary displays. However, several barriers must be overcome for the successful commercialization of AR. Here, we explore challenging and important topics for AR such as image combiners, enhancement of display performance, and focus cue reproduction. Image combiners are essential to integrate virtual images with the real world. Display performance (e.g. field of view and resolution) is important for a more immersive experience, and focus cue reproduction may mitigate visual fatigue caused by the vergence-accommodation conflict. We also demonstrate emerging technologies to overcome these issues: the index-matched anisotropic crystal lens (IMACL), retinal projection displays, and 3D displays with focus cues. For image combiners, a novel optical element called the IMACL provides a relatively wide field of view. Retinal projection displays may enhance the field of view and resolution of AR displays. Focus cues can be reconstructed via multi-layer displays and holographic displays. Experimental results of our prototypes are explained.
Yoo, Ji Won; Lee, Dong Ryul; Cha, Young Joo; You, Sung Hyun
2017-01-01
The purpose of the present study was to compare the therapeutic effects of electromyography (EMG) biofeedback augmented by virtual reality (VR) and EMG biofeedback alone on triceps-biceps (T:B) muscle activity imbalance and elbow joint movement coordination during a reaching motor task in normal children and children with spastic cerebral palsy (CP). Eighteen children with spastic CP (2 females; mean ± standard deviation = 9.5 ± 1.96 years) and 8 normal children (3 females; 9.75 ± 2.55 years) were recruited from a local community center. All children with CP first underwent one intensive session of EMG biofeedback (30 minutes), followed by one session of EMG-VR feedback (30 minutes) after a 1-week washout period. Clinical tests included elbow extension range of motion (ROM), biceps muscle strength, and the box and block test. EMG T:B muscle activity imbalance and reaching movement acceleration coordination were concurrently determined by EMG and 3-axis accelerometer measurements, respectively. Independent t-tests and one-way repeated-measures analysis of variance (ANOVA) were performed at p < 0.05. The one-way repeated-measures ANOVA revealed significant effects on elbow extension ROM (p = 0.01), biceps muscle strength (p = 0.01), the box and block test (p = 0.03), and peak triceps muscle activity (p = 0.01). However, it showed no statistical significance for the composite 3-dimensional movement acceleration coordination data (p = 0.12).
The present study is the first clinical trial to demonstrate the benefits of EMG biofeedback augmented by virtual reality exercise games in children with spastic CP. The augmented EMG-VR feedback produced better neuromuscular balance control in the elbow joint than EMG biofeedback alone.
Learning Application of Astronomy Based Augmented Reality using Android Platform
NASA Astrophysics Data System (ADS)
Maleke, B.; Paseru, D.; Padang, R.
2018-02-01
Astronomy is a branch of science involving observations of celestial bodies such as stars, planets, comets, nebulae, star clusters, and galaxies, as well as natural phenomena occurring outside the Earth’s atmosphere. Ways of learning astronomy are quite varied, such as using a book or observing directly with a telescope. But both have shortcomings: learning through books is limited to 2D drawings, however interesting, while learning with a telescope requires fairly expensive equipment. This study presents a more engaging way of learning, namely an Augmented Reality (AR) application on the Android platform. Augmented reality is a computer-generated combination of the virtual world and the real world. Virtual objects can be text, animations, 3D models, or videos that are combined with the actual environment so that the user feels the virtual object is in his or her surroundings. With the Android platform, the application makes the learning method more interesting because it can be used on various Android smartphones, so learning can be done anytime and anywhere. The methodology used in making the application is the Multimedia Lifecycle, with the C# language for AR programming and flowcharts as a modelling tool. Tests with users showed that the application runs well and can serve as an alternative, more interesting way of learning astronomy.
ERIC Educational Resources Information Center
Menorath, Darren; Antonczak, Laurent
2017-01-01
This paper examines the state of the art of mobile Augmented Reality (AR) and mobile Virtual Reality (VR) in relation to collaboration and professional practices in a creative digital environment and higher education. To support their discussion, the authors use a recent design-based research project named "Juxtapose," which explores…
ERIC Educational Resources Information Center
Harley, Jason M.; Poitras, Eric G.; Jarrell, Amanda; Duffy, Melissa C.; Lajoie, Susanne P.
2016-01-01
Research on the effectiveness of augmented reality (AR) on learning exists, but there is a paucity of empirical work that explores the role that positive emotions play in supporting learning in such settings. To address this gap, this study compared undergraduate students' emotions and learning outcomes during a guided historical tour using mobile…
Visual Environment for Designing Interactive Learning Scenarios with Augmented Reality
ERIC Educational Resources Information Center
Mota, José Miguel; Ruiz-Rube, Iván; Dodero, Juan Manuel; Figueiredo, Mauro
2016-01-01
Augmented Reality (AR) technology allows the inclusion of virtual elements on a vision of actual physical environment for the creation of a mixed reality in real time. This kind of technology can be used in educational settings. However, the current AR authoring tools present several drawbacks, such as, the lack of a mechanism for tracking the…
Integrating Augmented Reality in Higher Education: A Multidisciplinary Study of Student Perceptions
ERIC Educational Resources Information Center
Delello, Julie A.; McWhorter, Rochell R.; Camp, Kerri M.
2015-01-01
Augmented reality (AR) is an emerging technology that blends physical objects with virtual reality. Through the integration of digital and print media, a gap between the "on and offline" worlds are merged, radically shifting student-computer interaction in the classroom. This research examined the results of a multiple case study on the…
Pursuit of X-ray Vision for Augmented Reality
2012-01-01
ERIC Educational Resources Information Center
Dunleavy, Matt; Dede, Chris; Mitchell, Rebecca
2009-01-01
The purpose of this study was to document how teachers and students describe and comprehend the ways in which participating in an augmented reality (AR) simulation aids or hinders teaching and learning. Like the multi-user virtual environment (MUVE) interface that underlies Internet games, AR is a good medium for immersive collaborative…
Virtually Nursing: Emerging Technologies in Nursing Education.
Foronda, Cynthia L; Alfes, Celeste M; Dev, Parvati; Kleinheksel, A J; Nelson, Douglas A; OʼDonnell, John M; Samosky, Joseph T
Augmented reality and virtual simulation technologies in nursing education are burgeoning. Preliminary evidence suggests that these innovative pedagogical approaches are effective. The aim of this article is to present 6 newly emerged products and systems that may improve nursing education. Technologies may present opportunities to improve teaching efforts, better engage students, and transform nursing education.
ERIC Educational Resources Information Center
Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.
2016-01-01
Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected…
Telescopic multi-resolution augmented reality
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold
2014-05-01
To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework based on known principles of physics, biology, and chemistry. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system, driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information but also to conduct thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensional parameter space of measured and unmeasured regions. Toward this goal, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interaction requires identifying observable image features that can indicate the presence of information at multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training, as well as the `make believe' entertainment industry, in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, visualized pedagogically in consistent, zoom-in-and-out, multi-scale approximations.
Working Group Reports and Presentations: Virtual Worlds and Virtual Exploration
NASA Technical Reports Server (NTRS)
LAmoreaux, Claudia
2006-01-01
Scientists and engineers are continually developing innovative methods to capitalize on recent developments in computational power. Virtual worlds and virtual exploration present a new toolset for project design, implementation, and resolution. Replication of the physical world in the virtual domain provides stimulating displays to augment current data analysis techniques and to encourage public participation. In addition, the virtual domain provides stakeholders with a low cost, low risk design and test environment. The following document defines a virtual world and virtual exploration, categorizes the chief motivations for virtual exploration, elaborates upon specific objectives, identifies roadblocks and enablers for realizing the benefits, and highlights the more immediate areas of implementation (i.e. the action items). While the document attempts a comprehensive evaluation of virtual worlds and virtual exploration, the innovative nature of the opportunities presented precludes completeness. The authors strongly encourage readers to derive additional means of utilizing the virtual exploration toolset.
Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality
NASA Astrophysics Data System (ADS)
Hua, Hong
2017-05-01
Developing head-mounted displays (HMDs) that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, from both technological and human-factors perspectives. Among these, minimizing visual discomfort is one of the key obstacles. One of the key contributing factors to visual discomfort is the inability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I provide a summary of the various optical approaches toward enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).
Mass production of holographic transparent components for augmented and virtual reality applications
NASA Astrophysics Data System (ADS)
Russo, Juan Manuel; Dimov, Fedor; Padiyar, Joy; Coe-Sullivan, Seth
2017-06-01
Diffractive optics such as holographic optical elements (HOEs) can provide transparent and narrow band components with arbitrary incident and diffracted angles for near-to-eye commercial electronic products for augmented reality (AR), virtual reality (VR), and smart glass applications. In this paper, we will summarize the operational parameters and general optical geometries relevant for near-to-eye displays, the holographic substrates available for these applications, and their performance characteristics and ease of manufacture. We will compare the holographic substrates available in terms of fabrication, manufacturability, and end-user performance characteristics. Luminit is currently emplacing the manufacturing capacity to serve this market, and this paper will discuss the capabilities and limitations of this unique facility.
An industrial approach to design compelling VR and AR experience
NASA Astrophysics Data System (ADS)
Richir, Simon; Fuchs, Philippe; Lourdeaux, Domitile; Buche, Cédric; Querrec, Ronan
2013-03-01
The convergence of technologies currently observed in the fields of VR, AR, robotics, and consumer electronics reinforces the trend of new applications appearing every day. But when transferring knowledge acquired from research to businesses, research laboratories are often at a loss because of a lack of knowledge of the design and integration processes involved in creating an industrial-scale product. In fact, the innovation approaches that take a good idea from the laboratory to a successful industrial product are often little known to researchers. The objective of this paper is to present the results of the work of several research teams that have finalized a working method for researchers and manufacturers that allows them to design virtual or augmented reality systems and enable their users to enjoy "a compelling VR experience". That approach, called "the I2I method", presents 11 phases, from "Establishing technological and competitive intelligence and industrial property" to "Improvements", through the "Definition of the Behavioral Interface, Virtual Environment and Behavioral Software Assistance". As a result of the experience gained by various research teams, this design approach benefits from contributions from current VR and AR research. Our objective is to validate such multidisciplinary design team methods and continuously move them forward.
The Reality of Virtual Reality Product Development
NASA Astrophysics Data System (ADS)
Dever, Clark
Virtual Reality and Augmented Reality are emerging areas of research and product development in enterprise companies. This talk will discuss industry-standard tools and current areas of application in the commercial market. Attendees will gain insights into how to research, design, and (most importantly) ship world-class products. The presentation will recount the lessons learned to date developing a Virtual Reality tool to solve physics problems resulting from trying to perform aircraft maintenance on ships at sea.
Virtual Rehabilitation with Children: Challenges for Clinical Adoption [From the Field].
Glegg, Stephanie
2017-01-01
Virtual, augmented, and mixed reality environments are increasingly being developed and used to address functional rehabilitation goals related to physical, cognitive, social, and psychological impairments. For example, a child with an acquired brain injury may participate in virtual rehabilitation to address impairments in balance, attention, turn taking, and engagement in therapy. The trend toward virtual rehabilitation first gained momentum with the adoption of commercial off-the-shelf active video gaming consoles (e.g., Nintendo Wii and XBox). Now, we are seeing the rapid emergence of customized rehabilitation-specific systems that integrate technological advances in virtual reality, visual effects, motion tracking, physiological monitoring, and robotics.
Augmenting breath regulation using a mobile driven virtual reality therapy framework.
Abushakra, Ahmad; Faezipour, Miad
2014-05-01
This paper presents a conceptual framework of a virtual reality therapy to assist individuals, especially lung cancer patients or those with breathing disorders, in regulating their breathing through real-time analysis of respiration movements using a smartphone. Virtual reality technology is an attractive means for medical simulation and treatment, particularly for patients with cancer. The theories, methodologies, approaches, and real-world dynamic content for all components of this smartphone-based virtual reality therapy (VRT) framework will be discussed. The architecture and technical aspects of the offshore platform of the virtual environment will also be presented.
Fusion interfaces for tactical environments: An application of virtual reality technology
NASA Technical Reports Server (NTRS)
Haas, Michael W.
1994-01-01
The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory, virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, and haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.
ERIC Educational Resources Information Center
Ong, Alex
2010-01-01
The use of augmented reality (AR) tools, where virtual objects such as tables and graphs can be displayed and be interacted with in real scenes created from imaging devices, in mainstream school curriculum is uncommon, as they are potentially costly and sometimes bulky. Thus, such learning tools are mainly applied in tertiary institutions, such as…
Quantitative Predictions of Binding Free Energy Changes in Drug-Resistant Influenza Neuraminidase
2012-08-30
Mutations in influenza neuraminidase (NA) confer drug resistance to two antiviral drugs, zanamivir and oseltamivir. We augmented molecular dynamics (MD) with Hamiltonian Replica Exchange, sampling conformations that are virtually identical to WT [10], in molecular simulations that rigorously model the microscopic structure and thermodynamics.
PROJECT HEAVEN: Preoperative Training in Virtual Reality
Iamsakul, Kiratipath; Pavlovcik, Alexander V.; Calderon, Jesus I.; Sanderson, Lance M.
2017-01-01
A cephalosomatic anastomosis (CSA; also called HEAVEN: head anastomosis venture) has been proposed as an option for patients with neurological impairments, such as spinal cord injury (SCI), and terminal medical illnesses, for which medicine is currently powerless. Protocols to prepare a patient for life after CSA do not currently exist. However, methods used in conventional neurorehabilitation can be used as a reference for developing preparatory training. Studies on virtual reality (VR) technologies have documented VR's ability to enhance rehabilitation and improve the quality of recovery in patients with neurological disabilities. VR-augmented rehabilitation resulted in increased motivation towards performing functional training and improved the biopsychosocial state of patients. In addition, VR experiences coupled with haptic feedback promote neuroplasticity, resulting in the recovery of motor functions in neurologically-impaired individuals. To prepare the recipient psychologically for life after CSA, the development of VR experiences paired with haptic feedback is proposed. This proposal aims to innovate techniques in conventional neurorehabilitation to implement preoperative psychological training for the recipient of HEAVEN. Recipient's familiarity to body movements will prevent unexpected psychological reactions from occurring after the HEAVEN procedure. PMID:28540125
NASA Astrophysics Data System (ADS)
Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi
This paper describes an interactive experimental environment for autonomous soccer robots, which is a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect simulated data into the real environments, e.g., to visualize the positions on the field, so as to create a situation that allows easy debugging of robot programs. The significant point compared with analogous work is that virtual objects are touchable in this system owing to projectors. We also show the portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We make our robots utilize virtual balls in order to perform only quadruped locomotion in real environments, which is quite difficult to simulate accurately. Our robots autonomously learn and acquire more beneficial strategies without human intervention in our augmented environment than those in a fully simulated environment.
NASA Astrophysics Data System (ADS)
Tan, Kian Lam; Lim, Chen Kim
2017-10-01
In the last decade, cultural heritage, including historical sites, has been reconstructed into digital heritage. According to UNESCO, digital heritage is defined as "cultural, educational, scientific and administrative resources, as well as technical, legal, medical and other kinds of information created digitally, or converted into digital form from existing analogue resources". In addition, digital heritage is doubling in size every two years and is expected to grow tenfold between 2013 and 2020. In order to attract and stir the interest of younger generations in digital heritage, gamification has been widely promoted. In this research, a virtual walkthrough combined with gamification is proposed for learning about and exploring historical places in Malaysia using a mobile device. In conjunction with the Visit Perak 2017 Campaign, this virtual walkthrough is proposed for Kellie's Castle in Perak. The objectives of this research are twofold: 1) modelling and design of an innovative mobile game for a virtual walkthrough application, and 2) attracting tourists to explore and learn about historical places using sophisticated graphics from Augmented Reality. The efficiency and effectiveness of the mobile virtual walkthrough will be assessed by international and local tourists. In conclusion, this research is expected to pervasively improve the cultural and historical knowledge of learners.
Augmenting your own reality: student authoring of science-based augmented reality games.
Klopfer, Eric; Sheldon, Josh
2010-01-01
Augmented Reality (AR) simulations superimpose a virtual overlay of data and interactions onto a real-world context. The simulation engine at the heart of this technology is built to afford elements of game play that support explorations and learning in students' natural context--their own community and surroundings. In one of the more recent games, TimeLab 2100, players role-play citizens of the early 22nd century when global climate change is out of control. Through AR, they see their community as it might be nearly one hundred years in the future. TimeLab and other similar AR games balance location specificity and portability--they are games that are tied to a location and games that are movable from place to place. Focusing students on developing their own AR games provides the best of both virtual and physical worlds: a more portable solution that deeply connects young people to their own surroundings. A series of initiatives has focused on technical and pedagogical solutions to supporting students authoring their own games.
NASA Astrophysics Data System (ADS)
Ferderbar, Catherine A.
To develop sustainable solutions to remediate the complex ecological problems of earth's soil, water, and air degradation requires the talents and skills of knowledgeable, motivated people (UNESCO, 1977; UNESCO, 2010). Researchers have historically emphasized that time spent in outdoor, nature activities (Wells & Lekies, 2006), particularly with an adult mentor (Chawla & Cushing, 2007), promotes environmental knowledge and nature-relatedness, precursors to environmental literacy. Research has also demonstrated that technology is integral to the lives of youth, who spend 7 hours and 38 minutes daily engaged in electronics (Rideout et al., 2010). Educators would benefit from knowing whether in-nature and virtual-nature field trip experiences provide comparable levels of knowledge and connectedness, to nurture student proenvironmentalism. To investigate field trip phenomena, the researcher studied the impact of virtual-nature and in-nature experiences during which students analyzed water quality along Midwestern rivers. The quasi-experimental, mixed-method convergent parallel design, with a purposeful sample (n=131) of middle school students from two Midwestern K-8 schools, utilized scientist participant-observer field records and a narrative-response written assessment aligned to field trip content to evaluate knowledge acquisition. To gain insight into student environmental dispositions, participant observers recorded student comments and behaviors throughout field trips. A survey, administered Pre-Treatment, Post-Treatment 1, and Post-Treatment 2, focused on family water-related behaviors and student perceptions of the need for local government water protection. The findings demonstrated that both field trips increased content knowledge significantly, with large effect size. Content knowledge gained from one experience transferred to and was augmented by the second experience. Skill gain (technical and observational) varied by type of field trip and did not transfer.
Technical skill was often paired with critical thinking/reasoning. Survey results demonstrated that the virtual-nature, in-nature order evinced a greater proenvironmental attitude and behavioral change. The initial experience provided greater proenvironmental impact, regardless of order. Several students exhibited a Connection to Life Experience that reinforced their nature-relatedness during either field trip. These findings inform best practices associated with environmental education. The implications include teacher-practitioner collaboration with IT personnel, naturalists, hydrologists, zoological and botanical experts, to design local, site-based virtual-nature and in-nature (or hybrid) field trips to nurture environmental literacy goals.
NASA Astrophysics Data System (ADS)
Mejías Borrero, A.; Andújar Márquez, J. M.
2012-10-01
Lab practices are an essential part of teaching in Engineering. However, traditional laboratory lessons developed in classroom labs (CL) must be adapted to teaching and learning strategies that go far beyond the common concept of e-learning, in the sense that completely virtualized distance education disconnects teachers and students from the real world, which can generate specific problems in laboratory classes. Current proposals for virtual labs (VL) and remote labs (RL) neither cover new needs properly nor contribute remarkable improvement over traditional labs, except that they favor distance training. Therefore, online teaching and learning in lab practices demand a further step beyond current VL and RL. This paper poses a new reality and new teaching/learning concepts in the field of lab practices in engineering. The developed augmented reality-based lab system (augmented remote lab, ARL) enables teachers and students to work remotely (Internet/intranet) in current CL, including virtual elements which interact with real ones. An educational experience was conducted to assess the developed ARL with the participation of a group of 10 teachers and another group of 20 students. Both groups completed lab practices covering content from the subjects Digital Systems and Robotics and Industrial Automation, which belong to the second year of the new degree in Electronic Engineering (adapted to the European Higher Education Area). The labs were carried out by means of three different possibilities: CL, VL, and ARL. After completion, both groups were asked to fill in questionnaires aimed at measuring the improvement contributed by ARL relative to CL and VL. Except in some specific questions, the opinions of teachers and students were rather similar and positive regarding the use and possibilities of ARL. Although the results are still preliminary and need further study, they suggest that ARL remarkably improves on the possibilities of current VL and RL. Furthermore, ARL appears to offer more possibilities when used online than traditional laboratory lessons completed in CL.
Developing an Augmented Reality Environment for Earth Science Education
NASA Astrophysics Data System (ADS)
Pratt, M. J.; Skemer, P. A.; Arvidson, R. E.
2017-12-01
The emerging field of augmented reality (AR) provides new and exciting ways to explore geologic phenomena for research and education. The primary advantage of AR is that it allows users to physically explore complex three-dimensional structures that were previously inaccessible, for example a remote geologic outcrop or a mineral structure at the atomic scale. It is used, for example, with OnSight software during tactical operations to plan the Mars Curiosity rover's traverses by providing virtual views to walk through the terrain and the rover at true scale. This mode of physical exploration gives users more freedom to investigate and understand 3D structure than is possible on a flat computer screen or within a static PowerPoint presentation during a classroom lecture. The Microsoft HoloLens headset provides the most advanced mobile AR platform currently available to developers. The Fossett Laboratory for Virtual Planetary Exploration at Washington University in St. Louis has applied this technology, coupled with photogrammetric software and the Unity 3D gaming engine, to develop photorealistic environments of 3D geologic outcrops from around the world. The untethered HoloLens provides an ideal platform for a classroom setting, as it allows for shared experiences of the holograms of interest, projecting them in the same location for all users to explore. Furthermore, the HoloLens allows for face-to-face communication during use, a feature important in teaching that virtual reality does not allow. Our development of an AR application includes the design of an online database of photogrammetric outcrop models curated for the current limitations of AR technology. This database will be accessible both to those wishing to submit models and, free of charge, to those wishing to use the application for teaching, outreach, or research purposes.
Olfactory Stimuli Increase Presence in Virtual Environments
Munyan, Benson G.; Neer, Sandra M.; Beidel, Deborah C.; Jentsch, Florian
2016-01-01
Background Exposure therapy (EXP) is the most empirically supported treatment for anxiety and trauma-related disorders. EXP consists of repeated exposure to a feared object or situation in the absence of the feared outcome in order to extinguish associated anxiety. Key to the success of EXP is the need to present the feared object/event/situation in as much detail, and utilizing as many sensory modalities, as possible, in order to augment the sense of presence during exposure sessions. Various technologies have been used to augment the exposure therapy process by presenting multi-sensory cues (e.g., sights, smells, sounds). Studies have shown that scents can elicit emotionally charged memories, but no prior research has examined the effect of olfactory stimuli upon the patient's sense of presence during simulated exposure tasks. Methods 60 adult participants navigated a mildly anxiety-producing virtual environment (VE) similar to those used in the treatment of anxiety disorders. Participants had no autobiographical memory associated with the VE. State anxiety, presence ratings, and electrodermal activity (EDA) were collected throughout the experiment. Results Utilizing a Bonferroni-corrected Linear Mixed Model, our results showed statistically significant relationships between olfactory stimuli and presence as assessed by both the Igroup Presence Questionnaire (IPQ; R2 = 0.85, F(3,52) = 6.625, p = 0.0007) and a single-item visual-analogue scale (R2 = 0.85, F(3,52) = 5.382, p = 0.0027). State anxiety was unaffected by the presence or absence of olfactory cues. EDA was unaffected by experimental condition. Conclusion Olfactory stimuli increase presence in virtual environments that approximate those typical of exposure therapy, but did not increase EDA. Additionally, once administered, the removal of scents resulted in a disproportionate decrease in presence. Implications for incorporating the use of scents to increase the efficacy of exposure therapy are discussed. PMID:27310253
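The Bonferroni correction reported in this abstract simply compares each of the m simultaneous tests against a threshold of alpha/m (equivalently, multiplies each p-value by m). A minimal sketch of the procedure follows; the third p-value, standing in for the non-significant state-anxiety test, is a hypothetical illustration, not a value from the study:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for m simultaneous tests:
    each raw p-value is compared against alpha / m, and the
    adjusted p-value is min(1, p * m)."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]   # adjusted p-values
    rejected = [p < alpha / m for p in p_values]     # per-test decisions
    return adjusted, rejected

# The two reported presence p-values plus a hypothetical non-significant one
adjusted, rejected = bonferroni([0.0007, 0.0027, 0.40])
print(rejected)  # → [True, True, False]
```

Both presence measures survive the corrected threshold of 0.05/3 ≈ 0.0167, which is consistent with the significance claims in the abstract.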
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact, and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile, Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverses this precept to enhance dynamic media landscapes and immersive physical display environments, enabling intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering, and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing, and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high-resolution video playback, 3D visualization, screencasting, and image, video, and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization, and telematic production.
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.
Analysis of Mental Workload in Online Shopping: Are Augmented and Virtual Reality Consistent?
Zhao, Xiaojun; Shi, Changxiu; You, Xuqun; Zong, Chenming
2017-01-01
A market research company (Nielsen) reported that consumers in the Asia-Pacific region have become the most active group in online shopping. Focusing on augmented reality (AR), which is one of three major techniques used to change the method of shopping in the future, this study used a mixed design to discuss the influences of the method of online shopping, user gender, cognitive style, product value, and sensory channel on mental workload in virtual reality (VR) and AR situations. The results showed that males' mental workloads were significantly higher than females'. For males, high-value products' mental workload was significantly higher than that of low-value products. In the VR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, but the difference was reduced under audio-visual conditions. In the AR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, but the difference increased under audio-visual conditions. This study provided a psychological investigation of online shopping with AR and VR technology, with an eye to future applications. Based on the perspective of embodied cognition, AR online shopping may be a potential focus of research and market application. For the future design of online shopping platforms and the updating of user experience, this study provides a reference. PMID:28184207
Borrel, Alexandre; Fourches, Denis
2017-12-01
There is a growing interest in the broad use of Augmented Reality (AR) and Virtual Reality (VR) in the fields of bioinformatics and cheminformatics to visualize complex biological and chemical structures. AR and VR technologies allow for stunning and immersive experiences, offering untapped opportunities for both research and education purposes. However, preparing 3D models ready to use for AR and VR is time-consuming and requires a technical expertise that severely limits the development of new contents of potential interest for structural biologists, medicinal chemists, molecular modellers and teachers. Herein we present the RealityConvert software tool and associated website, which allow users to easily convert molecular objects to high quality 3D models directly compatible for AR and VR applications. For chemical structures, in addition to the 3D model generation, RealityConvert also generates image trackers, useful to universally call and anchor that particular 3D model when used in AR applications. The ultimate goal of RealityConvert is to facilitate and boost the development and accessibility of AR and VR contents for bioinformatics and cheminformatics applications. Availability and implementation: http://www.realityconvert.com. Contact: dfourch@ncsu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
NASA Astrophysics Data System (ADS)
Engelhardt, Sandy; Kolb, Silvio; De Simone, Raffaele; Karck, Matthias; Meinzer, Hans-Peter; Wolf, Ivo
2016-03-01
Mitral valve annuloplasty describes a surgical procedure where an artificial prosthesis is sutured onto the anatomical structure of the mitral annulus to re-establish the valve's functionality. Choosing an appropriate commercially available ring size and shape is a difficult decision the surgeon has to make intraoperatively according to his experience. In our augmented-reality framework, digitalized ring models are superimposed onto endoscopic image streams without using any additional hardware. To place the ring model at the proper position within the endoscopic image plane, a pose estimation is performed that depends on the localization of sutures placed by the surgeon around the leaflet origins and punctured through the stiffer structure of the annulus. In this work, the tissue penetration points are tracked by the real-time capable Lucas-Kanade optical flow algorithm. The accuracy and robustness of this tracking algorithm is investigated with respect to whether outliers influence the subsequent pose estimation. Our results suggest that optical flow is very stable for a variety of different endoscopic scenes and tracking errors do not affect the position of the superimposed virtual objects in the scene, making this approach a viable candidate for annuloplasty augmented reality-enhanced decision support.
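The Lucas-Kanade tracker referenced above estimates, for each tracked penetration point, the displacement that best explains local image change, by solving a 2x2 least-squares system built from spatial and temporal gradients over a window. A minimal, self-contained sketch on a synthetic smooth image patch (the function names and the quadratic test surface are illustrative, not from the paper; production systems use iterative pyramidal variants such as OpenCV's `calcOpticalFlowPyrLK`):

```python
def intensity(x, y):
    # synthetic smooth image patch (quadratic, so central differences are exact)
    return 0.5 * x * x + 0.3 * x * y + 0.2 * y * y

def lucas_kanade_step(frame1, frame2, window):
    # accumulate the structure tensor and gradient-time correlations
    sxx = sxy = syy = sxt = syt = 0.0
    for x in window:
        for y in window:
            ix = (frame1(x + 1, y) - frame1(x - 1, y)) / 2.0  # dI/dx
            iy = (frame1(x, y + 1) - frame1(x, y - 1)) / 2.0  # dI/dy
            it = frame2(x, y) - frame1(x, y)                  # dI/dt
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    # solve [sxx sxy; sxy syy] [u v]^T = -[sxt syt]^T
    det = sxx * syy - sxy * sxy
    u = (-sxt * syy + syt * sxy) / det
    v = (-syt * sxx + sxt * sxy) / det
    return u, v

# frame2 is frame1 translated by (0.3, -0.2); the solver recovers that shift
frame2 = lambda x, y: intensity(x - 0.3, y + 0.2)
u, v = lucas_kanade_step(intensity, frame2, range(-2, 3))
```

Because the test surface is quadratic and the window is symmetric, the single step recovers the shift exactly; on real endoscopic imagery the step is iterated across pyramid levels.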
Measuring the Usability of Augmented Reality e-Learning Systems: A User-Centered Evaluation Approach
NASA Astrophysics Data System (ADS)
Pribeanu, Costin; Balog, Alexandru; Iordache, Dragoş Daniel
The development of Augmented Reality (AR) systems is creating new challenges and opportunities for the designers of e-learning systems. The mix of real and virtual requires appropriate interaction techniques that have to be evaluated with users in order to avoid usability problems. Formative usability aims at finding usability problems as early as possible in the development life cycle and is suitable to support the development of such novel interactive systems. This work presents an approach to the user-centered usability evaluation of an e-learning scenario for Biology developed on an Augmented Reality educational platform. The evaluation has been carried on during and after a summer school held within the ARiSE research project. The basic idea was to perform usability evaluation twice. In this respect, we conducted user testing with a small number of students during the summer school in order to get a fast feedback from users having good knowledge in Biology. Then, we repeated the user testing in different conditions and with a relatively larger number of representative users. In this paper we describe both experiments and compare the usability evaluation results.
Alaraj, Ali; Charbel, Fady T; Birk, Daniel; Tobin, Matthew; Luciano, Cristian; Banerjee, Pat P; Rizzi, Silvio; Sorenson, Jeff; Foley, Kevin; Slavin, Konstantin; Roitberg, Ben
2013-01-01
Recent studies have shown that mental script-based rehearsal and simulation-based training improve the transfer of surgical skills in various medical disciplines. Despite significant advances in technology and intraoperative techniques over the last several decades, surgical skills training on neurosurgical operations still carries significant risk of serious morbidity or mortality. Potentially avoidable technical errors are well recognized as contributing to poor surgical outcome. Surgical education is undergoing overwhelming change, as a result of the reduction of work hours and current trends focusing on patient safety and linking reimbursement with clinical outcomes. Thus, there is a need for adjunctive means of neurosurgical training; recent advances in simulation technology offer one such means. ImmersiveTouch is an augmented reality system that integrates a haptic device and a high-resolution stereoscopic display. This simulation platform uses multiple sensory modalities, re-creating many of the environmental cues experienced during an actual procedure. Modules available include ventriculostomy, bone drilling, percutaneous trigeminal rhizotomy, and simulated spinal modules such as pedicle screw placement, vertebroplasty, and lumbar puncture. We present our experience with the development of such augmented reality neurosurgical modules and the feedback from neurosurgical residents.
Augmented Virtual Reality Laboratory
NASA Technical Reports Server (NTRS)
Tully-Hanson, Benjamin
2015-01-01
Until recently, real-time motion tracking hardware was for the most part cost-prohibitive for research use. With the release of the Microsoft Kinect in November 2010, researchers now have access to a device that for a few hundred dollars is capable of providing red-green-blue (RGB), depth, and skeleton data. It is also capable of tracking multiple people in real time. For its originally intended purposes, i.e. gaming with the Xbox 360 and eventually the Xbox One, it performs quite well. However, researchers soon found that although the sensor is versatile, it has limitations in real-world applications. I was brought aboard this summer by William Little in the Augmented Virtual Reality (AVR) Lab at Kennedy Space Center to find solutions to these limitations.
Habanapp: Havana's Architectural Heritage a Click Away
NASA Astrophysics Data System (ADS)
Morganti, C.; Bartolomei, C.
2018-05-01
The research applies technologies related to augmented and virtual reality to the architectural and historical context of the city of Havana, Cuba, on the basis of historical studies and range-imaging techniques applied to the buildings bordering the old city's five main squares. The specific aim is to transfer all of the data gathered through the most recent mobile apps for Augmented Reality (AR) and Virtual Reality (VR) into an innovative app, the first of its kind in Cuba. The "Oficina del Historiador de la ciudad de La Habana", the institution supervising architectural and cultural assets in Cuba, is widely interested in the topic as a means to develop a new educational, cultural and artistic tool to be used both online and offline.
A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.
Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis
2018-03-01
Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results illustrated that our method can generate visually credible AR scenes with consistent shadows rendered from recovered illumination.
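The Sun-position component of such a pipeline follows standard solar-geometry formulas. A minimal sketch using the common cosine approximation for declination and the hour angle from local solar time (illustrative only; the paper's implementation also fuses gyroscope, compass and accelerometer readings, and the function name here is an assumption):

```python
import math

def solar_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth in degrees.

    Uses a cosine fit for declination and local solar time for the hour
    angle; accurate to a degree or two, which is ample for shadow casting.
    """
    lat = math.radians(lat_deg)
    # approximate solar declination for the day of year
    dec = math.radians(-23.44 * math.cos(
        math.radians(360.0 / 365.0 * (day_of_year + 10))))
    # hour angle: 15 degrees per hour away from solar noon
    h = math.radians(15.0 * (solar_hour - 12.0))
    # elevation above the horizon
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(h))
    alt = math.degrees(math.asin(sin_alt))
    # azimuth measured clockwise from north
    az = math.degrees(math.atan2(
        math.sin(h),
        math.cos(h) * math.sin(lat) - math.tan(dec) * math.cos(lat)))
    return alt, (az + 180.0) % 360.0
```

For example, at 45° N around the June solstice this places the noon Sun roughly 68° above the horizon, due south; the returned direction then drives the shadow projection.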
CSI, optimal control, and accelerometers: Trials and tribulations
NASA Technical Reports Server (NTRS)
Benjamin, Brian J.; Sesak, John R.
1994-01-01
New results concerning optimal design with accelerometers are presented. These results show that the designer must be concerned with the stability properties of two Linear Quadratic Gaussian (LQG) compensators, one of which does not explicitly appear in the closed-loop system dynamics. The new concepts of virtual and implemented compensators are introduced to cope with these subtleties. The virtual compensator appears in the closed-loop system dynamics and the implemented compensator appears in control electronics. The stability of one compensator does not guarantee the stability of the other. For strongly stable (robust) systems, both compensators should be stable. The presence of controlled and uncontrolled modes in the system results in two additional forms of the compensator with corresponding terms that are of like form, but opposite sign, making simultaneous stabilization of both the virtual and implemented compensator difficult. A new design algorithm termed sensor augmentation is developed that aids stabilization of these compensator forms by incorporating a static augmentation term associated with the uncontrolled modes in the design process.
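For readers unfamiliar with the objects involved, the standard LQG output-feedback compensator that both the virtual and implemented forms specialize can be written in generic notation (plant matrices $A$, $B$, $C$, regulator gain $K$, estimator gain $L$; the accelerometer-specific forms from the paper are not reproduced here):

```latex
\dot{\hat{x}} = (A - BK - LC)\,\hat{x} + L\,y, \qquad u = -K\hat{x}
```

The subtlety the abstract draws on is generic to LQG design: closed-loop stability of the plant-compensator pair does not imply that the compensator itself, a system with state matrix $A - BK - LC$, is stable.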
Augmented reality visualization of deformable tubular structures for surgical simulation.
Ferrari, Vincenzo; Viglialoro, Rosanna Maria; Nicoli, Paola; Cutolo, Fabrizio; Condino, Sara; Carbone, Marina; Siesto, Mentore; Ferrari, Mauro
2016-06-01
Surgical simulation based on augmented reality (AR), mixing the benefits of physical and virtual simulation, represents a step forward in surgical training. However, available systems are unable to update the virtual anatomy following deformations impressed on actual anatomy. A proof-of-concept solution is described providing AR visualization of hidden deformable tubular structures using nitinol tubes sensorized with electromagnetic sensors. This system was tested in vitro on a setup comprised of sensorized cystic, left and right hepatic, and proper hepatic arteries. In the trial session, the surgeon deformed the tubular structures with surgical forceps in 10 positions. The mean, standard deviation, and maximum misalignment between virtual and real arteries were 0.35, 0.22, and 0.99 mm, respectively. The alignment accuracy obtained demonstrates the feasibility of the approach, which can be adopted in advanced AR simulations, in particular as an aid to the identification and isolation of tubular structures. Copyright © 2015 John Wiley & Sons, Ltd.
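The misalignment figures reported above (mean, standard deviation, maximum) reduce to Euclidean distances between corresponding virtual and real points. A minimal sketch of that summary (the function name is illustrative; population standard deviation is assumed here, since the paper does not state which estimator it used):

```python
import math

def alignment_errors(virtual_pts, real_pts):
    # Euclidean misalignment for each corresponding 3-D point pair
    d = [math.dist(a, b) for a, b in zip(virtual_pts, real_pts)]
    mean = sum(d) / len(d)
    # population variance of the per-point errors
    var = sum((x - mean) ** 2 for x in d) / len(d)
    return mean, math.sqrt(var), max(d)
```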
Realistic Reflections for Marine Environments in Augmented Reality Training Systems
2009-09-01
Fragmentary excerpts only: figure captions describing static backgrounds (Agua and Blue) and the ship textures used to generate reflections; like virtual simulations, augmented reality trainers can be configured to meet specific training needs and can be restarted and reused for training; many of the methods outlined for the Full Reflection shader (wave distortion, blurring and shadow) were reused for the Physics shader.
Using Augmented Reality and Virtual Environments in Historic Places to Scaffold Historical Empathy
ERIC Educational Resources Information Center
Sweeney, Sara K.; Newbill, Phyllis; Ogle, Todd; Terry, Krista
2018-01-01
The authors explore how 3D visualizations of historical sites can be used as pedagogical tools to support historical empathy. They provide three visualizations created by a team at Virginia Tech as examples. They discuss virtual environments and how the digital restoration process is applied. They also define historical empathy, explain why it is…
Li, Liang; Yang, Jian; Chu, Yakui; Wu, Wenbo; Xue, Jin; Liang, Ping; Chen, Lei
2016-01-01
Objective To verify the reliability and clinical feasibility of a self-developed navigation system based on an augmented reality technique for endoscopic sinus and skull base surgery. Materials and Methods In this study we performed a head phantom and cadaver experiment to determine the display effect and accuracy of our navigational system. We compared cadaver head-based simulated operations, the target registration error, operation time, and National Aeronautics and Space Administration Task Load Index scores of our navigation system to conventional navigation systems. Results The navigation system developed in this study has a novel display mode capable of fusing endoscopic images to three-dimensional (3-D) virtual images. In the cadaver head experiment, the target registration error was 1.28 ± 0.45 mm, which met the accepted standards of a navigation system used for nasal endoscopic surgery. Compared with conventional navigation systems, the new system was more effective in terms of operation time and the mental workload of surgeons, which is especially important for less experienced surgeons. Conclusion The self-developed augmented reality navigation system for endoscopic sinus and skull base surgery appears to have advantages that outweigh those of conventional navigation systems. We conclude that this navigational system will provide rhinologists with more intuitive and more detailed imaging information, thus reducing the judgment time and mental workload of surgeons when performing complex sinus and skull base surgeries. Ultimately, this new navigational system has potential to increase the quality of surgeries. In addition, the augmented reality navigational system could be of interest to junior doctors being trained in endoscopic techniques because it could speed up their learning. However, it should be noted that the navigation system serves as an adjunct to a surgeon’s skills and knowledge, not as a substitute. PMID:26757365
Nifakos, Sokratis; Tomson, Tanja; Zary, Nabil
2014-01-01
Introduction. Antimicrobial resistance is a global health issue. Studies have shown that improved antibiotic prescription education among healthcare professionals reduces mistakes during the antibiotic prescription process. The aim of this study was to investigate novel educational approaches that through the use of Augmented Reality technology could make use of the real physical context and thereby enrich the educational process of antibiotics prescription. The objective is to investigate which type of information related to antibiotics could be used in an augmented reality application for antibiotics education. Methods. This study followed the Design-Based Research Methodology composed of the following main steps: problem analysis, investigation of information that should be visualized for the training session, and finally the involvement of the end users the development and evaluation processes of the prototype. Results. Two of the most important aspects in the antibiotic prescription process, to represent in an augmented reality application, are the antibiotic guidelines and the side effects. Moreover, this study showed how this information could be visualized from a mobile device using an Augmented Reality scanner and antibiotic drug boxes as markers. Discussion. In this study we investigated the usage of objects from a real physical context such as drug boxes and how they could be used as educational resources. The logical next steps are to examine how this approach of combining physical and virtual contexts through Augmented Reality applications could contribute to the improvement of competencies among healthcare professionals and its impact on the decrease of antibiotics resistance. PMID:25548733
Do pain-associated contexts increase pain sensitivity? An investigation using virtual reality.
Harvie, Daniel S; Sterling, Michele; Smith, Ashley D
2018-04-30
Pain is not a linear result of nociception, but is dependent on multisensory inputs, psychological factors, and prior experience. Since nociceptive models appear insufficient to explain chronic pain, understanding non-nociceptive contributors is imperative. Several recent models propose that cues associatively linked to painful events might acquire the capacity to augment, or even cause, pain. This experiment aimed to determine whether contexts associated with pain, could modulate mechanical pain thresholds and pain intensity. Forty-eight healthy participants underwent a contextual conditioning procedure, where three neutral virtual reality contexts were paired with either unpredictable noxious stimulation, unpredictable vibrotactile stimulation, or no stimulation. Following the conditioning procedure, mechanical pain thresholds and pain evoked by a test stimulus were examined in each context. In the test phase, the effect of expectancy was equalised across conditions by informing participants when thresholds and painful stimuli would be presented. Contrary to our hypothesis, scenes that were associated with noxious stimulation did not increase mechanical sensitivity (p=0.08), or increase pain intensity (p=0.46). However, an interaction with sex highlighted the possibility that pain-associated contexts may alter pain sensitivity in females but not males (p=0.03). Overall, our data does not support the idea that pain-associated contexts can alter pain sensitivity in healthy asymptomatic individuals. That an effect was shown in females highlights the possibility that some subgroups may be susceptible to such an effect, although the magnitude of the effect may lack real-world significance. If pain-associated cues prove to have a relevant pain augmenting effect, in some subgroups, procedures aimed at extinguishing pain-related associations may have therapeutic potential.
NASA Astrophysics Data System (ADS)
McMullen, Kyla A.
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources.
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study made initial footsteps guiding the design of virtual auditory environments that support spatial configuration recall.
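Rendering a source at a virtual location with a selected HRTF comes down, per ear, to FIR convolution of the mono signal with the head-related impulse response (HRIR) for that direction. A toy sketch with short HRIRs (illustrative only; measured HRIRs run to hundreds of taps, and real-time systems use FFT-based convolution):

```python
def convolve(signal, hrir):
    # direct-form FIR convolution of a mono signal with one ear's HRIR
    out = [0.0] * (len(signal) + len(hrir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(hrir):
            out[i + j] += s * h
    return out

def binaural_render(mono, hrir_left, hrir_right):
    # spatialized stereo pair: each ear filtered by its own HRIR
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# a unit impulse reproduces each HRIR exactly (a quick sanity check)
left, right = binaural_render([1.0, 0.0], [0.6, 0.3], [0.2, 0.5, 0.1])
```

Interaural level and time differences emerge from the differing left and right HRIRs, which is what lets listeners localize the virtual source.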
Applying Augmented Reality in practical classes for engineering students
NASA Astrophysics Data System (ADS)
Bazarov, S. E.; Kholodilin, I. Yu; Nesterov, A. S.; Sokhina, A. V.
2017-10-01
In this article an Augmented Reality application for teaching engineering students of electrical and technological specialties is introduced. In order to increase students' motivation for learning and their independence, new practical guidelines incorporating Augmented Reality were developed for use in practical classes. During application development, the authors used software such as Unity 3D and Vuforia. The Augmented Reality content consists of 3D models, images and animations, which are superimposed on real objects, helping students to study specific tasks. A user who has a smartphone, a tablet PC, or Augmented Reality glasses can visualize on-screen virtual objects added to a real environment. To analyze the current situation in higher education, namely the learners' interest in studying, their satisfaction with the educational process, and the impact of the Augmented Reality application on students, a questionnaire was developed and offered to students; the study involved 24 learners.
1997-11-15
The Isothermal Dendritic Growth Experiment (IDGE), flown on three Space Shuttle missions, is yielding new insights into virtually all industrially relevant metal and alloy forming operations. IDGE used transparent organic liquids that form dendrites (treelike structures) similar to those inside metal alloys. Comparing Earth-based and space-based dendrite growth velocity, tip size and shape provides a better understanding of the fundamentals of dendritic growth, including gravity's effects. Shadowgraphic images of pivalic acid (PVA) dendrites forming from the melt show the subtle but distinct effects of gravity-driven heat convection on dendritic growth. In orbit, the dendrite grows as its latent heat is liberated by heat conduction. This yields a blunt dendrite tip. On Earth, heat is carried away by both conduction and gravity-driven convection. This yields a sharper dendrite tip. In addition, under terrestrial conditions, the sidebranches growing in the direction of gravity are augmented as gravity helps carry heat out of the way of the growing sidebranches, as opposed to microgravity conditions where no augmentation takes place. IDGE was developed by Rensselaer Polytechnic Institute and NASA/Glenn Research Center. Advanced follow-on experiments are being developed for flight on the International Space Station. Photo Credit: NASA/Glenn Research Center
Augmented reality on poster presentations, in the field and in the classroom
NASA Astrophysics Data System (ADS)
Hawemann, Friedrich; Kolawole, Folarin
2017-04-01
Augmented reality (AR) is the direct addition of virtual information through an interface to a real-world environment. In practice, through a mobile device such as a tablet or smartphone, information can be projected onto a target, for example an image on a poster. Mobile devices are so widely distributed today that augmented reality is easily accessible to almost everyone. Numerous studies have shown that multi-dimensional visualization is essential for efficient perception of the spatial, temporal and geometrical configuration of geological structures and processes. Print media, such as posters and handouts, lack the ability to display content in the third and fourth dimensions, which might be in the space domain, as seen in three-dimensional (3-D) objects, or the time domain (four-dimensional, 4-D), expressible in the form of videos. Here, we show that augmented reality content can be complementary to geoscience poster presentations, hands-on material and fieldwork. In the latter case, location-based data is loaded and, for example, a virtual geological profile can be draped over a real-world landscape. In object-based AR, the application is trained to recognize an image or object through the camera of the user's mobile device, such that specific content is automatically downloaded and displayed on the screen of the device, and positioned relative to the trained image or object. We used ZapWorks, a commercially available software application, to create and present examples of poster-based content in which important supplementary information is presented as interactive virtual images, videos and 3-D models. We suggest that the flexibility and real-time interactivity offered by AR make it an invaluable tool for effective geoscience poster presentation, classroom learning and field geoscience learning.
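In object-based AR of the kind described, once the trained image is detected the toolkit typically recovers a planar homography from target coordinates to screen coordinates, and virtual content is drawn at the mapped points. A toy sketch of that final mapping step (a generic formulation, not ZapWorks' actual API; the function name is an assumption):

```python
def apply_homography(H, pts):
    # map 2-D points through a 3x3 homography given as row-major nested lists
    out = []
    for x, y in pts:
        # homogeneous transform followed by perspective division
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out
```

With a pure-translation homography the mapping reduces to an offset; in practice the matrix estimated from the detected target also encodes rotation and perspective.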
Augmented reality for anatomical education.
Thomas, Rhys Gethin; John, Nigel William; Delieu, John Michael
2010-03-01
The use of Virtual Environments has been widely reported as a method of teaching anatomy. Generally, such environments only convey the shape of the anatomy to the student. We present the Bangor Augmented Reality Education Tool for Anatomy (BARETA), a system that combines Augmented Reality (AR) technology with models produced using Rapid Prototyping (RP) technology, to provide the student with stimulation for touch as well as sight. The principal aims of this work were to provide an interface more intuitive than a mouse and keyboard, and to evaluate such a system as a viable supplement to traditional cadaver-based education.
Advanced helmet mounted display (AHMD)
NASA Astrophysics Data System (ADS)
Sisodia, Ashok; Bayer, Michael; Townley-Smith, Paul; Nash, Brian; Little, Jay; Cassarly, William; Gupta, Anurag
2007-04-01
Due to significantly increased U.S. military involvement in deterrent, observer, security, peacekeeping, and combat roles around the world, the military expects significant future growth in the demand for deployable virtual reality trainers with networked simulation capability for the battle-space visualization process. The use of HMD technology in simulated virtual environments has been driven by the demand for more effective training tools. The AHMD overlays computer-generated data (symbology, synthetic imagery, enhanced imagery) onto the actual and simulated visible environment. The AHMD can be used to support deployable reconfigurable training solutions as well as traditional simulation requirements, UAV augmented reality, air traffic control, and Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) applications. This paper describes the design improvements implemented for production of the AHMD system.
NASA Astrophysics Data System (ADS)
Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng
2010-08-01
In recent years, Augmented Reality (AR) [1][2][3] has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment, and the arts. AR enhances the display output of a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering, and remote robot control. AR has many advantages over VR. The system developed here combines sensors, software, and imaging algorithms to make what users see feel real, actual, and existing. The imaging algorithms include a gray-level method, an image binarization method, and a white balance method, in order to make image recognition accurate and overcome the effects of lighting.
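The abstract names three imaging steps: gray-level conversion, binarization, and white balance. The paper's exact algorithms are not specified, so the following is a minimal pure-Python sketch of generic versions of these steps; the luma weights, threshold, and gray-world assumption are standard defaults, not values from the cited system.

```python
def to_gray(rgb_pixels):
    """Gray-level conversion using luma weights (ITU-R BT.601)."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]

def binarize(gray, threshold=128):
    """Image binarization: 1 for foreground, 0 for background."""
    return [1 if v >= threshold else 0 for v in gray]

def gray_world_white_balance(rgb_pixels):
    """Gray-world white balance: scale each channel so its mean
    equals the global mean brightness, correcting a color cast."""
    n = len(rgb_pixels)
    means = [sum(p[c] for p in rgb_pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in rgb_pixels]
```

In a recognition pipeline such as the one described, white balance would run first to stabilize colors under varying light, followed by gray-level conversion and binarization to isolate marker shapes.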
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component, and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.
Wright, W Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes are discussed.
Mirelman, Anat; Rochester, Lynn; Reelick, Miriam; Nieuwhof, Freek; Pelosin, Elisa; Abbruzzese, Giovanni; Dockx, Kim; Nieuwboer, Alice; Hausdorff, Jeffrey M
2013-02-06
Recent work has demonstrated that fall risk can be attributed to cognitive as well as motor deficits. Indeed, everyday walking in complex environments utilizes executive function, dual tasking, planning and scanning, all while walking forward. Pilot studies suggest that a multi-modal intervention that combines treadmill training to target motor function and a virtual reality obstacle course to address the cognitive components of fall risk may be used to successfully address the motor-cognitive interactions that are fundamental for fall risk reduction. The proposed randomized controlled trial will evaluate the effects of treadmill training augmented with virtual reality on fall risk. Three hundred older adults with a history of falls will be recruited to participate in this study. This will include older adults (n=100), patients with mild cognitive impairment (n=100), and patients with Parkinson's disease (n=100). These three sub-groups will be recruited in order to evaluate the effects of the intervention in people with a range of motor and cognitive deficits. Subjects will be randomly assigned to the intervention group (treadmill training with virtual reality) or to the active-control group (treadmill training without virtual reality). Each person will participate in a training program set in an outpatient setting 3 times per week for 6 weeks. Assessments will take place before, after, and 1 month and 6 months after the completion of the training. A falls calendar will be kept by each participant for 6 months after completing the training to assess fall incidence (i.e., the number of falls, multiple falls and falls rate). In addition, we will measure gait under usual and dual task conditions, balance, community mobility, health related quality of life, user satisfaction and cognitive function. 
This randomized controlled trial will demonstrate the extent to which an intervention that combines treadmill training augmented by virtual reality reduces fall risk, improves mobility and enhances cognitive function in a diverse group of older adults. In addition, the comparison to an active control group that undergoes treadmill training without virtual reality will provide evidence as to the added value of addressing motor cognitive interactions as an integrated unit. Trial registration: NCT01732653 (NIH).
Tools virtualization for command and control systems
NASA Astrophysics Data System (ADS)
Piszczek, Marek; Maciejewski, Marcin; Pomianek, Mateusz; Szustakowski, Mieczysław
2017-10-01
Information management is an inseparable part of the command process. As a result, the person making decisions at the command post interacts with data-providing devices in various ways. The tools-virtualization process can introduce a number of significant modifications in the design of solutions for management and command. The general idea involves replacing the user interfaces of physical devices with their digital representations (so-called virtual instruments). A more advanced level of system "digitalization" is to use mixed-reality environments. In solutions using augmented reality (AR), a customized HMI is displayed to the operator as he approaches each device. Devices are identified by image recognition of photo codes. Visualization is achieved by an (optical) see-through head-mounted display (HMD), and control can be performed, for example, by means of a handheld touch panel. Using an immersive virtual environment, the command center can be digitally reconstructed; a workstation then requires only a VR system (HMD) and access to the information network, and the operator can interact with devices just as he would in the real world (for example, with virtual hands). Because of procedures such as central-vision analysis and eye tracking, MR systems offer another useful feature: reduced requirements for system data throughput, since at any moment rendering can focus on a single device. Experiments carried out with the Moverio BT-200 and SteamVR systems, and the results of experimental application testing, clearly indicate the ability to create a fully functional information system with the use of mixed-reality technology.
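The proximity-triggered HMI described above (a customized panel appears when the operator approaches a device identified by its photo code) can be sketched as a simple dispatch rule. The panel registry, code IDs, and range threshold below are hypothetical illustrations, not details of the cited system:

```python
import math

# Hypothetical registry: decoded photo-code ID -> virtual instrument panel.
HMI_PANELS = {
    "radio-01": "RadioControlPanel",
    "radar-02": "RadarStatusPanel",
    "map-03": "TacticalMapPanel",
}

def visible_panel(decoded_code, operator_pos, device_pos, max_range=1.5):
    """Return the HMI panel for a recognized photo code when the operator
    is within max_range metres of the device; otherwise None (panel hidden)."""
    dist = math.dist(operator_pos, device_pos)
    if decoded_code in HMI_PANELS and dist <= max_range:
        return HMI_PANELS[decoded_code]
    return None
```

In a full system the photo-code decoding would come from the HMD camera and the positions from head tracking; this sketch only captures the show/hide decision.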
Fiia: A Model-Based Approach to Engineering Collaborative Augmented Reality
NASA Astrophysics Data System (ADS)
Wolfe, Christopher; Smith, J. David; Phillips, W. Greg; Graham, T. C. Nicholas
Augmented reality systems often involve collaboration among groups of people. While there are numerous toolkits that aid the development of such augmented reality groupware systems (e.g., ARToolkit and Groupkit), there remains an enormous gap between the specification of an AR groupware application and its implementation. In this chapter, we present Fiia, a toolkit which simplifies the development of collaborative AR applications. Developers specify the structure of their applications using the Fiia modeling language, which abstracts details of networking and provides high-level support for specifying adapters between the physical and virtual world. The Fiia.Net runtime system then maps this conceptual model to a runtime implementation. We illustrate Fiia via Raptor, an augmented reality application used to help small groups collaboratively prototype video games.
A methodology to emulate and evaluate a productive virtual workstation
NASA Technical Reports Server (NTRS)
Krubsack, David; Haberman, David
1992-01-01
The Advanced Display and Computer Augmented Control (ADCACS) Program at ACT is sponsored by NASA Ames to investigate the broad field of technologies which must be combined to design a 'virtual' workstation for the Space Station Freedom. This program has progressed in several areas and has resulted in the definition of requirements for a workstation. A unique combination of technologies at the ACT Laboratory has been networked to effectively create an experimental environment. This experimental environment allows the integration of nonconventional input devices with a high-power graphics engine within the framework of an expert system shell which coordinates the heterogeneous inputs with the 'virtual' presentation. The flexibility of the workstation evolves as experiments are designed and conducted to evaluate the condition descriptions and rule sets of the expert system shell and its effectiveness in driving the graphics engine. Workstation productivity has been defined by the performance achievable in the emulator: the calibrated 'sensitivity' of input devices, the graphics presentation, the possible optical enhancements to achieve a wide-field-of-view color image, and the flexibility of the conditional descriptions in the expert system shell in adapting to prototype problems.
Virtual reality and the unfolding of higher dimensions
NASA Astrophysics Data System (ADS)
Aguilera, Julieta C.
2006-02-01
As virtual/augmented reality evolves, the need for spaces that are responsive to structures independent of three-dimensional spatial constraints becomes apparent. The visual medium of computer graphics may also challenge these self-imposed constraints. If one can get used to how projections affect 3D objects in two dimensions, it may also be possible to compose a situation in which to get used to the variations that occur while moving through higher dimensions. The presented application is an enveloping landscape of concave and convex forms, which are determined by the orientation and displacement of the user in relation to a grid made of tesseracts (cubes in four dimensions). The interface accepts input from three-dimensional and four-dimensional transformations and smoothly displays such interactions in real time. The motion of the user becomes the graphic element, whereas the higher-dimensional grid reflects his or her position relative to it. The user learns how motion inputs affect the grid, recognizing a correlation between the input and the transformations. Mapping information to complex grids in virtual reality is valuable for engineers, artists, and users in general because navigation can be internalized like a dance pattern, further engaging us to maneuver through space in order to know and experience it.
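To make the four-dimensional grid concrete: a tesseract has 16 vertices, a 4D rotation acts in a plane (here the x-w plane), and a perspective projection maps each 4D point into 3D for display. A minimal sketch under assumed parameters (unit tesseract, 4D camera at w = 2), not the application's actual code:

```python
import itertools
import math

def tesseract_vertices():
    """The 16 vertices of a unit tesseract centred at the origin."""
    return list(itertools.product((-0.5, 0.5), repeat=4))

def rotate_xw(v, theta):
    """Rotate a 4D point by angle theta in the x-w plane."""
    x, y, z, w = v
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * w, y, z, s * x + c * w)

def project_to_3d(v, camera_w=2.0):
    """Perspective projection from 4D to 3D: points closer to the
    4D camera along w appear larger, analogous to 3D->2D projection."""
    x, y, z, w = v
    k = camera_w / (camera_w - w)
    return (x * k, y * k, z * k)
```

Animating theta while re-projecting every frame reproduces the characteristic "folding" appearance of a rotating tesseract that the user learns to correlate with their motion inputs.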
An augmented reality tool for learning spatial anatomy on mobile devices.
Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti
2017-09-01
Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool, focused on measuring the system's ability to accurately render digital content in the real world. We imported virtual surface models derived from Computed Tomography (CT) data into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy, as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT-derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive, 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.
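The dimensional-accuracy check described (comparing a physical cube with its AR-rendered counterpart) reduces to per-dimension percentage error. A small sketch of that comparison; the millimetre measurements below are hypothetical, not the study's data:

```python
def dimensional_errors(physical_mm, virtual_mm):
    """Per-dimension percentage error of AR-rendered measurements
    against a physical ground truth (paired width/height/depth)."""
    return [abs(v - p) / p * 100.0 for p, v in zip(physical_mm, virtual_mm)]

def mean_error(physical_mm, virtual_mm):
    """Mean absolute percentage error across all dimensions."""
    errs = dimensional_errors(physical_mm, virtual_mm)
    return sum(errs) / len(errs)

# Hypothetical example: a 50 mm physical cube vs. measured AR rendering.
errors = dimensional_errors([50.0, 50.0, 50.0], [50.5, 49.6, 50.2])
```

A low mean error across repeated viewpoints is what supports the paper's claim of a "high level of accuracy" for the rendered models.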
A virtual tour of geological heritage: Valourising geodiversity using Google Earth and QR code
NASA Astrophysics Data System (ADS)
Martínez-Graña, A. M.; Goy, J. L.; Cimarra, C. A.
2013-12-01
When making land-use plans, it is necessary to inventory and catalogue the geological heritage and geodiversity of a site in order to establish a conservation protection plan that meets the educational and social needs of society. New technologies make it possible to create virtual databases using virtual globes - e.g., Google Earth - and other personal-use geomatics applications (smartphones, tablets, PDAs) for accessing geological heritage information in "real time" for scientific, educational, and cultural purposes via a virtual geological itinerary. Seventeen mapped and georeferenced geosites have been created in Keyhole Markup Language (KML) for use in the map layers of geological itinerary stops for different applications. A virtual tour has been developed for Las Quilamas Natural Park, located in the Spanish Central System, using geological layers and topographic and digital terrain models that can be overlaid in a 3D model. The Google Earth application was used to import the geosite placemarks. For each geosite, a tab has been developed that shows a description of the geology with photographs and diagrams and evaluates its scientific, educational, and tourism quality. Augmented reality allows users to access these georeferenced thematic layers and overlay data, images, and graphics in real time on their mobile devices. These virtual tours can be incorporated into subject guides designed for the public. Seven educational and interpretive panels describing some of the geosites were designed and tagged with a QR code that can be printed at each stop or in the printed itinerary. These QR codes can be scanned with the camera found on most mobile devices, and video virtual tours can be viewed on those devices. The virtual tour of the geological heritage can be used to show tourists the geological history of the Las Quilamas Natural Park using new geomatics technologies (virtual globes, augmented reality, and QR codes).
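Since the geosites were georeferenced in Keyhole Markup Language, each itinerary stop boils down to a KML Placemark with a name, description, and lon/lat coordinates. A minimal sketch using Python's standard library; the site name and coordinates are hypothetical, not taken from the paper's inventory:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def geosite_placemark(name, lon, lat, description=""):
    """Build a KML Placemark element for one georeferenced geosite."""
    pm = ET.Element("Placemark")
    ET.SubElement(pm, "name").text = name
    ET.SubElement(pm, "description").text = description
    point = ET.SubElement(pm, "Point")
    # KML coordinate order is lon,lat[,alt]
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return pm

def kml_document(placemarks):
    """Wrap placemarks in a minimal KML document string for Google Earth."""
    kml = ET.Element("kml", xmlns=KML_NS)
    doc = ET.SubElement(kml, "Document")
    for pm in placemarks:
        doc.append(pm)
    return ET.tostring(kml, encoding="unicode")
```

The resulting string can be saved as a `.kml` file and opened directly in Google Earth, which is how such georeferenced itineraries are typically distributed.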
Hoermann, Simon; Ferreira Dos Santos, Luara; Morkisch, Nadine; Jettkowski, Katrin; Sillis, Moran; Devan, Hemakumar; Kanagasabai, Parimala S; Schmidt, Henning; Krüger, Jörg; Dohle, Christian; Regenbrecht, Holger; Hale, Leigh; Cutfield, Nicholas J
2017-07-01
New rehabilitation strategies for post-stroke upper limb rehabilitation employing visual stimulation show promising results; however, cost-efficient and clinically feasible ways to provide these interventions are still lacking. An integral step is to translate recent technological advances, such as those in virtual and augmented reality, into therapeutic practice to improve outcomes for patients. This requires research on the adaptation of the technology for clinical use as well as on the appropriate guidelines and protocols for sustainable integration into therapeutic routines. Here, we present and evaluate a novel and affordable augmented reality system (Augmented Reflection Technology, ART) in combination with a validated mirror therapy protocol for upper limb rehabilitation after stroke. We evaluated components of the therapeutic intervention, from the patients' and the therapists' points of view, in a clinical feasibility study at a rehabilitation centre. We also assessed the integration of ART as an adjunct therapy for the clinical rehabilitation of subacute patients at two different hospitals. The results showed that the combination and application of the Berlin Protocol for Mirror Therapy together with ART was feasible for clinical use. This combination was integrated into the therapeutic plan of subacute stroke patients at the two clinical locations where the second part of this research was conducted. Our findings pave the way for using technology to provide mirror therapy in clinical settings and show potential for more effective use of inpatient time and enhanced recoveries for patients. Implications for Rehabilitation: Computerised mirror therapy is feasible for clinical use. Augmented Reflection Technology can be integrated as an adjunctive therapeutic intervention for subacute stroke patients in an inpatient setting. Virtual rehabilitation devices such as Augmented Reflection Technology have considerable potential to enhance stroke rehabilitation.
Simulation-based evaluation of an in-vehicle smart situation awareness enhancement system.
Gregoriades, Andreas; Sutcliffe, Alistair
2018-07-01
Situation awareness (SA) is a critical factor in road safety, strongly related to accidents. This paper describes the evaluation of a proposed SA enhancement system (SAES) that exploits augmented reality through a head-up display (HUD). Two SAES designs were evaluated (information-rich vs. minimal information) using a custom-made simulator and the Situation Awareness Global Assessment Technique together with performance and EEG measures. The paper describes the process of assessing the SA of drivers using the SAES through a series of experiments with participants in a Cave Automatic Virtual Environment. The effectiveness of the SAES was tested in a within-group research design. The results showed that the information-rich (radar-style) display was superior to the minimal (arrow hazard indicator) design and that both SAES designs improved drivers' SA and performance compared to the control (no HUD) design. Practitioner Summary: Even though loss of driver situation awareness is considered one of the leading causes of road accidents, little has been done to enhance it. The current study demonstrates the positive effect of a proposed situation awareness enhancement system on driver situation awareness, through an experiment using virtual prototyping in a simulator.
The development, assessment and validation of virtual reality for human anatomy instruction
NASA Technical Reports Server (NTRS)
Marshall, Karen Benn
1996-01-01
This research project seeks to meet the objective of science training by developing, assessing, validating, and utilizing VR as a human anatomy training medium. Current anatomy instruction is primarily in the form of lectures and usage of textbooks. In ideal situations, anatomic models, computer-based instruction, and cadaver dissection are utilized to augment traditional methods of instruction. At many institutions, lack of financial resources limits anatomy instruction to textbooks and lectures. However, human anatomy is three-dimensional, unlike the two-dimensional depictions found in textbooks and on computer screens. Virtual reality allows one to step through the computer screen into a 3-D artificial world. The primary objective of this project is to produce a virtual reality application of the abdominopelvic region of a human cadaver that can be taken back to the classroom. The hypothesis is that an immersive learning environment affords quicker anatomic recognition and orientation and a greater level of retention in human anatomy instruction. The goal is to augment, not replace, traditional modes of instruction.
An augmented reality system validation for the treatment of cockroach phobia.
Bretón-López, Juani; Quero, Soledad; Botella, Cristina; García-Palacios, Azucena; Baños, Rosa Maria; Alcañiz, Mariano
2010-12-01
Augmented reality (AR) is a new technology in which various virtual elements are incorporated into the user's perception of the real world. The most significant aspect of AR is that the virtual elements add relevant and helpful information to the real scene. AR shares some important characteristics with virtual reality as applied in clinical psychology. However, AR offers additional features that might be crucial for treating certain problems. An AR system designed to treat insect phobia has been used for treating phobia of small animals, and positive preliminary data about the global efficacy of the system have been obtained. However, it is necessary to determine the capacity of similar AR systems and their elements that are designed to evoke anxiety in participants; this is achieved by testing the correspondence between the inclusion of feared stimuli and the induction of anxiety. The objective of the present work is to validate whether the stimuli included in the AR-Insect Phobia system are capable of inducing anxiety in six participants diagnosed with cockroach phobia. Results support the adequacy of each element of the system in inducing anxiety in all participants.
Interactive Anatomy-Augmented Virtual Simulation Training.
Aebersold, Michelle; Voepel-Lewis, Terri; Cherara, Leila; Weber, Monica; Khouri, Christina; Levine, Robert; Tait, Alan R
2018-02-01
Traditionally, clinical psychomotor skills are taught through videos and demonstration by faculty, which does not allow for the visualization of internal structures and anatomical landmarks that would enhance the learner's skill performance. Sophomore and junior nursing students attending a large Midwestern institution (N=69) participated in this mixed-methods study. Students demonstrated their ability to place a nasogastric tube (NGT) after being randomly assigned to usual training (control group) or an iPad anatomy-augmented virtual simulation training module (AR group). The ability of the participants to demonstrate competence in placing the NGT was assessed using a 17-item competency checklist. After the demonstration, students completed a survey to elicit information about their level of training, prior experience with NGT placement, satisfaction with the AR technology, and perceptions of AR as a potential teaching tool for clinical skills training. The ability to correctly place the NGT through all the checklist items was significantly greater in the AR group than in the control group (P = 0.011). Eighty-six percent of participants in the AR group rated AR as superior/far superior to other procedural training programs to which they had been exposed, whereas only 5.9% of participants in the control group rated the control program as superior/far superior (P < 0.001). Overall, the AR module was better received than the control program with regard to realism, identifying landmarks, visualization of internal organs, ease of use, usefulness, and promoting learning and understanding.
NASA Astrophysics Data System (ADS)
McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.
2017-12-01
Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial head-mounted displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application a virtual object is projected into the real world, with which researchers can interact. There are several limitations to purely VR or AR applications when taken within the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g. rocks, terrain, or other features) must be created offline from a multitude of images, using image processing techniques to generate the 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time; however, such rendered video frames lack 3D visual information, i.e. depth. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world, while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video presented in real time.
Note the preservation of the objects' shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purpose of taking screenshots.
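The depth information preserved by a stereoscopic feed comes from disparity: a point imaged d pixels apart in the left and right views lies at depth Z = f·B/d, for focal length f (in pixels) and camera baseline B (in metres). A minimal sketch of that relationship; the camera parameters are hypothetical, since the abstract does not publish the rig's values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (metres) of a point from its stereo disparity:
    Z = f * B / d, where f is the focal length in pixels, B the
    camera baseline in metres, and d the horizontal pixel disparity."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point effectively at infinity
    return focal_px * baseline_m / disparity_px

def disparity_map_to_depth(disparities, focal_px=700.0, baseline_m=0.12):
    """Convert a flat disparity map to per-pixel depths (hypothetical rig)."""
    return [depth_from_disparity(d, focal_px, baseline_m) for d in disparities]
```

This is why the blended stereoscopic frames retain depth while a single superimposed video frame does not: each pixel pair carries a disparity, and hence a recoverable Z.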
ProMIS augmented reality training of laparoscopic procedures face validity.
Botden, Sanne M B I; Buzink, Sonja N; Schijven, Marlies P; Jakimowicz, Jack J
2008-01-01
Conventional video trainers lack the ability to assess the trainee objectively, but offer modalities that are often missing in virtual reality simulation, such as realistic haptic feedback. The ProMIS augmented reality laparoscopic simulator retains the benefits of a traditional box trainer, by using original laparoscopic instruments and tactile tasks, while additionally generating objective measures of performance. Fifty-five participants performed a "basic skills" and a "suturing and knot-tying" task on ProMIS, after which they filled out a questionnaire regarding the realism, haptics, and didactic value of the simulator on a 5-point Likert scale. The participants were assigned to 2 experience groups: "experienced" (>50 procedures and >5 sutures; N = 27) and "moderately experienced" (<50 procedures and <5 sutures; N = 28). The general consensus among all participants, particularly the experienced, was that ProMIS is a useful tool for training (mean: 4.67, SD: 0.48). It was considered very realistic (mean: 4.44, SD: 0.66), with good haptics (mean: 4.10, SD: 0.97) and didactic value (mean: 4.10, SD: 0.65). This study established the face validity of the ProMIS augmented reality simulator for "basic skills" and "suturing and knot-tying" tasks. ProMIS was considered a good tool for training laparoscopic skills for surgical residents and surgeons.
Immersive Collaboration Simulations: Multi-User Virtual Environments and Augmented Realities
NASA Technical Reports Server (NTRS)
Dede, Chris
2008-01-01
Emerging information technologies are driving shifts in the knowledge and skills society values, the development of new methods of teaching and learning, and changes in the characteristics of learning.
Khademi, Maryam; Hondori, Hossein Mousavi; Dodakian, Lucy; Cramer, Steve; Lopes, Cristina V
2013-01-01
Introducing computer games to the rehabilitation market has led to the development of numerous Virtual Reality (VR) training applications. Although VR has provided tremendous benefit to patients and caregivers, it has inherent limitations, some of which might be solved by replacing it with Augmented Reality (AR). The task of pick-and-place, which is part of many activities of daily living (ADLs), is one of the major affected functions that stroke patients expect to recover. We developed an exercise consisting of moving an object between various points, following a flashing light that indicates the next target. The results show superior performance of subjects in a spatial AR versus a non-immersive VR setting. This could be due to the extraneous hand-eye coordination required in VR, which is eliminated in spatial AR.
An augmented-reality edge enhancement application for Google Glass.
Hwang, Alex D; Peli, Eli
2014-08-01
Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view to provide contrast-improved central vision to Glass wearers. The enhanced central vision can be naturally integrated with scanning. Google Glass' camera lens distortions were corrected using image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angles are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processing was implemented to achieve near real-time performance. The impact of the contrast enhancements was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance, which the authors assume accounts for why improvements were observed only in the diffuser-film condition (simulating low vision). With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
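The paper's enhancement pipeline is not reproduced here; as a hedged sketch of the underlying idea (the function names and the naive NumPy convolution are my own, and a real-time version would use optimized filtering), a gradient-magnitude edge map can be computed and overlaid on the wearer's view:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map of a grayscale float image (naive Sobel)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for row in range(1, h - 1):
        for col in range(1, w - 1):
            patch = img[row - 1:row + 2, col - 1:col + 2]
            out[row, col] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

def overlay_edges(img, edges, gain=0.5):
    """Superimpose the (scaled) edge map on the original view, clipped to [0, 1]."""
    return np.clip(img + gain * edges, 0.0, 1.0)
```

The `gain` parameter plays the role of the enhancement strength; too little leaves contrast unimproved, too much washes out the underlying scene.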
NASA Astrophysics Data System (ADS)
Castagnetti, C.; Giannini, M.; Rivola, R.
2017-05-01
The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies to reconstruct and narrate the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best-preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist of developing: 1. a virtual tour of the site in its current configuration, on the basis of spherical images enhanced by texts, graphics and audio guides, in order to enable both an immersive and a remote tourist experience; 2. a 3D reconstruction of the evidence and buildings in their current condition, for documentation and conservation purposes, on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods, on the basis of historical investigation and the analysis of the data acquired.
NASA Astrophysics Data System (ADS)
Angeletaki, A.; Carrozzino, M.; Johansen, S.
2013-07-01
In this paper we present an experimental environment of 3D books combined with a game application, developed by a collaborative project between the Norwegian University of Science and Technology in Trondheim, Norway (the NTNU University Library), and the Percro laboratory of Santa Anna University in Pisa, Italy. MUBIL is an international research project involving museums, libraries and ICT academic partners, aiming to develop a consistent methodology enabling the use of Virtual Environments as a metaphor for presenting manuscript content through the paradigms of interaction and immersion, while evaluating different possible alternatives. This paper presents the results of the application of two prototype books augmented with the use of XVR and IL technology. We explore immersive-reality design strategies in archive and library contexts for attracting new users. Our newly established Mubil-lab has invited school classes to test the books augmented with 3D models and other multimedia content, in order to investigate whether immersion in such environments can create wider engagement and support learning. The combined metaphor of 3D books and game design allows the digital books to be handled through a tactile experience that substitutes for physical browsing. In this paper we present some preliminary results about the enrichment of the user experience in such an environment.
Shen, Xin; Javidi, Bahram
2018-03-01
We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-based optical see-through augmented reality display with an enhanced depth range of the 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images at various positions to the microlens array. Based on resolution-priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro-InIm unit. This approach provides enhanced functionality for augmented information and mitigates the vergence-accommodation conflict of a traditional augmented reality display.
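How a focus-tunable lens relays elemental images to different planes can be illustrated with the thin-lens equation; the focal lengths and object distance below are illustrative assumptions, not values from the paper:

```python
def image_distance(f_mm: float, object_mm: float) -> float:
    """Thin-lens relation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i."""
    if object_mm == f_mm:
        raise ValueError("object at the focal plane images to infinity")
    return f_mm * object_mm / (object_mm - f_mm)

# Sweeping the tunable lens focal length moves the relayed elemental-image
# plane, which is what generates multiple lenslet image planes:
planes = [image_distance(f, 100.0) for f in (40.0, 45.0, 50.0)]
```

Each relayed plane yields a different reconstruction depth behind the microlens array, so a fast focal sweep time-multiplexes several depth planes into one perceived volume.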
Virtual rehabilitation--benefits and challenges.
Burdea, G C
2003-01-01
To discuss the advantages and disadvantages of rehabilitation applications of virtual reality. VR can be used as an enhancement to conventional therapy for patients with conditions ranging from musculoskeletal problems, to stroke-induced paralysis, to cognitive deficits. This approach is called "VR-augmented rehabilitation." Alternately, VR can replace conventional interventions altogether, in which case the rehabilitation is "VR-based." If the intervention is done at a distance, it is called "telerehabilitation." Simulation exercises for post-stroke patients have been developed using a "teacher object" approach or a video game approach. Simulations for musculoskeletal patients use virtual replicas of rehabilitation devices (such as a rubber ball, power putty, or peg board). Phobia-inducing virtual environments are prescribed for patients with cognitive deficits. VR-augmented rehabilitation has been shown to be effective for stroke patients in the chronic phase of the disease. VR-based rehabilitation has produced improvements in patients with fear of flying, Vietnam syndrome, fear of heights, and chronic stroke. Telerehabilitation interventions using VR have improved musculoskeletal and post-stroke patients, although less data is available at this time. Virtual reality presents significant advantages when applied to the rehabilitation of patients with varied conditions. These advantages include patient motivation, adaptability and variability based on the patient's baseline, transparent data storage, online remote data access, economy of scale, and reduced medical costs. Challenges in VR use for rehabilitation relate to a lack of computer skills on the part of therapists, lack of support infrastructure, (initially) expensive equipment, inadequate communication infrastructure (for telerehabilitation in rural areas), and patient safety concerns.
Latency and User Performance in Virtual Environments and Augmented Reality
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2009-01-01
System rendering latency has been recognized by senior researchers, such as Professor Frederick Brooks of UNC (Turing Award 1999), as a major factor limiting the realism and utility of head-referenced display systems. Latency has been shown to reduce the user's sense of immersion within a virtual environment, to disturb user interaction with virtual objects, and to contribute to motion sickness during some simulation tasks. Latency, however, is not just an issue for external display systems, since finite nerve conduction rates and variation in transduction times in the human body's sensors also pose problems for latency management within the nervous system. Some of the phenomena arising from the brain's handling of sensory asynchrony due to latency will be discussed as a prelude to consideration of the effects of latency in interactive displays. The causes and consequences of the erroneous movement that appears in displays due to latency will be illustrated with examples of the user-performance impact drawn from several experiments. These experiments examine the generality of user sensitivity to latency when users judge either object or environment stability. Hardware and signal-processing countermeasures will also be discussed; in particular, the tuning of a simple extrapolative predictive filter that does not use a dynamic movement model will be presented. Results show that it is possible to adjust this filter so that the appearance of some latencies may be hidden without the introduction of perceptual artifacts such as overshoot. Several effects on user performance will be illustrated by three-dimensional tracking and tracing tasks executed in virtual environments. These experiments demonstrate classic phenomena known from work on manual control and show the need for very responsive systems if they are intended to support precise manipulation.
The practical benefits of removing interfering latencies from interactive systems will be emphasized with some classic final examples from surgical telerobotics, and human-computer interaction.
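An extrapolative predictive filter of the kind described above (one that uses no dynamic movement model) can be sketched as plain linear extrapolation of tracker samples; the sample values and latency figure below are hypothetical:

```python
def predict_pose(samples, latency_s):
    """
    Extrapolative predictor with no dynamic movement model: estimate
    velocity from the last two tracker samples and project the pose
    forward by the rendering latency.

    samples: sequence of (time_s, position) tracker readings.
    """
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * latency_s

# Head position sampled at 100 Hz, compensating 20 ms of display latency
# (all numbers hypothetical):
predicted = predict_pose([(0.00, 0.000), (0.01, 0.002)], 0.02)
```

Tuned too aggressively (predicting farther ahead than the true system latency), such a filter produces exactly the overshoot artifacts the abstract warns about, which is why its tuning is the interesting part.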
Feasibility of virtual reality augmented cycling for health promotion of people poststroke.
Deutsch, Judith E; Myslinski, Mary Jane; Kafri, Michal; Ranky, Richard; Sivak, Mark; Mavroidis, Constantinos; Lewis, Jeffrey A
2013-09-01
A virtual reality (VR) augmented cycling kit (VRACK) was developed to address motor control and fitness deficits of individuals with chronic stroke. In this article, we report on the safety, feasibility, and efficacy of using the VR augmented cycling kit to improve cardiorespiratory (CR) fitness of individuals in the chronic phase poststroke. Four individuals with chronic stroke (47-65 years old and ≥3 years poststroke), with residual lower extremity impairments (Fugl-Meyer 24-26/34), who were limited community ambulators (gait speed range 0.56-1.1 m/s) participated in this study. Safety was defined as the absence of adverse events. Feasibility was measured using attendance, total exercise time, and "involvement" measured with the presence questionnaire (PQ). Efficacy of CR fitness was evaluated using a submaximal bicycle ergometer test before and after an 8-week training program. The intervention was safe and feasible with participants having 1 adverse event, 100% adherence, achieving between 90 and 125 minutes of cycling each week, and a mean PQ score of 39 (SD 3.3). There was a statistically significant (13%; P = 0.035) improvement in peak VO(2), with a range of 6% to 24.5%. For these individuals, poststroke, VR augmented cycling, using their heart rate to set their avatar's speed, fostered training of sufficient duration and intensity to promote CR fitness. In addition, there was a transfer of training from the bicycle to walking endurance. VR augmented cycling may be an addition to the therapist's tools for concurrent training of mobility and health promotion of individuals poststroke.
Wagner, A; Ploder, O; Enislidis, G; Truppe, M; Ewers, R
1996-04-01
Interventional video tomography (IVT), a new imaging modality, achieves virtual visualization of anatomic structures in three dimensions for intraoperative stereotactic navigation. Partial immersion into a virtual data space, which is orthotopically coregistered to the surgical field, enhances, by means of a see-through head-mounted display (HMD), the surgeon's visual perception and technique by providing visual access to nonvisual data of anatomy, physiology, and function. The presented cases document the potential of augmented reality environments in maxillofacial surgery.
Mixed Reality Technology at NASA JPL
2016-05-16
NASA's JPL is a center of innovation in virtual and augmented reality, producing groundbreaking applications of these technologies to support a variety of missions. This video is a collection of unedited scenes released to the media.
A standardized set of 3-D objects for virtual reality research and applications.
Peeters, David
2018-06-01
The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
Virtual reality in the operating room of the future.
Müller, W; Grosskopf, S; Hildebrand, A; Malkewitz, R; Ziegler, R
1997-01-01
In cooperation with the Max-Delbrück-Centrum/Robert-Rössle-Klinik (MDC/RRK) in Berlin, the Fraunhofer Institute for Computer Graphics is currently designing and developing a scenario for the operating room of the future. The goal of this project is to integrate new analysis, visualization and interaction tools in order to optimize and refine tumor diagnostics and therapy, in combination with laser technology and remote stereoscopic video transfer. Hence, a human 3-D reference model is reconstructed using CT, MR, and anatomical cryosection images from the National Library of Medicine's Visible Human Project. Applying segmentation algorithms and surface-polygonization methods, a 3-D representation is obtained. In addition, a "fly-through" of the virtual patient is realized using 3-D input devices (data glove, tracking system, 6-DOF mouse). In this way, the surgeon can experience entirely new perspectives of the human anatomy. Moreover, using a virtual cutting plane, any cut of the CT volume can be interactively placed and visualized in real time. In conclusion, this project delivers visions for the application of effective visualization and VR systems. Commonly known as Virtual Prototyping and applied by the automotive industry long ago, this approach shows that VR techniques can also be used to prototype an operating room. After evaluating the design and functionality of the virtual operating room, MDC plans to build real ORs in the near future. The use of VR techniques provides a more natural interface for the surgeon in the OR (e.g., controlling interactions by voice input). Besides preoperative planning, future work will focus on supporting the surgeon in performing surgical interventions. An optimal synthesis of real and synthetic data, and the inclusion of visual, aural, and tactile senses in virtual environments, can meet these requirements. This Augmented Reality could represent the environment for the surgeons of tomorrow.
Reducing the Schizophrenia Stigma: A New Approach Based on Augmented Reality
Silva, Rafael D. de C.; Albuquerque, Saulo G. C.; Muniz, Artur de V.; Filho, Pedro P. Rebouças; Ribeiro, Sidarta
2017-01-01
Schizophrenia is a chronic mental disease that usually manifests psychotic symptoms and affects an individual's functionality. The stigma related to this disease is a serious obstacle to an adequate approach to its treatment. Stigma can, for example, delay the start of treatment, and it creates difficulties in interpersonal and professional relationships. This work proposes a new tool based on augmented reality to reduce the stigma related to schizophrenia. The tool is capable of simulating the psychotic symptoms typical of schizophrenia and simulates sense-perception changes in order to create an immersive experience capable of generating the pathological experiences of a patient with schizophrenia. Integration into the proposed environment occurs through immersion glasses and an embedded camera. Audio and visual effects can also be applied in real time. To validate the proposed environment, medical students experienced the virtual environment and then answered three questionnaires to assess (i) stigmas related to schizophrenia, (ii) the efficiency and effectiveness of the tool, and, finally, (iii) stigma after the simulation. The analysis of the questionnaires showed that the proposed model is a robust and quite realistic tool and, thus, very promising for reducing the stigma associated with schizophrenia by instilling in the observer a greater comprehension of any person during a schizophrenic episode, whether a patient or a family member. PMID:29317860
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds
Wright, W. Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed. PMID:24782724
Wang, Yu-Jen; Chen, Po-Ju; Liang, Xiao; Lin, Yi-Hsin
2017-03-27
Augmented reality (AR), which uses computer-generated projected information to augment our senses, has an important impact on human life, especially for elderly people. However, there are three major challenges regarding the optical system in an AR system: registration, vision correction, and readability under strong ambient light. Here, we solve the three challenges simultaneously, for the first time, using two liquid crystal (LC) lenses and a polarizer-free attenuator integrated into an optical see-through AR system. One of the LC lenses is used to electrically adjust the position of the projected virtual image, the so-called registration. The other LC lens, with a larger aperture and polarization-independent characteristics, is in charge of vision correction, such as for myopia and presbyopia. The linearity of the lens powers of the two LC lenses is also discussed. The readability of virtual images under strong ambient light is addressed by the electrically switchable transmittance of the LC attenuator, originating from light scattering and light absorption. The concept demonstrated in this paper could be further extended to other electro-optical devices, as long as the devices exhibit the capability of phase and amplitude modulation.
Virtual reality training in neurosurgery: Review of current status and future applications
Alaraj, Ali; Lemole, Michael G.; Finkle, Joshua H.; Yudkowsky, Rachel; Wallace, Adam; Luciano, Cristian; Banerjee, P. Pat; Rizzi, Silvio H.; Charbel, Fady T.
2011-01-01
Background: Over the years, surgical training has been changing, and years of tradition are being challenged by legal and ethical concerns for patient safety, work-hour restrictions, and the cost of operating room time. Surgical simulation and skill training offer an opportunity to teach and practice advanced techniques before attempting them on patients. Simulation training can be as straightforward as using real instruments and video equipment to manipulate simulated “tissue” in a box trainer. More advanced virtual reality (VR) simulators are now available and ready for widespread use. Early systems have demonstrated their effectiveness and discriminative ability. Newer systems enable the development of comprehensive curricula and full procedural simulations. Methods: A PubMed review of the literature was performed for the MeSH terms “Virtual Reality”, “Augmented Reality”, “Simulation”, “Training”, and “Neurosurgery”. Relevant articles were retrieved and reviewed, covering the history and current status of VR simulation in neurosurgery. Results: Surgical organizations are calling for methods to ensure the maintenance of skills, advance surgical training, and credential surgeons as technically competent. The published literature discussing the application of VR simulation in neurosurgery training has evolved over the last decade from data visualization, including stereoscopic evaluation, to more complex augmented reality models. With the revolution in computational analysis abilities, fully immersive VR models are currently available in neurosurgery training. Ventriculostomy catheter insertion and endoscopic and endovascular simulations are used in neurosurgical residency training centers across the world. Recent studies have shown the correlation of proficiency with those simulators to levels of experience in the real world. Conclusion: Fully immersive technology is starting to be applied to the practice of neurosurgery.
In the near future, detailed VR neurosurgical modules will evolve to be an essential part of the curriculum of the training of neurosurgeons. PMID:21697968
Loukas, Constantinos; Lahanas, Vasileios; Georgiou, Evangelos
2013-12-01
Despite the popular use of virtual and physical reality simulators in laparoscopic training, the educational potential of augmented reality (AR) has not received much attention. A major challenge is the robust tracking and three-dimensional (3D) pose estimation of the endoscopic instrument, which are essential for achieving interaction with the virtual world and for realistic rendering when the virtual scene is occluded by the instrument. In this paper we propose a method that addresses these issues, based solely on visual information obtained from the endoscopic camera. Two different tracking algorithms are combined for estimating the 3D pose of the surgical instrument with respect to the camera. The first tracker creates an adaptive model of a colour strip attached to the distal part of the tool (close to the tip). The second algorithm tracks the endoscopic shaft, using a combined Hough-Kalman approach. The 3D pose is estimated with perspective geometry, using appropriate measurements extracted by the two trackers. The method has been validated on several complex image sequences for its tracking efficiency, pose estimation accuracy and applicability in AR-based training. Using a standard endoscopic camera, the absolute average error of the tip position was 2.5 mm for working distances commonly found in laparoscopic training. The average error of the instrument's angle with respect to the camera plane was approximately 2°. The results are also supplemented by video segments of laparoscopic training tasks performed in a physical and an AR environment. The experiments yielded promising results regarding the potential of applying AR technologies for laparoscopic skills training, based on a computer vision framework. The issue of occlusion handling was adequately addressed. The estimated trajectory of the instruments may also be used for surgical gesture interpretation and assessment. Copyright © 2013 John Wiley & Sons, Ltd.
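The abstract's pose estimation from perspective geometry is not spelled out in detail; as a generic, hedged sketch, once the trackers supply the tip's image position and an estimate of its depth, the 3-D position follows from the pinhole camera model (the intrinsics below are made-up values, not the endoscope's actual calibration):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at a known depth (m) -> 3-D camera-frame point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Tool tip detected at pixel (420, 290), 8 cm from the camera, with
# hypothetical intrinsics (focal lengths 500 px, principal point (320, 240)):
tip = backproject(420.0, 290.0, 0.08, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

In the actual method, the depth cue comes from the apparent size of the tracked color strip and shaft rather than being given directly; the back-projection step is the same either way.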
Beam steering for virtual/augmented reality displays with a cycloidal diffractive waveplate.
Chen, Haiwei; Weng, Yishi; Xu, Daming; Tabiryan, Nelson V; Wu, Shin-Tson
2016-04-04
We proposed a switchable beam steering device with a cycloidal diffractive waveplate (CDW) for eye tracking in a virtual reality (VR) or augmented reality (AR) display system. Such a CDW diffracts the incident circularly polarized light into the first order with over 95% efficiency. To convert the input linearly polarized light to right-handed or left-handed circular polarization, we developed a broadband polarization switch consisting of a twisted-nematic liquid crystal cell and an achromatic quarter-wave retardation film. By cascading 2-3 CDWs together, multiple diffraction angles can be achieved. To suppress the color dispersion, we proposed two approaches to obtain the same diffraction angle for red, green, and blue LED-based full-color displays. Our device exhibits several advantages, such as high diffraction efficiency, fast response time, low power consumption, and low cost. It holds promise for the emerging VR/AR displays.
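The color dispersion the authors must suppress follows directly from the first-order grating equation, sin θ = λ/Λ: for a fixed CDW period, each LED wavelength diffracts at a different angle. A small sketch (the 2 µm period is an arbitrary assumption, not the paper's value):

```python
import math

def diffraction_angle_deg(wavelength_nm: float, period_nm: float) -> float:
    """First-order diffraction angle of a grating/CDW: sin(theta) = lambda / period."""
    return math.degrees(math.asin(wavelength_nm / period_nm))

# For a single 2000 nm period, red diffracts further than green, and green
# further than blue -- the dispersion the paper's two approaches equalize:
angles = {nm: diffraction_angle_deg(nm, 2000.0) for nm in (450.0, 532.0, 635.0)}
```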
Linte, Cristian A; White, James; Eagleson, Roy; Guiraudon, Gérard M; Peters, Terry M
2010-01-01
Virtual and augmented reality environments have been adopted in medicine as a means to enhance the clinician's view of the anatomy and facilitate the performance of minimally invasive procedures. Their value is truly appreciated during interventions where the surgeon cannot directly visualize the targets to be treated, such as during cardiac procedures performed on the beating heart. These environments must accurately represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical tracking, and visualization technology in a common framework centered around the patient. This review begins with an overview of minimally invasive cardiac interventions, describes the architecture of a typical surgical guidance platform including imaging, tracking, registration and visualization, highlights both clinical and engineering accuracy limitations in cardiac image guidance, and discusses the translation of the work from the laboratory into the operating room together with typically encountered challenges.
NASA Technical Reports Server (NTRS)
Schulte, Erin
2017-01-01
As augmented and virtual reality grow in popularity, and more researchers focus on their development, other fields of technology have grown in the hope of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to build an intuitive, hands-free human-computer interaction (HCI) system utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams, and other similar hardware, has shown potential in assisting with the development of an HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development, and other similar areas.
de Bruin, E D; Schoene, D; Pichierri, G; Smith, S T
2010-08-01
Virtual augmented exercise, an emerging technology that can help to promote physical activity and combine the strengths of indoor and outdoor exercise, has recently been proposed as having the potential to increase exercise behavior in older adults. By creating a strong presence in a virtual, interactive environment, distraction can be taken to greater levels while maintaining the benefits of indoor exercises which may result in a shift from negative to positive thoughts about exercise. Recent findings on young participants show that virtual reality training enhances mood, thus, increasing enjoyment and energy. For older adults virtual, interactive environments can influence postural control and fall events by stimulating the sensory cues that are responsible in maintaining balance and orientation. However, the potential of virtual reality training has yet to be explored for older adults. This manuscript describes the potential of dance pad training protocols in the elderly and reports on the theoretical rationale of combining physical game-like exercises with sensory and cognitive challenges in a virtual environment.
Visual Stability of Objects and Environments Viewed through Head-Mounted Displays
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Adelstein, Bernard D.
2015-01-01
Virtual Environments (aka Virtual Reality) are again catching the public imagination, and a number of startups (e.g., Oculus) and even not-so-startup companies (e.g., Microsoft) are trying to develop display systems to capitalize on this renewed interest. All acknowledge that this time they will get it right by providing the required dynamic fidelity, visual quality, and interesting content for the concept of VR to take off and change the world in ways it failed to do in past incarnations. Some of the surprisingly long historical background of the direct-simulation technology that underlies virtual environment and augmented reality displays will be briefly reviewed. An example of a mid-1990s augmented reality display system with good dynamic performance from our lab will be used to illustrate some of the underlying phenomena and technology concerning visual stability of virtual environments and objects during movement. In conclusion, some idealized performance characteristics for a reference system will be proposed. Interestingly, many systems more or less on the market now may actually meet many of these proposed technical requirements. This observation leads to the conclusion that the current success of the IT firms trying to commercialize the technology will depend on the hidden costs of using the systems as well as the development of interesting and compelling content.
Model-based registration of multi-rigid-body for augmented reality
NASA Astrophysics Data System (ADS)
Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro
2009-02-01
Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of visual markers. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to the motion and transformation estimated from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then simultaneously calculated from the distances between the image edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.
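The core of such a per-frame update is aligning sampled control points with their matched image-edge points in a least-squares sense. The following is a deliberately minimal 2D illustration of one such alignment step, not the paper's implementation (which jointly estimates 3-D motion for several rigid parts):

```python
import numpy as np

def rigid_update_2d(src, dst):
    """Least-squares 2D rigid transform (R, t) aligning projected control
    points (src) with their matched image-edge points (dst), via the
    Kabsch/Procrustes method. Returns R (2x2) and t (2,) with dst ~ src@R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

In practice the matched edge points would come from a nearest-edge search in the camera image; here they are assumed given.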
Gibby, Jacob T; Swenson, Samuel A; Cvetko, Steve; Rao, Raj; Javan, Ramin
2018-06-22
Augmented reality has the potential to enhance surgical navigation and visualization. We determined whether head-mounted display augmented reality (HMD-AR) with superimposed computed tomography (CT) data could allow the wearer to percutaneously guide pedicle screw placement in an opaque lumbar model with no real-time fluoroscopic guidance. CT imaging was obtained of a phantom composed of L1-L3 Sawbones vertebrae in opaque silicone. Preprocedural planning was performed by creating virtual trajectories of appropriate angle and depth for an ideal approach into the pedicle, and these data were integrated into the Microsoft HoloLens using the Novarad OpenSight application, allowing the user to view the virtual trajectory guides and CT images superimposed on the phantom in two and three dimensions. Spinal needles were inserted following the virtual trajectories to the point of contact with bone. Repeat CT revealed the actual needle trajectories, allowing comparison with the ideal preprocedural paths. Registration of AR to phantom showed a roughly circular deviation with a maximum average radius of 2.5 mm. Users took an average of 200 s to place a needle. Extrapolation of needle trajectories into the pedicle showed that of 36 needles placed, 35 (97%) would have remained within the pedicles. Placed needles lay a mean distance of 4.69 mm in the mediolateral direction and 4.48 mm in the craniocaudal direction from the pedicle bone edge. To our knowledge, this is the first peer-reviewed report and evaluation of HMD-AR with superimposed 3D guidance utilizing CT for spinal pedicle guide placement for the purpose of cannulation without the use of fluoroscopy.
An Augmented-Reality Edge Enhancement Application for Google Glass
Hwang, Alex D.; Peli, Eli
2014-01-01
Purpose: Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to Glass wearers. The enhanced central vision can be naturally integrated with scanning. Methods: Google Glass's camera lens distortions were corrected using image warping. Since the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of 3D transformations to minimize parallax errors before the final projection to the Glass's see-through virtual display. All image processing was implemented to achieve near real-time performance. The impact of the contrast enhancements was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. Results: For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance, which the authors assume accounts for why improvements were observed only in the diffuser-film condition (simulating low vision). Conclusions: Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration. PMID:24978871
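The central operation, superimposing an enhanced edge map on the camera view, can be illustrated independently of the Glass-specific warping and parallax correction. A minimal sketch using central-difference gradients (the actual system's filters and enhancement gain are not specified in the abstract, so these are placeholders):

```python
import numpy as np

def edge_overlay(gray, gain=1.0):
    """Add a gradient-magnitude edge map onto a grayscale image in [0, 1],
    a stand-in for the Glass pipeline's enhanced-edge overlay."""
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    gx[:, 1:-1] = (gray[:, 2:] - gray[:, :-2]) / 2.0   # horizontal gradient
    gy[1:-1, :] = (gray[2:, :] - gray[:-2, :]) / 2.0   # vertical gradient
    edges = np.hypot(gx, gy)                           # gradient magnitude
    return np.clip(gray + gain * edges, 0.0, 1.0)
```

A real implementation would apply this to the undistorted, parallax-corrected camera frame before projecting it to the see-through display.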
Villiger, Michael; Bohli, Dominik; Kiper, Daniel; Pyk, Pawel; Spillmann, Jeremy; Meilick, Bruno; Curt, Armin; Hepp-Reymond, Marie-Claude; Hotz-Boendermaker, Sabina; Eng, Kynan
2013-10-01
Neurorehabilitation interventions to improve lower limb function and neuropathic pain have had limited success in people with chronic, incomplete spinal cord injury (iSCI). We hypothesized that intense virtual reality (VR)-augmented training of observed and executed leg movements would improve limb function and neuropathic pain. Patients used a VR system with a first-person view of virtual lower limbs, controlled via movement sensors fitted to the patient's own shoes. Four tasks were used to deliver intensive training of individual muscles (tibialis anterior, quadriceps, leg ad-/abductors). The tasks engaged motivation through feedback of task success. Fourteen chronic iSCI patients were treated over 4 weeks in 16 to 20 sessions of 45 minutes. Outcome measures were 10 Meter Walking Test, Berg Balance Scale, Lower Extremity Motor Score, Spinal Cord Independence Measure, Locomotion and Neuropathic Pain Scale (NPS), obtained at the start and at 4 to 6 weeks before intervention. In addition to positive changes reported by the patients (Patients' Global Impression of Change), measures of walking capacity, balance, and strength revealed improvements in lower limb function. Intensity and unpleasantness of neuropathic pain in half of the affected participants were reduced on the NPS test. Overall findings remained stable 12 to 16 weeks after termination of the training. In a pretest/posttest, uncontrolled design, VR-augmented training was associated with improvements in motor function and neuropathic pain in persons with chronic iSCI, several of which reached the level of a minimal clinically important change. A controlled trial is needed to compare this intervention to active training alone or in combination.
Meldrum, Dara; Herdman, Susan; Moloney, Roisin; Murray, Deirdre; Duffy, Douglas; Malone, Kareena; French, Helen; Hone, Stephen; Conroy, Ronan; McConn-Walsh, Rory
2012-03-26
Unilateral peripheral vestibular loss results in gait and balance impairment, dizziness and oscillopsia. Vestibular rehabilitation benefits patients but optimal treatment remains unknown. Virtual reality is an emerging tool in rehabilitation and provides opportunities to improve both outcomes and patient satisfaction with treatment. The Nintendo Wii Fit Plus® (NWFP) is a low cost virtual reality system that challenges balance and provides visual and auditory feedback. It may augment the motor learning that is required to improve balance and gait, but no trials to date have investigated efficacy. In a single (assessor) blind, two centre randomised controlled superiority trial, 80 patients with unilateral peripheral vestibular loss will be randomised to either conventional or virtual reality based (NWFP) vestibular rehabilitation for 6 weeks. The primary outcome measure is gait speed (measured with three dimensional gait analysis). Secondary outcomes include computerised posturography, dynamic visual acuity, and validated questionnaires on dizziness, confidence and anxiety/depression. Outcome will be assessed post treatment (8 weeks) and at 6 months. Advances in the gaming industry have allowed mass production of highly sophisticated low cost virtual reality systems that incorporate technology previously not accessible to most therapists and patients. Importantly, they are not confined to rehabilitation departments, can be used at home and provide an accurate record of adherence to exercise. The benefits of providing augmented feedback, increasing intensity of exercise and accurately measuring adherence may improve conventional vestibular rehabilitation but efficacy must first be demonstrated. Clinical trials.gov identifier: NCT01442623.
Yudkowsky, Rachel; Luciano, Cristian; Banerjee, Pat; Schwartz, Alan; Alaraj, Ali; Lemole, G Michael; Charbel, Fady; Smith, Kelly; Rizzi, Silvio; Byrne, Richard; Bendok, Bernard; Frim, David
2013-02-01
Ventriculostomy is a neurosurgical procedure for providing therapeutic cerebrospinal fluid drainage. Complications may arise during repeated attempts at placing the catheter in the ventricle. We studied the impact of simulation-based practice with a library of virtual brains on neurosurgery residents' performance in simulated and live surgical ventriculostomies. Using computed tomographic scans of actual patients, we developed a library of 15 virtual brains for the ImmersiveTouch system, a head- and hand-tracked augmented reality and haptic simulator. The virtual brains represent a range of anatomies including normal, shifted, and compressed ventricles. Neurosurgery residents participated in individual simulator practice on the library of brains including visualizing the 3-dimensional location of the catheter within the brain immediately after each insertion. Performance of participants on novel brains in the simulator and during actual surgery before and after intervention was analyzed using generalized linear mixed models. Simulator cannulation success rates increased after intervention, and live procedure outcomes showed improvement in the rate of successful cannulation on the first pass. However, the incidence of deeper, contralateral (simulator) and third-ventricle (live) placements increased after intervention. Residents reported that simulations were realistic and helpful in improving procedural skills such as aiming the probe, sensing the pressure change when entering the ventricle, and estimating how far the catheter should be advanced within the ventricle. Simulator practice with a library of virtual brains representing a range of anatomies and difficulty levels may improve performance, potentially decreasing complications due to inexpert technique.
The Virtual Tablet: Virtual Reality as a Control System
NASA Technical Reports Server (NTRS)
Chronister, Andrew
2016-01-01
In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort, thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among them the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR Lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset, thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate that is mirrored in the virtual environment, mitigating two issues discovered in the earlier work: the difficulty of visually determining the interface location and the lack of tactile feedback. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position corresponding to its real-world location relative to the user's VR head-mounted display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate so that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken on at an early stage, when the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment.
My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
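Fusing the marker-tracked position with the sensor-package orientation amounts to composing a rigid transform and mapping the interface geometry through it. A minimal homogeneous-coordinates sketch (the function names and this simple fusion are illustrative; the AVR Lab's actual pipeline is not described in detail here):

```python
import numpy as np

def surrogate_pose(R_imu, p_markers):
    """4x4 rigid transform of the handheld surrogate: orientation from the
    onboard sensor package, position from the motion-capture markers."""
    T = np.eye(4)
    T[:3, :3] = R_imu
    T[:3, 3] = p_markers
    return T

def panel_to_world(T, pts_local):
    """Map control-interface points defined on the surrogate's surface
    (local frame) into the tracked world frame."""
    homo = np.hstack([pts_local, np.ones((len(pts_local), 1))])
    return (homo @ T.T)[:, :3]
```

The virtual analogue and projected control widgets would then be drawn at `panel_to_world(...)` positions each frame, keeping the virtual tablet registered to the physical sheet.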
Real-time 3D image reconstruction guidance in liver resection surgery.
Soler, Luc; Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-04-01
Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, it adds difficulty that can be reduced through computer technology. From a patient's medical image [ultrasound, computed tomography (CT), or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were carried out, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in terms of safety, but also the current limits that automatic augmented reality will overcome. Virtual patient modeling should be mandatory for certain interventions that now have to be defined, such as liver surgery. Augmented reality is clearly the next step in the new surgical instrumentation but currently remains limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of hybrid operating rooms.
Medical telementoring using an augmented reality transparent display.
Andersen, Daniel; Popescu, Voicu; Cabrera, Maria Eugenia; Shanghavi, Aditya; Gomez, Gerardo; Marley, Sherri; Mullis, Brian; Wachs, Juan P
2016-06-01
The goal of this study was to design and implement a novel surgical telementoring system called the System for Telementoring with Augmented Reality (STAR) that uses a virtual transparent display to convey precise locations in the operating field to a trainee surgeon. This system was compared with a conventional system based on a telestrator for surgical instruction. A telementoring system was developed and evaluated in a study which used a 1 × 2 between-subjects design with telementoring system, that is, STAR or conventional, as the independent variable. The participants in the study were 20 premedical or medical students who had no prior experience with telementoring. Each participant completed a task of port placement and a task of abdominal incision under telementoring using either the STAR or the conventional system. The metrics used to test performance when using the system were placement error, number of focus shifts, and time to task completion. When compared with the conventional system, participants using STAR completed the 2 tasks with less placement error (45% and 68%) and with fewer focus shifts (86% and 44%), but more slowly (19% for each task). Using STAR resulted in decreased annotation placement error, fewer focus shifts, but greater times to task completion. STAR placed virtual annotations directly onto the trainee surgeon's field of view of the operating field by conveying location with great accuracy; this technology helped to avoid shifts in focus, decreased depth perception, and enabled fine-tuning execution of the task to match telementored instruction, but led to greater times to task completion. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Potter, Michael; Bensch, Alexander; Dawson-Elli, Alexander; Linte, Cristian A.
2015-03-01
In minimally invasive surgical interventions, direct visualization of the target area is often not available. Instead, clinicians rely on images from various sources, along with surgical navigation systems, for guidance. These spatial localization and tracking systems function much like the Global Positioning System (GPS) with which we are all familiar. In this work we demonstrate how the video feed from a typical camera, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and augment the video feed with computer-generated information, such as renderings of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. This paper describes the software and hardware environment and the methodology for augmenting the real world with virtual models extracted from medical images, providing enhanced visualization beyond the surface view achieved with traditional imaging. Following intrinsic and extrinsic camera calibration, the technique was implemented and demonstrated using a LEGO structure phantom, as well as a 3D-printed patient-specific left atrial phantom. We assessed the quality of the overlay according to fiducial localization, fiducial registration, and target registration errors, as well as the overlay offset error. Using the software extensions we developed in conjunction with common webcams, it is possible to achieve tracking accuracy comparable to that seen with significantly more expensive hardware, leading to target registration errors on the order of 2 mm.
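Once the camera's intrinsics and pose are known, drawing the virtual anatomy into a video frame reduces to a standard pinhole projection of the model's vertices. A minimal sketch of that projection step (the calibration and pose estimation themselves, e.g. via a checkerboard and PnP, are omitted):

```python
import numpy as np

def project_points(K, R, t, pts3d):
    """Pinhole projection of 3D model points into the camera image.
    K: 3x3 intrinsics; R: 3x3 world-to-camera rotation; t: (3,) translation;
    pts3d: (N, 3) world-frame points. Returns (N, 2) pixel coordinates."""
    cam = pts3d @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                # camera -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide
```

The projected 2D points are then rasterized over the live frame to produce the overlay; errors in K, R, or t show up directly as the overlay offset error the paper measures.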
Vogt, Tobias; Herpers, Rainer; Askew, Christopher D.; Scherfgen, David; Strüder, Heiko K.; Schneider, Stefan
2015-01-01
Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of a real exercise within a virtual environment alters sense of presence perception, or the accompanying physiological changes, is not known. In a randomized and controlled study design, moderate-intensity Exercise (i.e., self-paced cycling) and No-Exercise (i.e., automatic propulsion) trials were performed within three levels of virtual environment exposure. Each trial was 5 minutes in duration and was followed by posttrial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposures and this likely contributed to an enhanced sense of presence. PMID:26366305
Lee, Sangyoon; Hu, Xinda; Hua, Hong
2016-05-01
Many error sources have been explored with regard to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first arises from the difference in refraction between the virtual and see-through optical paths caused by the optical combiner that OST-HMDs require. The second occurs because the viewer's IPD changes with eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for both types of error. Furthermore, we investigate their effectiveness through an experiment comparing conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven near-field distances (40-200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on target distance, they demonstrated that near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.
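The IPD change under convergence follows from simple geometry: each pupil rotates about its eye's center toward the fixation point, moving the pupils slightly closer together for near targets. A rough sketch of that geometry (the eye-rotation-radius value and this simplified model are assumptions for illustration, not the paper's exact formulation):

```python
import math

def effective_ipd(ipd, eye_radius, dist):
    """Approximate interpupillary distance when both eyes converge on a
    target straight ahead at distance `dist`. The pupil sits `eye_radius`
    in front of the eye's rotation center, so convergence pulls each
    pupil inward. All lengths in the same unit (e.g., meters)."""
    half_angle = math.atan((ipd / 2.0) / dist)      # per-eye convergence angle
    return ipd - 2.0 * eye_radius * math.sin(half_angle)
```

A renderer compensating for this effect would place its virtual eye positions at the converged pupil locations rather than at the fixed resting IPD.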
Experiential learning in soil science: Use of an augmented reality sandbox
NASA Astrophysics Data System (ADS)
Vaughan, Karen; Vaughan, Robert; Seeley, Janel; Brevik, Eric
2017-04-01
It is widely known that greater learning occurs when students are active participants. Novel technologies give instructors the opportunity to create interactive activities that help undergraduate students comprehend complex landscape processes. We incorporated the use of an Augmented Reality (AR) Sandbox in the Introductory Soil Science course at the University of Wyoming to facilitate an experiential learning experience in pedology. The AR Sandbox was developed by researchers at the University of California, Davis as part of a project on informal science education in freshwater lake and watershed science. It is a hands-on display that allows users to create topography models by shaping sand, which is augmented in real time by a colored elevation map, topographic contour lines, and simulated water. It uses a 3-dimensional motion-sensing camera that detects changes in the distance between the sand surface and the camera sensor. A short-throw projector then displays the elevation model and contour lines in real time. Undergraduate students enrolled in the Introductory Soil Science course were tasked with creating a virtual landscape and then predicting where particular soils would form on the various landforms. All participants reported greater comprehension of surface water flow, erosion, and soil formation as a result of this exercise. They provided suggestions for future activities using the AR Sandbox, including its incorporation into lessons on watershed hydrology, land management, soil water, and soil genesis.
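The projected contour overlay can be approximated by binning the depth-derived heightmap into elevation intervals and marking pixels where the bin index changes between neighbors. A minimal sketch of that idea (the actual AR Sandbox software runs on GPU shaders and also simulates water flow):

```python
import numpy as np

def contour_mask(height, interval):
    """Boolean mask of topographic contour lines on a heightmap: a pixel
    is on a contour when its elevation bin differs from a neighbor's."""
    bins = np.floor(height / interval).astype(int)   # elevation band index
    mask = np.zeros(height.shape, dtype=bool)
    mask[:, :-1] |= bins[:, :-1] != bins[:, 1:]      # horizontal neighbors
    mask[:-1, :] |= bins[:-1, :] != bins[1:, :]      # vertical neighbors
    return mask
```

With a live depth camera, `height` would be recomputed each frame so the contours follow the sand as it is reshaped.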
Kilby, Melissa C; Slobounov, Semyon M; Newell, Karl M
2016-06-01
The experiment manipulated real-time kinematic feedback of the motion of the whole-body center of mass (COM) and center of pressure (COP) in the anterior-posterior (AP) and medial-lateral (ML) directions to investigate the variables actively controlled in the quiet standing of young adults. The feedback reflected the current 2D postural positions within the 2D functional stability boundary, which was scaled to 75%, 30%, and 12% of its original size. The findings showed that the distance of both COP and COM to the respective stability boundary was greater during the feedback trials than in a no-feedback condition. However, the temporal safety margin of the COP, that is, the virtual time-to-contact (VTC), was higher without feedback. The COP-COM coupling showed stable in-phase synchronization over all of the feedback conditions for frequencies below 1 Hz. For higher frequencies (up to 5 Hz), there was progressive reduction of COP-COM synchronization and local adaptation in the presence of augmented feedback. The findings show that augmented feedback of COM and COP motion differentially and adaptively influences the spatial and temporal properties of postural motion relative to the stability boundary while preserving the organization of COM-COP coupling in postural control. Copyright © 2016. Published by Elsevier B.V.
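Virtual time-to-contact extrapolates the current COP velocity and asks when the trajectory would cross the stability boundary. A minimal constant-velocity sketch for a convex polygonal boundary (the study's VTC definition may also incorporate acceleration; this is the simplest variant):

```python
import numpy as np

def virtual_time_to_contact(pos, vel, boundary):
    """Time until the COP, extrapolated at constant velocity, crosses a
    convex polygonal stability boundary. pos, vel: (2,) arrays; boundary:
    (N, 2) vertices in counter-clockwise order. Returns inf if the
    trajectory never reaches the boundary (e.g., zero velocity)."""
    best = np.inf
    n = len(boundary)
    for i in range(n):
        a, b = boundary[i], boundary[(i + 1) % n]
        edge = b - a
        normal = np.array([edge[1], -edge[0]])   # outward normal for CCW polygon
        speed_out = vel @ normal
        if speed_out <= 0:                       # moving parallel or inward
            continue
        dist = (a - pos) @ normal                # signed distance to edge line
        t = dist / speed_out
        if t >= 0:
            best = min(best, t)
    return best
```

In the experiment the boundary would be each participant's measured functional stability boundary, and VTC would be evaluated continuously from the COP time series.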
A Survey of Mobile and Wireless Technologies for Augmented Reality Systems (Preprint)
2008-02-01
…Windows XP. A number of researchers have started employing them in AR simulations, such as Wagner et al. [25] and Newman et al. [46], and specifically the Sony… different music clubs and styles of music according to the selection and tastes of the listeners. In the intro sequence the user can select an animated 3-D character (avatar) as his or her virtual persona and visit the different music rooms in the virtual disco. Users can download or stream music in…
Rohmer, Kai; Jendersie, Johannes; Grosch, Thorsten
2017-11-01
Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with consistent appearance of virtual objects is still an area of active research. In this paper, we present a full two-stage pipeline for environment acquisition and augmentation of live camera images using a mobile device with a depth sensor. We show how to directly work on a recorded 3D point cloud of the real environment containing high dynamic range color values. For unknown and automatically changing camera settings, a color compensation method is introduced. Based on this, we show photorealistic augmentations using variants of differential light simulation techniques. The presented methods are tailored for mobile devices and run at interactive frame rates. However, our methods are scalable to trade performance for quality and can produce quality renderings on desktop hardware.
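Differential light simulation composites the camera image with the difference between two renderings of the reconstructed scene, one with and one without the virtual object, so that shadows and reflected light cast by the object carry over onto the real background. A minimal per-pixel sketch of that compositing step (mask handling and HDR tone mapping simplified; this is the general technique, not this paper's specific pipeline):

```python
import numpy as np

def differential_augment(camera, with_obj, without_obj, obj_mask):
    """Differential rendering composite: inside the object mask show the
    rendered object directly; elsewhere add the object's effect on the
    scene (shadows, indirect light) to the camera image. All inputs are
    same-shaped arrays of values in [0, 1]; obj_mask is boolean."""
    delta = with_obj - without_obj           # the object's lighting effect
    return np.where(obj_mask, with_obj, np.clip(camera + delta, 0.0, 1.0))
```

Because the composite works on differences, errors in the reconstructed environment largely cancel outside the object, which is what makes the approach robust on mobile-acquired point clouds.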
Ponce, Brent A; Menendez, Mariano E; Oladeji, Lasun O; Fryberger, Charles T; Dantuluri, Phani K
2014-11-01
The authors describe the first surgical case adopting the combination of real-time augmented reality and wearable computing devices such as Google Glass (Google Inc, Mountain View, California). A 66-year-old man presented to their institution for a total shoulder replacement after 5 years of progressive right shoulder pain and decreased range of motion. Throughout the surgical procedure, Google Glass was integrated with the Virtual Interactive Presence and Augmented Reality system (University of Alabama at Birmingham, Birmingham, Alabama), enabling the local surgeon to interact with the remote surgeon within the local surgical field. Surgery was well tolerated by the patient and early surgical results were encouraging, with an improvement of shoulder pain and greater range of motion. The combination of real-time augmented reality and wearable computing devices such as Google Glass holds much promise in the field of surgery. Copyright 2014, SLACK Incorporated.
Usability engineering: domain analysis activities for augmented-reality systems
NASA Astrophysics Data System (ADS)
Gabbard, Joseph; Swan, J. E., II; Hix, Deborah; Lanzagorta, Marco O.; Livingston, Mark; Brown, Dennis B.; Julier, Simon J.
2002-05-01
This paper discusses our usability engineering process for the Battlefield Augmented Reality System (BARS). Usability engineering is a structured, iterative, stepwise development process. Like the related disciplines of software and systems engineering, usability engineering is a combination of management principles and techniques, formal and semi-formal evaluation techniques, and computerized tools. BARS is an outdoor augmented reality system that displays heads-up battlefield intelligence information to a dismounted warrior. The paper discusses our general usability engineering process. We originally developed the process in the context of virtual reality applications, but in this work we are adapting the procedures to an augmented reality system. The focus of this paper is our work on domain analysis, the first activity of the usability engineering process. We describe our plans for, and our progress to date on, our domain analysis for BARS. We give results in terms of a specific urban battlefield use case we have designed.
NASA Astrophysics Data System (ADS)
Clay, Alexis; Delord, Elric; Couture, Nadine; Domenger, Gaël
We describe the joint research that we conduct in gesture-based emotion recognition and virtual augmentation of a stage, bridging the fields of computer science and dance. After establishing a common ground for dialogue, we could conduct a research process that benefits both fields equally. As computer scientists, we find dance a perfect application case: dancers' artistic creativity orients our research choices. As dancers, we find that computer science provides new tools for creativity and, more importantly, a new point of view that forces us to reconsider dance from its fundamentals. In this paper we therefore describe our scientific work and its implications for dance. We provide an overview of our system for augmenting a ballet stage while taking a dancer's emotion into account. To illustrate our work in both fields, we describe three events that mixed dance, emotion recognition, and augmented reality.
Suitability of virtual prototypes to support human factors/ergonomics evaluation during the design.
Aromaa, Susanna; Väänänen, Kaisa
2016-09-01
In recent years, the use of virtual prototyping has increased in product development processes, especially in the assessment of complex systems targeted at end-users. The purpose of this study was to evaluate the suitability of virtual prototyping for supporting human factors/ergonomics (HFE) evaluation during the design phase. Two different virtual prototypes were used: augmented reality (AR) and virtual environment (VE) prototypes of a maintenance platform of a rock crushing machine. Nineteen designers and other stakeholders were asked to assess the suitability of each prototype for HFE evaluation. Results indicate that the system model characteristics and user interface affect the experienced suitability. The VE system was rated as more suitable than the AR system for supporting the assessment of visibility, reach, and the use of tools. The findings of this study can be used as guidance for implementing virtual prototypes in the product development process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Virtual reality and pain management: current trends and future directions.
Li, Angela; Montaño, Zorash; Chen, Vincent J; Gold, Jeffrey I
2011-03-01
Virtual reality (VR) has been used to manage pain and distress associated with a wide variety of known painful medical procedures. In clinical settings and experimental studies, participants immersed in VR experience reduced levels of pain and general distress/unpleasantness, and report a desire to use VR again during painful medical procedures. Investigators hypothesize that VR acts as a nonpharmacologic form of analgesia by exerting an array of affective, emotion-based cognitive, and attentional processes on the body's intricate pain modulation system. While the exact neurobiological mechanisms behind VR's action remain unclear, investigations are currently underway to examine the complex interplay of cortical activity associated with immersive VR. Recently, new applications, including VR, have been developed to augment evidence-based interventions, such as hypnosis and biofeedback, for the treatment of chronic pain. This article provides a comprehensive review of the literature, exploring clinical and experimental applications of VR for acute and chronic pain management, focusing specifically on current trends and recent developments. In addition, we propose mechanistic theories highlighting VR distraction and neurobiological explanations, and conclude with new directions in VR research, implications and clinical significance.
Neuroscience, virtual reality and neurorehabilitation: brain repair as a validation of brain theory.
Verschure, Paul F M J
2011-01-01
This paper argues that basing cybertherapy approaches on a theoretical understanding of the brain has advantages. On the one hand it provides a rational approach to therapy design, while on the other it allows direct validation of brain theory in the clinic. As an example, this paper discusses how the Distributed Adaptive Control (DAC) architecture, a theory of mind, brain and action, has given rise to a new paradigm in neurorehabilitation called the Rehabilitation Gaming System (RGS) and to novel neuroprosthetic systems. The neuroprosthetic system considered is developed to replace the function of cerebellar micro-circuits, expresses core aspects of the learning systems of DAC, and has been successfully tested in in-vivo experiments. The virtual reality-based rehabilitation paradigm of RGS has been validated in the treatment of acute and chronic stroke and has been shown to be more effective than existing methods. RGS provides a foundation for integrated at-home therapy systems that can operate largely autonomously when also augmented with appropriate physiological monitoring and diagnostic devices. These examples provide first steps towards a science-based medicine.
Comparative evaluation of monocular augmented-reality display for surgical microscopes.
Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N
2012-01-01
Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.
Bosc, R; Fitoussi, A; Pigneur, F; Tacher, V; Hersant, B; Meningaud, J-P
2017-08-01
Augmented reality on smart glasses allows the surgeon to visualize three-dimensional virtual objects during surgery, superimposed in real time on the anatomy of the patient. This makes it possible to keep a direct view of the surgical field while having access to additional computerized information, without the need for a physical surgical guide or a separate screen. The three-dimensional objects that we used and visualized in augmented reality came from reconstructions made from the patients' CT scans. These objects were transferred through a dedicated application onto stereoscopic smart glasses. The positioning and stabilization of the virtual layers on the anatomy of the patients were obtained thanks to the glasses' recognition of a tracker placed on the skin. We used this technology, in addition to the usual locating methods, for preoperative planning and the selection of perforating vessels in 12 patients who underwent breast reconstruction with a deep inferior epigastric artery perforator flap. The "hands-free" smart glasses with two stereoscopic screens make it possible to provide the reconstructive surgeon with binocular visualization, within the operative field, of the vessels identified on the CT scan. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Augmented reality-guided artery-first pancreatico-duodenectomy.
Marzano, Ettore; Piardi, Tullio; Soler, Luc; Diana, Michele; Mutter, Didier; Marescaux, Jacques; Pessaux, Patrick
2013-11-01
Augmented Reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative work-up and real-time patient images, with the aim of visualizing unapparent anatomical details. The potential of AR navigation as a tool to improve safety of the surgical dissection is presented in a case of pancreatico-duodenectomy (PD). A 77-year-old male patient underwent an AR-assisted PD. The 3D virtual anatomical model was obtained from a thoraco-abdominal CT scan using custom software (VR-RENDER®, IRCAD). The virtual model was superimposed onto the operative field using an Exoscope (VITOM®, Karl Storz, Tuttlingen, Germany) as well as different visible landmarks (inferior vena cava, left renal vein, aorta, superior mesenteric vein, inferior margin of the pancreas). A computer scientist manually registered virtual and real images in real time using a video mixer (MX 70; Panasonic, Secaucus, NJ). Dissection of the superior mesenteric artery and the hanging maneuver were performed under AR guidance along the hanging plane. AR allowed for precise and safe recognition of all the important vascular structures. Operative time was 360 min. AR display and fine registration were performed within 6 min. The postoperative course was uneventful. The pathology was positive for ampullary adenocarcinoma; the final stage was pT1N0 (0/43 retrieved lymph nodes) with clear surgical margins. AR is a valuable navigation tool that can enhance the ability to achieve a safe surgical resection during PD.
Full State Feedback Control for Virtual Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Tillay
This report presents an object-oriented implementation of full state feedback control for virtual power plants (VPP). The components of the VPP full state feedback control are (1) object-oriented high-fidelity modeling for all devices in the VPP; (2) Distribution System Distributed Quasi-Dynamic State Estimation (DS-DQSE), which enables full observability of the VPP by augmenting actual measurements with virtual, derived and pseudo measurements and performing the Quasi-Dynamic State Estimation (QSE) in a distributed manner; and (3) automated formulation of the Optimal Power Flow (OPF) in real time using the output of the DS-DQSE, and solving the distributed OPF to provide the optimal control commands to the DERs of the VPP.
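The full state feedback law named in the report can be illustrated with a toy closed-loop simulation; the two-state plant matrices and the gain below are illustrative assumptions for a sketch, not values taken from the report:

```python
import numpy as np

# Minimal sketch of full state feedback u = -K x on a toy discrete-time
# two-state plant (e.g. an aggregate VPP deviation model); A, B and K
# are hand-picked illustrative values, not the report's design.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # open-loop plant (a double integrator)
B = np.array([[0.0],
              [0.1]])
K = np.array([[4.0, 4.5]])      # stabilizing gain chosen by hand

x = np.array([[1.0], [0.0]])    # initial deviation from the setpoint
norms = [float(np.linalg.norm(x))]
for _ in range(50):
    u = -K @ x                  # full state feedback control law
    x = A @ x + B @ u           # closed-loop state update
    norms.append(float(np.linalg.norm(x)))

print(norms[0], norms[-1])      # deviation shrinks under feedback
```

With this gain the closed-loop eigenvalues of `A - B @ K` lie inside the unit circle, so the state deviation decays toward zero, which is the property the controller design is meant to guarantee.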
Fluet, Gerard G; Deutsch, Judith E
2013-03-01
Developments over the past 2 years in virtual reality (VR) augmented sensorimotor rehabilitation of upper limb use and gait post-stroke were reviewed. Studies were included if they evaluated comparative efficacy between VR and standard of care, and/or differences in VR delivery methods, and were CEBM (Centre for Evidence-Based Medicine) level 2 or higher. Eight upper limb and two gait studies were included and described using the following categories: hardware (input and output), software (virtual task, feedback, and presentation), intervention (progression and dose), and outcomes. Trends in the field were commented on, gaps in knowledge identified, and areas of future research and translation of VR to practice were suggested.
Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy.
Pessaux, Patrick; Diana, Michele; Soler, Luc; Piardi, Tullio; Mutter, Didier; Marescaux, Jacques
2015-04-01
Augmented reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative workup and real-time patient images in order to visualize unapparent anatomical details. The 3D model can also be used for preoperative planning of the procedure. The potential of AR navigation as a tool to improve safety of the surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained from a thoracoabdominal CT scan using custom software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed onto the operative field. A computer scientist manually registered virtual and real images in real time using a video mixer (MX 70; Panasonic, Secaucus, NJ). Two totally robotic AR-assisted segment V segmentectomies and one segment VI segmentectomy were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6-10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful without perioperative transfusion. AR is a valuable navigation tool that may enhance the ability to achieve safe surgical resection during robotic hepatectomy.
Immersive Technologies and Language Learning
ERIC Educational Resources Information Center
Blyth, Carl
2018-01-01
This article briefly traces the historical conceptualization of linguistic and cultural immersion through technological applications, from the early days of locally networked computers to the cutting-edge technologies known as virtual reality and augmented reality. Next, the article explores the challenges of immersive technologies for the field…
Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J.; Latorre, José M.; Rodriguez-Jimenez, Roberto
2017-01-01
This perspective paper addresses the future of alternative treatments that take a social and cognitive approach to complement pharmacological therapy of auditory verbal hallucinations (AVH) in patients with schizophrenia. AVH, the perception of voices in the absence of auditory stimulation, are a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies increasingly used in medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will follow a pathway toward human-avatar symbiosis for AVH by taking full advantage of new technologies. This outlook requires addressing challenging issues in the understanding of non-pharmacological treatment of schizophrenia-related disorders and the exploitation of VR/AR and BCI to achieve a real human-avatar symbiosis. PMID:29209193
[Registration technology for mandibular angle osteotomy based on augmented reality].
Zhu, Ming; Chai, Gang; Zhang, Yan; Ma, Xiao-Fei; Yu, Zhe-Yuan; Zhu, Yi-Jia
2010-12-01
To establish an effective way to register the operative plan onto a real model of the mandible made by rapid prototyping (RP) technology. Computed tomography (CT) was performed on 20 patients to create 3D images, with which the computer-aided operation planning information could be merged. A dental cast was then used to hold a marker that could be recognized by the software. The dental cast was converted to 3D data with a laser scanner and a program named Rapidform running on a personal computer, which matched the dental cast to the mandible image to generate the virtual image. Registration was then achieved through a video monitoring system. Using this technology, the virtual image of the mandible and the cutting planes could both be overlaid on the real model of the mandible made by RP. This study found an effective way to perform registration using a dental cast, which may be a powerful option for registration in augmented reality. Supported by the Program for Innovation Research Team of Shanghai Municipal Education Commission.
On the use of virtual and augmented reality for upper limb prostheses training and simulation.
Lamounier, Edgard; Lopes, Kenedy; Cardoso, Alexandre; Andrade, Adriano; Soares, Alcimar
2010-01-01
Accidents happen, and unfortunately people may lose parts of their bodies. Studies have shown that in such cases most individuals suffer both physically and psychologically. For this reason, actions to restore the patient's freedom and mobility are imperative. Traditional solutions require ways to adapt the individual to prosthetic devices; the same applies to patients with congenital limitations. However, one of the major difficulties faced by those fitted with these devices is the great mental effort needed during the first stages of training. As a result, a meaningful number of patients give up using these devices very soon. This article therefore reports on a solution designed by the authors to help patients during the learning phases without actually having to wear the prosthesis. The solution uses Virtual Reality (VR) and Augmented Reality (AR) techniques to mimic the prosthesis's natural counterparts. It is thus expected that problems such as weight, heat, and pain will not add to an already hard task.
Augmented Reality versus Virtual Reality for 3D Object Manipulation.
Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu
2018-02-01
Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively, and for most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured as task completion time, on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion times in AR than in VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times in each condition (AR or VR). We further found no differences in reported comfort.
Rohmer, Kai; Buschel, Wolfgang; Dachselt, Raimund; Grosch, Thorsten
2015-12-01
At present, photorealistic augmentation is not yet possible since the computational power of mobile devices is insufficient. Even streaming solutions from stationary PCs cause a latency that affects user interactions considerably. Therefore, we introduce a differential rendering method that allows for a consistent illumination of the inserted virtual objects on mobile devices, avoiding delays. The computation effort is shared between a stationary PC and the mobile devices to make use of the capacities available on both sides. The method is designed such that only a minimum amount of data has to be transferred asynchronously between the participants. This allows for an interactive illumination of virtual objects with a consistent appearance under both temporally and spatially varying real illumination conditions. To describe the complex near-field illumination in an indoor scenario, HDR video cameras are used to capture the illumination from multiple directions. In this way, sources of illumination can be considered that are not directly visible to the mobile device because of occlusions and the limited field of view. While our method focuses on Lambertian materials, we also provide some initial approaches to approximate non-diffuse virtual objects and thereby allow for a wider field of application at nearly the same cost.
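Consistent illumination of inserted virtual objects of the kind described here is commonly built on differential rendering, where the object's contribution is the difference between a render of the local scene with and without the object, added to the camera image. The sketch below uses tiny synthetic arrays as stand-ins; it illustrates the classic compositing step, not the paper's actual distributed pipeline:

```python
import numpy as np

# Classic differential rendering composite (after Debevec):
# final = real_frame + (render_with_object - render_without_object).
real      = np.full((2, 2, 3), 0.5)   # captured camera frame
no_object = np.full((2, 2, 3), 0.5)   # render of the local scene only
with_obj  = no_object.copy()
with_obj[0, 0] = [0.8, 0.2, 0.2]      # virtual object covers this pixel
with_obj[1, 1] = [0.4, 0.4, 0.4]      # object darkens this pixel (shadow)

# Pixels the object neither covers nor shadows are left untouched,
# which is what keeps the augmentation consistent with the real image.
composite = np.clip(real + (with_obj - no_object), 0.0, 1.0)
```

Shadowed pixels end up darker than the camera frame and covered pixels take the object's shaded colour, while the rest of the image passes through unchanged.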
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering for outdoor Augmented Reality (AR) has been an attractive topic in computer graphics over the last two decades, as the sizeable number of publications shows. Realistic virtual objects in outdoor AR systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and both virtual and real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of them restricted to non-real-time rendering, so the problem remains, especially for outdoor rendering. This paper proposes a technique to achieve realistic real-time outdoor rendering that takes into account the interaction between sky colours and objects in AR systems, including shadows, at any specific location, date, and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, sky colour is generated with respect to the position of the sun. The second step is the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering, addressing the problem of realistic AR systems. PMID:25268480
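Generating sky colour with respect to the sun's position presupposes computing the solar elevation for a given location, date, and time. A standard approximate formula of the kind such sky models rely on is sketched below; the simplified declination model is an assumption for illustration, not the paper's own implementation:

```python
import math

def solar_elevation_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees.

    Uses a simple cosine model for the solar declination and the
    standard spherical-trig relation; accurate to within about a
    degree, which is plenty for driving a sky-colour model.
    """
    decl = -23.44 * math.cos(2 * math.pi / 365 * (day_of_year + 10))
    hour_angle = 15.0 * (solar_hour - 12.0)   # degrees from solar noon
    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    s = (math.sin(lat) * math.sin(d)
         + math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(math.asin(s))

# Near the March equinox at the equator the noon sun is almost overhead.
print(solar_elevation_deg(0.0, 81, 12.0))
```

Once the elevation (and azimuth) are known for the given date and time, the renderer can place the sun, colour the sky accordingly, and orient the shadow maps.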
Towards Pervasive Augmented Reality: Context-Awareness in Augmented Reality.
Grubert, Jens; Langlotz, Tobias; Zollmann, Stefanie; Regenbrecht, Holger
2017-06-01
Augmented Reality is a technique that enables users to interact with their physical environment through the overlay of digital information. While it has been researched for decades, Augmented Reality has more recently moved out of the research labs and into the field. While most current applications are used sporadically and for one particular task only, future scenarios will provide a continuous and multi-purpose user experience. Therefore, in this paper, we present the concept of Pervasive Augmented Reality, aiming to provide such an experience by sensing the user's current context and adapting the AR system to the changing requirements and constraints. We present a taxonomy for Pervasive Augmented Reality and context-aware Augmented Reality, which classifies context sources and context targets relevant for implementing such a context-aware, continuous Augmented Reality experience. We further summarize existing approaches that contribute towards Pervasive Augmented Reality. Based on our taxonomy and survey, we identify challenges for future research directions in Pervasive Augmented Reality.
Using virtual robot-mediated play activities to assess cognitive skills.
Encarnação, Pedro; Alvarez, Liliana; Rios, Adriana; Maya, Catarina; Adams, Kim; Cook, Al
2014-05-01
To evaluate the feasibility of using virtual robot-mediated play activities to assess cognitive skills. Children with and without disabilities utilized both a physical robot and a matching virtual robot to perform the same play activities. The activities were designed such that successfully performing them is an indication of understanding of the underlying cognitive skills. Participants' performance with both robots was similar when evaluated by the success rates in each of the activities. Session video analysis encompassing participants' behavioral, interaction and communication aspects revealed differences in sustained attention, visuospatial and temporal perception, and self-regulation, favoring the virtual robot. The study shows that virtual robots are a viable alternative to the use of physical robots for assessing children's cognitive skills, with the potential of overcoming limitations of physical robots such as cost, reliability and the need for on-site technical support. Virtual robots can provide a vehicle for children to demonstrate cognitive understanding. Virtual and physical robots can be used as augmentative manipulation tools allowing children with disabilities to actively participate in play, educational and therapeutic activities. Virtual robots have the potential of overcoming limitations of physical robots such as cost, reliability and the need for on-site technical support.
Design-of-Experiments Approach to Improving Inferior Vena Cava Filter Retrieval Rates.
Makary, Mina S; Shah, Summit H; Warhadpande, Shantanu; Vargas, Ivan G; Sarbinoff, James; Dowell, Joshua D
2017-01-01
The association of retrievable inferior vena cava filters (IVCFs) with adverse events has led to increased interest in prompt retrieval, particularly in younger patients given the progressive nature of these complications over time. This study takes a design-of-experiments (DOE) approach to investigate methods to best improve filter retrieval rates, with a particular focus on younger (<60 years) patients. A DOE approach was executed in which combinations of variables were tested to best improve retrieval rates. The impact of a virtual IVCF clinic, primary care physician (PCP) letters, and discharge instructions was investigated. The decision for filter retrieval in group 1 was determined solely by the referring physician. Group 2 included those patients prospectively followed in an IVCF virtual clinic in which filter retrieval was coordinated by the interventional radiologist when clinically appropriate. In group 3, in addition to being followed through the IVCF clinic, each patient's PCP was faxed a follow-up letter, and information regarding IVCF retrieval was added to the patient's discharge instructions. A total of 10 IVCFs (8.4%) were retrieved among 119 retrievable IVCFs placed in group 1. Implementation of the IVCF clinic in group 2 significantly improved the retrieval rate to 25.3% (23 of 91 retrievable IVCFs placed, P < .05). The addition of discharge instructions and PCP letters to the virtual clinic (group 3) resulted in a retrieval rate of 33.3% (17 of 51). The retrieval rates demonstrated more pronounced improvement when examining only younger patients, with retrieval rates of 11.3% (7 of 62), 29.5% (13 of 44, P < .05), and 45.2% (14 of 31) for groups 1, 2, and 3, respectively. DOE methodology is not routinely executed in health care, but it is an effective approach to evaluating clinical practice behavior and patient quality measures. 
In this study, implementation of the combination of a virtual clinic, PCP letters, and discharge instructions improved retrieval rates compared with a virtual clinic alone. Quality improvement strategies such as these that augment patient and referring physician knowledge on interventional radiologic procedures may ultimately improve patient safety and personalized care. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
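The retrieval percentages quoted in this study follow directly from the reported counts; the snippet below simply reproduces the arithmetic for the three groups:

```python
# Reproduce the reported IVCF retrieval rates from the raw counts:
# group 1 = referring-physician decision alone, group 2 = virtual
# clinic, group 3 = virtual clinic + PCP letters + discharge notes.
groups = {   # group: (filters retrieved, retrievable filters placed)
    "group 1": (10, 119),
    "group 2": (23, 91),
    "group 3": (17, 51),
}
rates = {g: round(100 * r / n, 1) for g, (r, n) in groups.items()}
print(rates)  # {'group 1': 8.4, 'group 2': 25.3, 'group 3': 33.3}
```

The same calculation applied to the younger-patient counts (7/62, 13/44, 14/31) yields the 11.3%, 29.5%, and 45.2% figures given in the abstract.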
NASA Astrophysics Data System (ADS)
Canciani, M.; Conigliaro, E.; Del Grasso, M.; Papalini, P.; Saccone, M.
2016-06-01
The development of close-range photogrammetry has opened up many new possibilities for studying cultural heritage. 3D data acquired with conventional and low-cost cameras can be used to document a structure, investigate its full appearance, materials, and conservation status, support the restoration process, and identify intervention priorities. At the same time, although 3D surveys let researchers collect and analyze large amounts of three-dimensional data, there are few options for 3D output. Augmented reality is one such output: a very low-cost technology with very interesting results. Using simple mobile technology (iPad and Android tablets) and shareware software ("Augment" in the case presented), it is possible to share and visualize a large number of 3D models on one's own device. The case study presented is part of an architecture graduate thesis carried out at the Department of Architecture of Roma Tre University in Rome. We developed a photogrammetric survey to study the Aurelian Wall at Castra Praetoria in Rome. The survey of 8,000 square meters of surface allowed us to identify the stratigraphy and construction phases of a complex portion of the Aurelian Wall, especially around the northern door of the Castra. During this study, the data coming out of the 3D survey (photogrammetric and topographic) were stored and used to create a reverse 3D model, or virtual reconstruction, of the northern door of the Castra. This virtual reconstruction shows the door in the Tiberian period; nowadays it is totally hidden by a curtain wall, but small yet significant architectural details reveal its original features. The 3D model of the ancient walls was mapped with the exact type of bricks and mortar, oriented and scaled according to the existing wall, for use in augmented reality. Finally, two kinds of application were developed: one on site, where the virtual reconstruction can be seen superimposed on the existing walls using image recognition, and one off-site, created to present the results on graduation day using a poster.
Aggarwal, Rajesh; Balasundaram, Indran; Darzi, Ara
2008-03-01
Within the past decade, there has been increasing interest in simulation-based devices for training and assessment of technical skills, especially for minimally invasive techniques such as laparoscopy. The aim of this study was to investigate the perceptions of senior and junior surgeons of virtual reality simulation within the context of current training opportunities for basic laparoscopic procedures. A postal questionnaire was sent to 245 consultants and their corresponding specialist registrars (SpRs), detailing laparoscopic surgical practice and their knowledge and use of virtual reality (VR) surgical simulators. One hundred ninety-one (78%) consultants and 103 (42%) SpRs returned questionnaires; 16% (10/61) of junior SpRs (years 1-4) had performed more than 50 laparoscopic cholecystectomies to date compared with 76% (32/42) of senior SpRs (years 5-6) (P < 0.001); 90% (55/61) of junior SpRs and 67% (28/42) of senior SpRs were keen to augment their training with VR (P = 0.007); 81% (238/294) of all surgeons agreed that VR has a useful role in the laparoscopic surgical training curriculum. Junior SpRs lack experience in index laparoscopic cases, and laparoscopic VR simulation is recognized as a useful mode of practice for acquiring technical skills. This should encourage surgical program directors to drive the integration of simulation-based training into the surgical curriculum.
Application of Virtual and Augmented reality to geoscientific teaching and research.
NASA Astrophysics Data System (ADS)
Hodgetts, David
2017-04-01
The geological sciences are ideal candidates for the application of Virtual Reality (VR) and Augmented Reality (AR). Digital data collection techniques such as laser scanning, digital photogrammetry, and the increasing use of Unmanned Aerial Vehicle (UAV) or Small Unmanned Aircraft (SUA) technology allow us to collect large datasets efficiently and ever more affordably. This, combined with the recent resurgence in VR and AR technologies, makes these 3D digital datasets even more valuable. These advances in VR and AR have been further supported by rapid improvements in graphics card technologies and by the development of high-performance software applications. Visualising data in VR is more complex than normal 3D rendering: consideration needs to be given to latency, frame rate, and the comfort of the viewer to enable reasonably long immersion times. Each frame has to be rendered from two viewpoints (one for each eye), requiring twice the rendering of normal monoscopic views. Any unnatural effects (e.g. incorrect lighting) can lead to an uncomfortable VR experience, so these have to be minimised. With large digital outcrop datasets comprising tens to hundreds of millions of triangles this is challenging but achievable. Apart from the obvious "wow factor" of VR there are some serious applications. It is often the case that users of digital outcrop data do not appreciate the size of the features they are dealing with. This is not the case when using correctly scaled VR, where a true sense of scale can be achieved. In addition, VR provides an excellent way of performing quality control on 3D models and interpretations, as errors are much more easily visible. VR models can then be used to create content for AR applications, closing the loop and taking interpretations back into the field.
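The two-viewpoint stereo rendering mentioned above can be sketched minimally: each eye's camera position is offset by half the interpupillary distance (IPD) along the camera's right axis. This is an illustrative helper, not code from the paper; the function name and the 64 mm default IPD are assumptions.

```python
def eye_positions(camera_pos, right_vector, ipd=0.064):
    """Left/right eye positions for stereo rendering, offset by half the
    interpupillary distance (IPD, in metres) along the camera's unit
    right axis. Each frame is then rendered once per returned position."""
    half = ipd / 2.0
    left = [c - half * r for c, r in zip(camera_pos, right_vector)]
    right = [c + half * r for c, r in zip(camera_pos, right_vector)]
    return left, right
```

In a real engine the same offset is usually folded into per-eye view matrices, but the geometric idea is identical.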
Wang, Huixiang; Wang, Fang; Leong, Anthony Peng Yew; Xu, Lu; Chen, Xiaojun; Wang, Qiugen
2016-09-01
Augmented reality (AR) enables superimposition of virtual images onto the real world. The aim of this study is to present a novel AR-based navigation system for sacroiliac screw insertion and to evaluate its feasibility and accuracy in cadaveric experiments. Six cadavers with intact pelvises were employed in our study. They were CT scanned and the pelvis and vessels were segmented into 3D models. The ideal trajectory of the sacroiliac screw was planned and represented visually as a cylinder. For the intervention, the head mounted display created a real-time AR environment by superimposing the virtual 3D models onto the surgeon's field of view. The screws were drilled into the pelvis as guided by the trajectory represented by the cylinder. Following the intervention, a repeat CT scan was performed to evaluate the accuracy of the system, by assessing the screw positions and the deviations between the planned trajectories and inserted screws. Post-operative CT images showed that all 12 screws were correctly placed with no perforation. The mean deviation between the planned trajectories and the inserted screws was 2.7 ± 1.2 mm at the bony entry point, 3.7 ± 1.1 mm at the screw tip, and the mean angular deviation between the two trajectories was 2.9° ± 1.1°. The mean deviation at the nerve root tunnels region on the sagittal plane was 3.6 ± 1.0 mm. This study suggests an intuitive approach for guiding screw placement by way of AR-based navigation. This approach was feasible and accurate. It may serve as a valuable tool for assisting percutaneous sacroiliac screw insertion in live surgery.
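The accuracy evaluation described above (entry-point, tip, and angular deviation between the planned and inserted screw trajectories) amounts to simple vector geometry. A minimal sketch, with hypothetical function and argument names not taken from the paper; each trajectory is given by its bony entry point and screw tip in millimetres:

```python
import math

def deviation_metrics(planned_entry, planned_tip, actual_entry, actual_tip):
    """Compare a planned screw trajectory with the inserted one.
    Returns (entry deviation mm, tip deviation mm, angular deviation deg)."""
    def dist(a, b):
        # Euclidean distance between two 3D points
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def direction(entry, tip):
        # Unit vector along the trajectory from entry to tip
        d = [t - e for e, t in zip(entry, tip)]
        n = math.sqrt(sum(c * c for c in d))
        return [c / n for c in d]

    u = direction(planned_entry, planned_tip)
    v = direction(actual_entry, actual_tip)
    # Clamp the dot product to avoid acos domain errors from rounding
    cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return (dist(planned_entry, actual_entry),
            dist(planned_tip, actual_tip),
            math.degrees(math.acos(cos_a)))
```

A trajectory translated sideways without rotation yields equal entry and tip deviations and a zero angular deviation, matching the intuition behind reporting the two metrics separately.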
Augmented versus Virtual Reality Laparoscopic Simulation: What Is the Difference?
Botden, Sanne M.B.I.; Buzink, Sonja N.; Schijven, Marlies P.
2007-01-01
Background: Virtual reality (VR) is an emerging modality for laparoscopic skills training; however, most simulators lack realistic haptic feedback. Augmented reality (AR) is a new laparoscopic simulation system offering a combination of physical objects and VR simulation. Laparoscopic instruments are used within a hybrid mannequin on tissue or objects while using video tracking. This study was designed to assess the difference in realism, haptic feedback, and didactic value between AR and VR laparoscopic simulation. Methods: The ProMIS AR and LapSim VR simulators were used in this study. The participants performed a basic skills task and a suturing task on both simulators, after which they filled out a questionnaire about their demographics and their opinion of both simulators, scored on a 5-point Likert scale. The participants were allocated to 3 groups depending on their experience: experts, intermediates, and novices. Significant differences were calculated with the paired t-test. Results: There was general consensus in all groups that the ProMIS AR laparoscopic simulator is more realistic than the LapSim VR laparoscopic simulator in both the basic skills task (mean 4.22 vs. 2.18, P < 0.001) and the suturing task (mean 4.15 vs. 1.85, P < 0.001). The ProMIS is regarded as having better haptic feedback (mean 3.92 vs. 1.92, P < 0.001) and as being more useful for training surgical residents (mean 4.51 vs. 2.94, P < 0.001). Conclusions: In comparison with the VR simulator, the AR laparoscopic simulator was regarded by all participants as a better simulator for laparoscopic skills training on all tested features. PMID:17361356
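The paired t-test used for these matched Likert-score comparisons can be sketched as follows. This is a generic textbook computation, not the authors' code; assessing significance would additionally require comparing the statistic against the t-distribution with the returned degrees of freedom.

```python
import math

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for two matched samples,
    e.g. each participant's rating of the AR vs. the VR simulator."""
    d = [a - b for a, b in zip(x, y)]     # per-participant differences
    n = len(d)
    mean = sum(d) / n
    # Sample variance of the differences (Bessel's correction, n - 1)
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    t = mean / math.sqrt(var / n)
    return t, n - 1
```

In practice one would use a statistics library (e.g. `scipy.stats.ttest_rel`) to also obtain the p-value.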
Chan, Teresa; Sennik, Serena; Zaki, Amna; Trotter, Brendon
2015-03-01
Cloud-based applications such as Google Docs, Skype, Dropbox, and SugarSync are revolutionizing the way that we interact with the world. Members of the millennial generation (those born after 1980) are now becoming senior residents and junior attending physicians. We describe a novel technique combining Internet- and cloud-based methods to digitally augment the classic study group used by final-year residents studying for the Royal College of Physicians and Surgeons of Canada examination. This material was developed by residents and improved over the course of 18 months. This is an innovation report about a process for enhanced communication and collaboration; to date there has been little research on augmenting learner-driven initiatives with virtual resources.
Use of display technologies for augmented reality enhancement
NASA Astrophysics Data System (ADS)
Harding, Kevin
2016-06-01
Augmented reality (AR) is seen as an important tool for the future of user interfaces as well as training applications. An important application area for AR is expected to be the digitization of training and worker instructions used in the Brilliant Factory environment. The transition of work-instruction methods from printed pages in a book, or pages taped to a machine, to virtual simulations is a long step with many challenges along the way. A variety of augmented reality tools are being explored today for industrial applications, ranging from simple programmable projections in the workspace to 3D displays and head-mounted gear. This paper reviews where some of these tools are today and some of the pros and cons being considered for the future worker environment.
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
Alaraj, Ali; Charbel, Fady T.; Birk, Daniel; Tobin, Mathew; Luciano, Cristian; Banerjee, Pat P.; Rizzi, Silvio; Sorenson, Jeff; Foley, Kevin; Slavin, Konstantin; Roitberg, Ben
2013-01-01
Recent studies have shown that mental script-based rehearsal and simulation-based training improve the transfer of surgical skills in various medical disciplines. Despite significant advances in technology and intraoperative techniques over the last several decades, surgical skills training on neurosurgical operations still carries significant risk of serious morbidity or mortality. Potentially avoidable technical errors are well recognized as contributing to poor surgical outcomes. Surgical education is undergoing overwhelming change, with reduced working hours and current trends focusing on patient safety and linking reimbursement with clinical outcomes; there is therefore a need for adjunctive means of neurosurgical training, and recent advances in simulation technology provide one. ImmersiveTouch (IT) is an augmented reality (AR) system that integrates a haptic device and a high-resolution stereoscopic display. This simulation platform utilizes multiple sensory modalities, recreating many of the environmental cues experienced during an actual procedure. Modules available include ventriculostomy, bone drilling, and percutaneous trigeminal rhizotomy, in addition to simulated spinal modules such as pedicle screw placement, vertebroplasty, and lumbar puncture. We present our experience with the development of such AR neurosurgical modules and the feedback from neurosurgical residents. PMID:23254799
Applied virtual reality at the Research Triangle Institute
NASA Technical Reports Server (NTRS)
Montoya, R. Jorge
1994-01-01
Virtual Reality (VR) is a way for humans to use computers in visualizing, manipulating, and interacting with large geometric databases. This paper describes a VR infrastructure and its application to marketing, modeling, architectural walk-through, and training problems. VR integration techniques used in these applications are based on a uniform approach which promotes portability and reusability of developed modules. For each problem, a 3D object database is created using data captured by hand or electronically. The objects' realism is enhanced through either procedural or photo textures. The virtual environment is created and populated with the database using software tools which also support interaction with, and immersion in, the environment. These capabilities are augmented by other sensory channels such as voice recognition, 3D sound, and tracking. Four applications are presented: a virtual furniture showroom, virtual reality models of the North Carolina Global TransPark, a walk through the Dresden Frauenkirche, and a maintenance training simulator for the National Guard.
Internet virtual studio: low-cost augmented reality system for WebTV
NASA Astrophysics Data System (ADS)
Sitnik, Robert; Pasko, Slawomir; Karaszewski, Maciej; Witkowski, Marcin
2008-02-01
In this paper, a concept of an Internet Virtual Studio, a modern system for the production of news, entertainment, educational, and training material, is proposed. The system is based on virtual studio technology and integrated with a multimedia database. It was developed for web television content production. In successive subsections, the general system architecture and the architecture of each module are discussed. The authors describe each module with brief information about its working principles and technical limitations, together with a presentation of its capabilities. Results produced by each module are shown as exemplary images. Finally, an exemplary short production is presented and discussed.
Fluet, Gerard G.
2013-01-01
Developments over the past 2 years in virtual reality (VR)-augmented sensorimotor rehabilitation of upper limb use and gait post-stroke were reviewed. Studies were included if they evaluated comparative efficacy between VR and standard of care and/or differences in VR delivery methods, and were CEBM (Centre for Evidence-Based Medicine) level 2 or higher. Eight upper limb and two gait studies were included and described using the following categories: hardware (input and output), software (virtual task, feedback, and presentation), intervention (progression and dose), and outcomes. Trends in the field were commented on, gaps in knowledge were identified, and areas for future research and for translation of VR to practice were suggested. PMID:24579058
Webizing mobile augmented reality content
NASA Astrophysics Data System (ADS)
Ahn, Sangchul; Ko, Heedong; Yoo, Byounghyun
2014-01-01
This paper presents a content structure for building mobile augmented reality (AR) applications in HTML5 to achieve a clean separation of the mobile AR content and the application logic, so that applications scale as on the Web. We propose that the content structure represent the physical world as well as virtual assets for mobile AR applications as document object model (DOM) elements, and that their behaviour and user interactions be controlled through DOM events, with objects and places identified by uniform resource identifiers. Our content structure enables mobile AR applications to be seamlessly developed as normal HTML documents under the current Web ecosystem.
LivePhantom: Retrieving Virtual World Light Data to Real Environments.
Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal
2016-01-01
To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
Virtual reality and physical rehabilitation: a new toy or a new research and rehabilitation tool?
Keshner, Emily A
2004-01-01
Virtual reality (VR) technology is rapidly becoming a popular application for physical rehabilitation and motor control research. But questions remain about whether this technology really extends our ability to influence the nervous system or whether moving within a virtual environment just motivates the individual to perform. I served as guest editor of this month's issue of the Journal of NeuroEngineering and Rehabilitation (JNER) for a group of papers on augmented and virtual reality in rehabilitation. These papers demonstrate a variety of approaches taken for applying VR technology to physical rehabilitation. The papers by Kenyon et al. and Sparto et al. address critical questions about how this technology can be applied to physical rehabilitation and research. The papers by Sveistrup and Viau et al. explore whether action within a virtual environment is equivalent to motor performance within the physical environment. Finally, papers by Riva et al. and Weiss et al. discuss the important characteristics of a virtual environment that will be most effective for obtaining changes in the motor system. PMID:15679943
Combined virtual and real robotic test-bed for single operator control of multiple robots
NASA Astrophysics Data System (ADS)
Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash
2010-04-01
Teams of heterogeneous robots with different dynamics or capabilities can perform a variety of tasks such as multipoint surveillance, cooperative transport, and exploration in hazardous environments. In this study, we work with heterogeneous teams of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system which linked every real robot to its virtual counterpart. A novel virtual interface has been integrated with Augmented Reality that can monitor the position and sensory information from the video feeds of ground and aerial robots in the 3D virtual environment, improving user situational awareness. An operator can efficiently control the real multi-robot team using the Drag-to-Move method on the virtual robots. This enables an operator to control groups of heterogeneous robots in a collaborative way, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, the image guidance and tracking feature is able to reduce operator workload.
Virtual reality gaming in the rehabilitation of the upper extremities post-stroke.
Yates, Michael; Kelemen, Arpad; Sik Lanyi, Cecilia
2016-01-01
Occurrences of stroke often result in unilateral upper limb dysfunction. Dysfunctions of this nature frequently persist and can present chronic limitations to activities of daily living. Research into applying virtual reality gaming systems to provide rehabilitation therapy has seen a resurgence. Themes explored in stroke rehabilitation for paretic limbs are action observation and imitation; versatility; intensity and repetition; and preservation of gains. Fifteen articles were ultimately selected for review. The purpose of this literature review is to compare the various virtual reality gaming modalities in the current literature and ascertain their efficacy. The literature supports the use of virtual reality gaming rehabilitation therapy as equivalent to traditional therapies or as a successful augmentation to those therapies. While some degree of rigor was displayed in the literature, small sample sizes, variation in study lengths and therapy durations, and unequal controls reduce generalizability and comparability. Future studies should incorporate larger sample sizes and post-intervention follow-up measures.
An intelligent control and virtual display system for evolutionary space station workstation design
NASA Technical Reports Server (NTRS)
Feng, Xin; Niederjohn, Russell J.; Mcgreevy, Michael W.
1992-01-01
Research and development of the Advanced Display and Computer Augmented Control System (ADCACS) for the space station Body-Ported Cupola Virtual Workstation (BP/VCWS) were pursued. We explored the potential applications of body-ported virtual display and intelligent control technology for human-system interfacing in the space station environment. The new system is designed to enable crew members to control and monitor a variety of space operations with greater flexibility and efficiency than existing fixed consoles. The technologies being studied include helmet-mounted virtual displays, voice and special command input devices, and microprocessor-based intelligent controllers. Several research topics, such as human factors, decision-support expert systems, and wide-field-of-view color displays, are being addressed. The study showed the significant advantages of this uniquely integrated display and control system, and its feasibility for human-system interfacing applications in the space station command and control environment.
Designing for Virtual Windows in a Deep Space Habitat
NASA Technical Reports Server (NTRS)
Howe, A. Scott; Howard, Robert L.; Moore, Nathan; Amoroso, Michael
2013-01-01
This paper discusses configurations and test analogs toward the design of a virtual window capability in a Deep Space Habitat. Long-duration space missions will require crews to remain in the confines of a spacecraft for extended periods of time, with possible harmful effects if a crewmember cannot cope with the small habitable volume. Virtual windows expand perceived volume using a minimal amount of image projection equipment and computing resources, and allow a limited immersion in remote environments. Uses for the virtual window include: live or augmented reality views of the external environment; flight deck, piloting, observation, or other participation in remote missions through live transmission of cameras mounted to remote vehicles; pre-recorded background views of nature areas, seasonal occurrences, or cultural events; and pre-recorded events such as birthdays, anniversaries, and other meaningful events prepared by ground support and families of the crewmembers.
A Process Study of the Development of Virtual Research Environments
NASA Astrophysics Data System (ADS)
Ahmed, I.; Cooper, K.; McGrath, R.; Griego, G.; Poole, M. S.; Hanisch, R. J.
2014-05-01
In recent years, cyberinfrastructures have been deployed to create virtual research environments (VREs) - such as the Virtual Astronomical Observatory (VAO) - to enhance the quality and speed of scientific research, and to foster global scientific communities. Our study utilizes process methodology to study the evolution of VREs. This approach focuses on a series of events that bring about or lead to some outcome, and attempts to specify the generative mechanism that could produce the event series. This paper briefly outlines our approach and describes initial results of a case study of the VAO, one of the participating VREs. The case study is based on interviews with seven individuals participating in the VAO, and analysis of project documents and online resources. These sources are hand tagged to identify events related to the thematic tracks, to yield a narrative of the project. Results demonstrate the event series of an organization through traditional methods augmented by virtual sources.
The assessment of virtual reality for human anatomy instruction
NASA Technical Reports Server (NTRS)
Benn, Karen P.
1994-01-01
This research project seeks to meet the objective of science training by developing, assessing, and validating virtual reality as a human anatomy training medium. In ideal situations, anatomic models, computer-based instruction, and cadaver dissection are utilized to augment the traditional methods of instruction. At many institutions, lack of financial resources limits anatomy instruction to textbooks and lectures. However, human anatomy is three-dimensional, unlike the flat depictions found in textbooks and on the computer screen. Virtual reality is a breakthrough technology that allows one to step through the computer screen into a three-dimensional world. This technology offers many opportunities to enhance science education. Therefore, a virtual testing environment of the abdominopelvic region of a human cadaver was created to study the placement of body parts within the nine anatomical divisions of the abdominopelvic region and the four abdominal quadrants.
Ionosphere Profile Estimation Using Ionosonde & GPS Data in an Inverse Refraction Calculation
NASA Astrophysics Data System (ADS)
Psiaki, M. L.
2014-12-01
A method has been developed to assimilate ionosonde virtual heights and GPS slant TEC data to estimate the parameters of a local ionosphere model, including estimates of the topside and of latitude and longitude variations. This effort seeks to better assimilate a variety of remote sensing data in order to characterize local (and eventually regional and global) ionosphere electron density profiles. The core calculations involve a forward refractive ray-tracing solution and a nonlinear optimal estimation algorithm that inverts the forward model. The ray-tracing calculations solve a nonlinear two-point boundary value problem for the curved ionosonde or GPS ray path through a parameterized electron density profile. It implements a full 3D solution that can handle the case of a tilted ionosphere. These calculations use Hamiltonian equivalents of the Appleton-Hartree magneto-plasma refraction index model. The current ionosphere parameterization is a modified Booker profile, augmented to include latitude and longitude dependencies. The forward ray-tracing solution yields a given signal's group delay and beat carrier phase observables. An auxiliary set of boundary value problem solutions determines the sensitivities of the ray paths and observables with respect to the parameters of the augmented Booker profile. The nonlinear estimation algorithm compares the measured ionosonde virtual-altitude observables and GPS slant-TEC observables to the corresponding values from the forward refraction model. It uses the parameter sensitivities of the model to iteratively improve its parameter estimates in a way that reduces the residual errors between the measurements and their modeled values. This method has been applied to data from HAARP in Gakona, AK and has produced good TEC and virtual height fits.
It has been extended to characterize electron density perturbations caused by HAARP heating experiments through the use of GPS slant TEC data for an LOS through the heated zone. The next planned extension of the method is to estimate the parameters of a regional ionosphere profile. The input observables will be slant TEC from an array of GPS receivers and group delay and carrier phase observables from an array of high-frequency beacons. The beacon array will function as a sort of multi-static ionosonde.
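The inversion loop described above — compare modeled observables with measurements, then use parameter sensitivities to improve the estimate — is, in essence, Gauss-Newton nonlinear least squares. A minimal pure-Python sketch under stated assumptions: `forward` is a stand-in for the ray-tracing forward model, sensitivities are obtained by finite differences rather than auxiliary boundary-value solutions, and all names are illustrative, not from the actual system.

```python
import math

def solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination
    with partial pivoting (sufficient for a handful of parameters)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def gauss_newton(forward, p0, measured, iters=15, h=1e-6):
    """Iteratively refine parameters p so forward(p) matches measured."""
    p = list(p0)
    for _ in range(iters):
        f0 = forward(p)
        r = [m - f for m, f in zip(measured, f0)]      # residuals
        # Finite-difference sensitivities J[i][j] = d f_i / d p_j
        J = [[0.0] * len(p) for _ in f0]
        for j in range(len(p)):
            q = list(p)
            q[j] += h
            fq = forward(q)
            for i in range(len(f0)):
                J[i][j] = (fq[i] - f0[i]) / h
        # Normal equations: (J^T J) dp = J^T r
        JtJ = [[sum(J[i][a] * J[i][b] for i in range(len(f0)))
                for b in range(len(p))] for a in range(len(p))]
        Jtr = [sum(J[i][a] * r[i] for i in range(len(f0)))
               for a in range(len(p))]
        dp = solve(JtJ, Jtr)
        p = [pi + di for pi, di in zip(p, dp)]
    return p
```

A toy forward model (e.g. a Gaussian profile whose amplitude and center play the role of profile parameters) is enough to exercise the loop; a production system would add regularization and a convergence test.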
Real-time 3D image reconstruction guidance in liver resection surgery
Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-01-01
Background: Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. Methods: From a patient's medical images [ultrasound, computed tomography (CT), or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools combining direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results: From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were performed, demonstrating the accuracy of the 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in gaining safety, but also the current limits that automatic augmented reality will overcome. Conclusions: Virtual patient modeling should be mandatory for certain interventions that remain to be defined, such as liver surgery. Augmented reality is clearly the next step in the new surgical instrumentation but currently remains limited by the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the hybrid OR. PMID:24812598
USDA-ARS?s Scientific Manuscript database
Host defense peptides (HDPs) constitute a large group of natural broad-spectrum antimicrobials and an important first line of immunity in virtually all forms of life. Specific augmentation of synthesis of endogenous HDPs may represent a promising antibiotic-alternative approach to disease control. I...
When Worlds Collide: An Augmented Reality Check
ERIC Educational Resources Information Center
Villano, Matt
2008-01-01
The technology is simple: Mobile technologies such as handheld computers and global positioning systems work in sync to create an alternate, hybrid world that mixes virtual characters with the actual physical environment. The result is a digital simulation that offers powerful game-playing opportunities and allows students to become more engaged…
ERIC Educational Resources Information Center
Achuthan, Krishnashree; Francis, Saneesh P.; Diwakar, Shyam
2017-01-01
Learning theories converge on the principles of reflective learning processes and perceive them as fundamental to effective learning. Traditional laboratory education in science and engineering often happens in highly resource-constrained environments that compromise some of the learning objectives. This paper focuses on characterizing three…
Genome size in Anthurium evaluated in the context of karyotypes and phenotypes
USDA-ARS?s Scientific Manuscript database
Anthurium is an important horticultural flower crop from family Araceae in order Alismatales, a monocot lineage considered to have diverged from other monocots prior to the divergence of the cereals lineage. Currently there is a virtual lack of molecular-genetic resources that would greatly augment...
Incorporating Technology in Teaching Musical Instruments
ERIC Educational Resources Information Center
Prodan, Angelica
2017-01-01
After discussing some of the drawbacks of using Skype for long distance music lessons, Angelica Prodan describes three different types of Artificial Reality (Virtual Reality, Augmented Reality and Mixed or Merged Reality). She goes on to describe the beneficial applications of technology, with results otherwise impossible to achieve in areas such…
NASA Astrophysics Data System (ADS)
De Breuck, Carlos
2018-03-01
The APEX telescope has a range of instruments that are highly complementary to ALMA. The single-pixel heterodyne receivers cover virtually all atmospheric windows from 157 GHz to above 1 THz, augmented by 7-pixel heterodyne arrays covering 280 to 950 GHz, while the bolometer arrays cover the 870, 450, and 350 µm bands.
Implementation of augmented reality to models sultan deli
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Lumbantobing, N. P.; Siregar, B.; Rahmat, R. F.; Andayani, U.
2018-03-01
Augmented reality is a technology that can provide visualization in the form of a 3D virtual model. Using augmented reality, image-based modeling can be applied to restore photographs of the Sultan of Deli of Istana Maimun into a three-dimensional model. This matters because the Sultan of Deli, one of the important figures in the history of the development of the city of Medan, is little known to the public: surviving images of the Sultanate of Deli are unclear and very old. To achieve this goal, an augmented reality application converts images into 3D models through several toolkits. The output of this method is a visitor's photo of Maimun Palace overlaid with a 3D model of the Sultan of Deli, with marker detection at a distance of 20-60 cm, making it easy for the public to recognize the Sultan of Deli who once ruled in Maimun Palace.
Jiang, Taoran; Zhu, Ming; Zan, Tao; Gu, Bin; Li, Qingfeng
2017-08-01
In perforator flap transplantation, dissection of the perforator is an important but difficult procedure because of the high variability in vascular anatomy. Preoperative imaging techniques can provide substantial information about vascular anatomy; however, they cannot directly guide surgeons during the operation. In this study, a navigation system (NS) was established to overlay a vascular map on surgical sites to further provide a direct guide for perforator flap transplantation. The NS was established based on computed tomographic angiography and augmented reality techniques. A virtual vascular map was reconstructed from computed tomographic angiography data and projected onto real patient images using ARToolKit software. Additionally, a screw-fixation marker holder was created to facilitate registration. With the use of a tracking and display system, we tested the NS on an animal model and measured the system error on a rapid prototyping model. The NS assistance allowed for correct identification, as well as safe and precise dissection, of the perforator. The mean system error was 3.474 ± 1.546 mm. The augmented reality-based NS can provide precise navigation information by directly displaying a 3-dimensional, individual anatomical virtual model onto the operative field in real time. It will allow rapid identification and safe dissection of a perforator in free flap transplantation surgery.
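The core of such marker-based overlays is a rigid marker-to-camera pose combined with a pinhole projection: the toolkit estimates the pose from the tracked marker, after which each virtual vascular point is projected into the camera image. A minimal sketch of that projection step follows; the pose, focal lengths, and point values are illustrative assumptions, not values from the study.

```python
def project_point(point_marker, pose, fx, fy, cx, cy):
    """Project a 3-D point given in marker coordinates into pixel
    coordinates, using a rigid marker pose (R, t) and a pinhole camera.
    R is a 3x3 rotation (list of rows); t is a translation in mm."""
    R, t = pose
    # transform marker -> camera coordinates: pc = R * p + t
    pc = [sum(R[i][j] * point_marker[j] for j in range(3)) + t[i]
          for i in range(3)]
    # pinhole projection onto the image plane
    u = fx * pc[0] / pc[2] + cx
    v = fy * pc[1] / pc[2] + cy
    return u, v

# Identity orientation, marker 500 mm in front of the camera (assumed values)
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 500.0]
print(project_point([10.0, -20.0, 0.0], (R, t), 800, 800, 320, 240))  # → (336.0, 208.0)
```

In the actual system the pose (R, t) would be re-estimated every frame from the screw-fixed marker, which is what keeps the virtual vascular map registered to the moving camera view.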
Hybrid diffractive-refractive optical system design of head-mounted display for augmented reality
NASA Astrophysics Data System (ADS)
Zhang, Huijuan
2005-02-01
An optical see-through head-mounted display for augmented reality is designed in this paper. Considering factors such as optical performance, the energy utilization ratios of the real-world and virtual-world channels, and wearer comfort, an optical see-through structure is adopted. Because a diffractive surface has distinctive negative dispersion and can realize arbitrary phase modulation, it helps reduce the weight and simplify the structure of the optical system, so a diffractive surface is introduced into our design. The resulting optical system has a 25 mm eye relief, a 12 mm exit pupil, and a 20° (H) x 15.4° (V) field of view. The energy utilization ratios of the real-world and virtual-world channels are 1/4 and 1/2, respectively. The angular resolution of the display is 0.27 mrad, below the resolution limit of the human eye. The diameter of the system is less than 46 mm, and it is applied binocularly. This diffractive-refractive optical see-through head-mounted display not only satisfies user-factor requirements in its structure, but also offers high resolution with very small chromatic aberration and distortion, meeting the needs of augmented reality. Finally, the parameters of the diffractive surface are discussed.
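The claim that 0.27 mrad lies below the eye's resolution limit can be sanity-checked against the common one-arcminute figure for human visual acuity; the one-arcminute benchmark is an assumption, not a value taken from the paper.

```python
import math

# One arcminute, a standard figure for the eye's limiting resolution, in mrad
eye_limit_mrad = math.radians(1 / 60) * 1e3   # ≈ 0.291 mrad
display_res_mrad = 0.27                       # angular resolution reported for this design

# The display's angular resolution is finer than the eye can resolve
assert display_res_mrad < eye_limit_mrad
print(round(eye_limit_mrad, 3))  # → 0.291
```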
On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial.
Andress, Sebastian; Johnson, Alex; Unberath, Mathias; Winkler, Alexander Felix; Yu, Kevin; Fotouhi, Javad; Weidert, Simon; Osgood, Greg; Navab, Nassir
2018-04-01
Fluoroscopic x-ray guidance is a cornerstone for percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many x-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasiunprepared operating rooms. The proposed system builds upon a multimodality marker and simultaneous localization and mapping technique to cocalibrate an optical see-through head mounted display to a C-arm fluoroscopy system. Then, annotations on the 2-D x-ray images can be rendered as virtual objects in 3-D providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, design a feasibility study on a semianthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired x-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects that we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.
Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator.
Spillmann, Jonas; Tuchschmid, Stefan; Harders, Matthias
2013-04-01
Passive haptics, also known as tactile augmentation, denotes the use of a physical counterpart to a virtual environment to provide tactile feedback. Employing passive haptics can result in more realistic touch sensations than those from active force feedback, especially for rigid contacts. However, changes in the virtual environment would necessitate modifications of the physical counterparts. In recent work space warping has been proposed as one solution to overcome this limitation. In this technique virtual space is distorted such that a variety of virtual models can be mapped onto one single physical object. In this paper, we propose as an extension adaptive space warping; we show how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model. We developed methods to warp different organ geometries onto one physical mock-up, to handle different mechanical behaviors of the virtual patients, and to allow interactive modifications of the virtual structures, while the physical counterparts remain unchanged. Various practical examples underline the wide applicability of our approach. To the best of our knowledge this is the first practical usage of such a technique in the specific context of interactive medical training.
Augmented Reality-Guided Lumbar Facet Joint Injections.
Agten, Christoph A; Dennler, Cyrill; Rosskopf, Andrea B; Jaberg, Laurenz; Pfirrmann, Christian W A; Farshad, Mazda
2018-05-08
The aim of this study was to assess feasibility and accuracy of augmented reality-guided lumbar facet joint injections. A spine phantom completely embedded in hardened opaque agar with 3 ring markers was built. A 3-dimensional model of the phantom was uploaded to an augmented reality headset (Microsoft HoloLens). Two radiologists independently performed 20 augmented reality-guided and 20 computed tomography (CT)-guided facet joint injections each: for each augmented reality-guided injection, the hologram was manually aligned with the phantom container using the ring markers. The radiologists targeted the virtual facet joint and tried to place the needle tip in the holographic joint space. Computed tomography was performed after each needle placement to document final needle tip position. Time needed from grabbing the needle to final needle placement was measured for each simulated injection. An independent radiologist rated images of all needle placements in a randomized order blinded to modality (augmented reality vs CT) and performer as perfect, acceptable, incorrect, or unsafe. Accuracy and time to place needles were compared between augmented reality-guided and CT-guided facet joint injections. In total, 39/40 (97.5%) of augmented reality-guided needle placements were either perfect or acceptable compared with 40/40 (100%) CT-guided needle placements (P = 0.5). One augmented reality-guided injection missed the facet joint space by 2 mm. No unsafe needle placements occurred. Time to final needle placement was substantially faster with augmented reality guidance (mean 14 ± 6 seconds vs 39 ± 15 seconds, P < 0.001 for both readers). Augmented reality-guided facet joint injections are feasible and accurate without potentially harmful needle placement in an experimental setting.
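Both reported comparisons can be reproduced from the summary statistics alone: an exact hypergeometric probability for the 39/40-versus-40/40 accuracy table, and a Welch t statistic (with a normal tail approximation) for the timing data. The choice of tests here is an assumption, since the abstract does not name them.

```python
from math import comb, sqrt
from statistics import NormalDist

def hypergeom_pmf(k, K, n, N):
    """P(k marked items in a draw of n from N items of which K are marked)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Accuracy: 1 miss among 80 placements total; one-sided p = probability
# that the single miss falls among the 40 AR-guided placements.
p_accuracy = hypergeom_pmf(1, 1, 40, 80)        # = 40/80 = 0.5

# Timing: Welch t from mean ± SD, assuming n = 40 placements per modality.
m_ar, s_ar, n_ar = 14.0, 6.0, 40                # AR guidance, seconds
m_ct, s_ct, n_ct = 39.0, 15.0, 40               # CT guidance, seconds
t = (m_ct - m_ar) / sqrt(s_ar**2 / n_ar + s_ct**2 / n_ct)
p_time = 2 * (1 - NormalDist().cdf(t))          # normal approximation, df large

print(round(p_accuracy, 3), round(t, 2), p_time < 0.001)  # → 0.5 9.79 True
```

Both values agree with the reported P = 0.5 for accuracy and P < 0.001 for time.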
Augmented Reality Learning Experiences: Survey of Prototype Design and Evaluation
ERIC Educational Resources Information Center
Santos, Marc Ericson C.; Chen, Angie; Taketomi, Takafumi; Yamamoto, Goshiro; Miyazaki, Jun; Kato, Hirokazu
2014-01-01
Augmented reality (AR) technology is mature for creating learning experiences for K-12 (pre-school, grade school, and high school) educational settings. We reviewed the applications intended to complement traditional curriculum materials for K-12. We found 87 research articles on augmented reality learning experiences (ARLEs) in the IEEE Xplore…
Feasibility of Virtual Reality Augmented Cycling for Health Promotion of People Post-Stroke
Deutsch, Judith E; Myslinski, Mary Jane; Kafri, Michal; Ranky, Richard; Sivak, Mark; Mavroidis, Constantinos; Lewis, Jeffrey A
2013-01-01
Background and Purpose A virtual reality (VR) augmented cycling kit (VRACK) was developed to address motor control and fitness deficits of individuals with chronic stroke. In this paper we report on the safety, feasibility and efficacy of using the VRACK to train cardio-respiratory (CR) fitness of individuals in the chronic phase poststroke. Methods Four individuals with chronic stroke (47–65 years old and three or more years post-stroke), with residual lower extremity impairments (Fugl Meyer 24–26/34) who were limited community ambulators (gait speed range 0.56 to 1.1 m/s) participated in this study. Safety was defined as the absence of adverse events. Feasibility was measured using attendance, total exercise time, and “involvement” measured with the Presence Questionnaire (PQ). Efficacy of CR fitness was evaluated using a sub-maximal bicycle ergometer test before and after an 8-week training program. Results The intervention was safe and feasible with participants having 1 adverse event, 100% adherence, achieving between 90 and 125 minutes of cycling each week and a mean PQ score of 39 (SD 3.3). There was a statistically significant 13% (p = 0.035) improvement in peak VO2 with a range of 6–24.5 %. Discussion and Conclusion For these individuals post-stroke, VR augmented cycling, using their heart rate to set their avatar’s speed, fostered training of sufficient duration and intensity to promote CR fitness. In addition, there was a transfer of training from the bicycle to walking endurance. VR augmented cycling may be an addition to the therapist’s tools for concurrent training of mobility and health promotion of individuals post-stroke. Video Abstract available (see Video, Supplemental Digital Content 1) for more insights from the authors. PMID:23863828
Augmented Reality Based Doppler Lidar Data Visualization: Promises and Challenges
NASA Astrophysics Data System (ADS)
Cherukuru, N. W.; Calhoun, R.
2016-06-01
Augmented reality (AR) is a technology that enables the user to view virtual content as if it existed in the real world. We are exploring the possibility of using this technology to view radial velocities or processed wind vectors from a Doppler wind lidar, thus giving the user the ability to see the wind in a literal sense. This approach could find applications in aviation safety and atmospheric data visualization, as well as in weather education and public outreach. As a proof of concept, we used lidar data from a recent field campaign and developed a smartphone application to view the lidar scan in augmented reality. In this paper, we give a brief methodology of this feasibility study and present the challenges and promises of using AR technology in conjunction with Doppler wind lidars.
Anesthesiology training using 3D imaging and virtual reality
NASA Astrophysics Data System (ADS)
Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.
1996-04-01
Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.
Virtual reality applications for motor rehabilitation after stroke.
Sisto, Sue Ann; Forrest, Gail F; Glendinning, Diana
2002-01-01
Hemiparesis is the primary physical impairment underlying functional disability after stroke. A goal of rehabilitation is to enhance motor skill acquisition, which is a direct result of practice. However, frequency and duration of practice are limited in rehabilitation. Virtual reality (VR) is a computer technology that simulates real-life learning while providing augmented feedback and increased frequency, duration, and intensity of practiced tasks. The rate and extent of relearning of motor tasks could affect the duration, effectiveness, and cost of patient care. The purpose of this article is to review the use of VR training for motor rehabilitation after stroke.
Virtual reality and pain management: current trends and future directions
Li, Angela; Montaño, Zorash; Chen, Vincent J; Gold, Jeffrey I
2011-01-01
SUMMARY Virtual reality (VR) has been used to manage pain and distress associated with a wide variety of known painful medical procedures. In clinical settings and experimental studies, participants immersed in VR experience reduced levels of pain, general distress/unpleasantness and report a desire to use VR again during painful medical procedures. Investigators hypothesize that VR acts as a nonpharmacologic form of analgesia by exerting an array of emotional affective, emotion-based cognitive and attentional processes on the body’s intricate pain modulation system. While the exact neurobiological mechanisms behind VR’s action remain unclear, investigations are currently underway to examine the complex interplay of cortical activity associated with immersive VR. Recently, new applications, including VR, have been developed to augment evidenced-based interventions, such as hypnosis and biofeedback, for the treatment of chronic pain. This article provides a comprehensive review of the literature, exploring clinical and experimental applications of VR for acute and chronic pain management, focusing specifically on current trends and recent developments. In addition, we propose mechanistic theories highlighting VR distraction and neurobiological explanations, and conclude with new directions in VR research, implications and clinical significance. PMID:21779307
Robotics technology developments in the United States space telerobotics program
NASA Technical Reports Server (NTRS)
Lavery, David
1994-01-01
In the same way that the launch of Yuri Gagarin in April 1961 announced the beginning of human space flight, last year's flight of the German ROTEX robot flight experiment is heralding the start of a new era of space robotics. After a gap of twelve years since the introduction of a new capability in space remote manipulation, ROTEX is the first of at least ten new robotic systems and experiments which will fly before the year 2000. As a result of redefining the development approach for space robotic systems, and capitalizing on opportunities associated with the assembly and maintenance of the space station, the space robotics community is preparing a whole new generation of operational robotic capabilities. Expanding on the capabilities of earlier manipulation systems such as the Viking and Surveyor soil scoops, the Russian Lunakhods, and the Shuttle Remote Manipulator System (RMS), these new space robots will augment astronaut on-orbit capabilities and extend virtual human presence to lunar and planetary surfaces.
Biocybrid systems and the re-engineering of life
NASA Astrophysics Data System (ADS)
Domingues, Diana; Ferreira da Rocha, Adson; Hamdan, Camila; Augusto, Leci; Miosso, Cristiano Jacques
2011-03-01
The re-engineering of life, expanded by perceptual experiences of presence in Virtual Reality and Augmented Reality, is the theme of our investigation into collaborative practices, confirming the artists' creativity, its closeness to the inventiveness of scientists, and their mutual capacity to generate biocybrid systems. We consider enactive bodily interfaces for human existence co-located in the continuum and symbiotic zone between body and flesh, cyberspace and data, and the hybrid properties of the physical world. That continuum generates a biocybrid zone (bio + cyber + hybrid) in which life is reinvented. Results reaffirm the creative reality of the coupled body and its mutual influences with environmental information, extending James Gibson's ecological perception theory. Ecosystem life, in its dynamic relations between humans, animals, plants, landscapes, urban life, and objects, raises questions and challenges for artworks and for the re-engineering of life discussed in our technoscience artworks. Finally, we describe an implementation in which the immersion experience is enhanced by data visualization of biological audio signals and by wearable miniaturized devices for biofeedback.
Deutsch, Judith E
2009-01-01
Improving walking for individuals with musculoskeletal and neuromuscular conditions is an important aspect of rehabilitation. The capabilities of clinicians who address these rehabilitation issues could be augmented with innovations such as virtual reality gaming based technologies. The chapter provides an overview of virtual reality gaming based technologies currently being developed and tested to improve motor and cognitive elements required for ambulation and mobility in different patient populations. Included as well is a detailed description of a single VR system, consisting of the rationale for development and iterative refinement of the system based on clinical science. These concepts include: neural plasticity, part-task training, whole task training, task specific training, principles of exercise and motor learning, sensorimotor integration, and visual spatial processing.
The Effectiveness of Virtual and Augmented Reality in Health Sciences and Medical Anatomy
ERIC Educational Resources Information Center
Moro, Christian; Štromberga, Zane; Raikos, Athanasios; Stirling, Allan
2017-01-01
Although cadavers constitute the gold standard for teaching anatomy to medical and health science students, there are substantial financial, ethical, and supervisory constraints on their use. In addition, although anatomy remains one of the fundamental areas of medical education, universities have decreased the hours allocated to teaching gross…
A Review on Making Things See: Augmented Reality for Futuristic Virtual Educator
ERIC Educational Resources Information Center
Iqbal, Javid; Sidhu, Manjit Singh
2017-01-01
In the past few years many choreographers have focused upon implementation of computer technology to enhance their artistic skills. Computer vision technology presents new methods for learning, instructing, developing, and assessing physical movements as well as provides scope to expand dance resources and rediscover the learning process. This…
The Role of Immersive Media in Online Education
ERIC Educational Resources Information Center
Bronack, Stephen C.
2011-01-01
An increasing number of educators are integrating immersive media into core course offerings. Virtual worlds, serious games, simulations, and augmented reality are enabling students and instructors to connect with content and with one another in novel ways. As a result, many are investigating the new affordances these media provide and the impact…
Best-Practice Model for Technology Enhanced Learning in the Creative Arts
ERIC Educational Resources Information Center
Power, Jess; Kannara, Vidya
2016-01-01
This paper presents a best-practice model for the redesign of virtual learning environments (VLEs) within creative arts to augment blended learning. In considering a blended learning best-practice model, three factors should be considered: the conscious and active human intervention, good learning design and pedagogical input, and the sensitive…
The development of augmented video system on postcards
NASA Astrophysics Data System (ADS)
Chen, Chien-Hsu; Chou, Yin-Ju
2013-03-01
This study focuses on the development of an augmented video system for traditional picture postcards. The system lets users print an augmented reality marker on a sticker, attach it to a picture postcard, and record real-time images and video to be augmented onto that marker. Through this dynamic imagery, users can share their travel moods, greetings, and travel experiences with friends. Without changing the traditional picture postcard, we develop an augmented video system on top of it using augmented reality (AR) technology. It not only keeps the functions of the traditional picture postcard, but also enhances the user's experience, preserving memories and emotional expression through the digital media information augmented onto it.
Interactive Molecular Graphics for Augmented Reality Using HoloLens.
Müller, Christoph; Krone, Michael; Huber, Markus; Biener, Verena; Herr, Dominik; Koch, Steffen; Reina, Guido; Weiskopf, Daniel; Ertl, Thomas
2018-06-13
Immersive technologies like stereo rendering, virtual reality, or augmented reality (AR) are often used in the field of molecular visualisation. Modern, comparably lightweight and affordable AR headsets like Microsoft's HoloLens open up new possibilities for immersive analytics in molecular visualisation. A crucial factor for a comprehensive analysis of molecular data in AR is the rendering speed. HoloLens, however, has limited hardware capabilities due to requirements like battery life, fanless cooling and weight. Consequently, insights from best practises for powerful desktop hardware may not be transferable. Therefore, we evaluate the capabilities of the HoloLens hardware for modern, GPU-enabled, high-quality rendering methods for the space-filling model commonly used in molecular visualisation. We also assess the scalability for large molecular data sets. Based on the results, we discuss ideas and possibilities for immersive molecular analytics. Besides more obvious benefits like the stereoscopic rendering offered by the device, this specifically includes natural user interfaces that use physical navigation instead of the traditional virtual one. Furthermore, we consider different scenarios for such an immersive system, ranging from educational use to collaborative scenarios.
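The scaling pressure the authors describe can be illustrated with a back-of-the-envelope triangle count: tessellated sphere meshes grow multiplicatively with refinement, while GPU ray-cast sphere impostors (the usual high-quality space-filling technique on desktop hardware) need only a small billboard per atom. The figures below are illustrative, not measurements from the paper.

```python
def mesh_triangles(n_atoms, subdivisions):
    """Triangles if each atom is an icosphere: 20 * 4**s faces per sphere."""
    return n_atoms * 20 * 4 ** subdivisions

def impostor_triangles(n_atoms):
    """Triangles if each atom is a two-triangle camera-facing billboard,
    shaded as a perfect sphere by per-pixel ray casting."""
    return n_atoms * 2

n = 100_000  # a mid-sized molecular data set (assumed size)
print(mesh_triangles(n, 3))   # → 128000000
print(impostor_triangles(n))  # → 200000
```

The three-orders-of-magnitude gap is why impostor-style techniques are attractive on a battery- and thermally-constrained device like HoloLens, provided its GPU supports the required per-pixel work.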
Designing and researching of the virtual display system based on the prism elements
NASA Astrophysics Data System (ADS)
Vasilev, V. N.; Grimm, V. A.; Romanova, G. E.; Smirnov, S. A.; Bakholdin, A. V.; Grishina, N. Y.
2014-05-01
Problems in designing virtual display systems for augmented reality placed near the observer's eye (so-called head-worn displays) with light-guide prismatic elements are considered. An augmented reality system is a complex consisting of an image generator (most often a microdisplay, with an illumination system if the display is not self-luminous), an objective that forms the display image at practically infinite distance, and a combiner that splits the light so that an observer can see the information on the microdisplay and the surrounding environment as background at the same time. This work deals with a system whose combiner is based on a composite structure of prism elements. Three cases of prism combiner design are considered, and the results of modeling with optical design software are presented. In the model, the question of a large pupil zone was analyzed, and the discontinuous character (mosaic structure) of the angular field in transmitting information from the microdisplay to the observer's eye through the prismatic structure is discussed.
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Banfi, F.; Brumana, R.; Oreni, D.; Previtali, M.; Roncoroni, F.
2015-08-01
This paper describes a procedure for the generation of a detailed HBIM which is then turned into a model for mobile apps based on augmented and virtual reality. Starting from laser point clouds, photogrammetric data, and additional information, a geometric reconstruction with a high level of detail can be carried out while meeting the basic requirements of BIM projects (parametric modelling, object relations, attributes). The work aims to demonstrate that a complex HBIM can be managed on portable devices to extract useful information, not only for expert operators but also for a wider user community interested in cultural tourism.
Virtual Reality Cue Refusal Video Game for Alcohol and Cigarette Recovery Support: Summative Study.
Metcalf, Mary; Rossie, Karen; Stokes, Katie; Tallman, Christina; Tanner, Bradley
2018-04-16
New technologies such as virtual reality, augmented reality, and video games hold promise to support and enhance individuals in addiction treatment and recovery. Quitting or decreasing cigarette or alcohol use can lead to significant health improvements for individuals, decreasing heart disease risk and cancer risks (for both nicotine and alcohol use), among others. However, remaining in recovery from use is a significant challenge for most individuals. We developed and assessed the Take Control game, a partially immersive Kinect for Windows platform game that allows users to counter substance cues through active movements (hitting, kicking, etc). Formative analysis during phase I and phase II guided development. We conducted a small wait-list control trial using a quasi-random sampling technique (systematic) with 61 participants in recovery from addiction to alcohol or tobacco. Participants used the game 3 times and reported on substance use, cravings, satisfaction with the game experience, self-efficacy related to recovery, and side effects from exposure to a virtual reality intervention and substance cues. Participants found the game engaging and fun and felt playing the game would support recovery efforts. On average, reported substance use decreased for participants during the intervention period. Participants in recovery for alcohol use saw more benefit than those in recovery for tobacco use, with a statistically significant increase in self-efficacy, attitude, and behavior during the intervention. Side effects from the use of a virtual reality intervention were minor and decreased over time; cravings and side effects also decreased during the study. The preliminary results suggest the intervention holds promise as an adjunct to standard treatment for those in recovery, particularly from alcohol use. ©Mary Metcalf, Karen Rossie, Katie Stokes, Christina Tallman, Bradley Tanner. Originally published in JMIR Serious Games (http://games.jmir.org), 16.04.2018.
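A systematic (quasi-random) allocation of the kind described assigns participants by enrolment order at a fixed interval. A minimal sketch follows; the interval k = 2 is an illustrative assumption, as the abstract does not state it.

```python
def systematic_assign(n_participants, k=2):
    """Quasi-random (systematic) allocation: every k-th enrolee, counted
    in enrolment order, joins the wait-list control; the rest start the
    intervention immediately."""
    return ["control" if i % k == 0 else "intervention"
            for i in range(1, n_participants + 1)]

groups = systematic_assign(61)  # the study enrolled 61 participants
print(groups.count("intervention"), groups.count("control"))  # → 31 30
```

Unlike true randomisation, the allocation sequence is predictable, which is why such designs are described as quasi-random.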
3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices
NASA Astrophysics Data System (ADS)
Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.
2015-08-01
Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research aims to valorise and make more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario concerning the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus, and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were enhanced with responsive points of interest at important symbols or features of each artefact. This allows highlighting single parts of an artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process of optimizing the 3D models, the implementation of the interactive scenario, and the results of tests carried out in the lab.
Mobile viewer system for virtual 3D space using infrared LED point markers and camera
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-09-01
The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. 3D display systems are a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is a display system that works as follows: when users see the real world through the mobile viewer, the system presents virtual 3D images floating in the air, and observers can touch and interact with these floating images, for example the way children play with modelling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the method of measuring the mobile viewer's position using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest such method in that it uses a single camera rather than a stereo camera, and report the results of our viewer system.
E-Learning Application of Tarsier with Virtual Reality using Android Platform
NASA Astrophysics Data System (ADS)
Oroh, H. N.; Munir, R.; Paseru, D.
2017-01-01
The spectral tarsier is a primitive primate that can only be found in the province of North Sulawesi. An existing e-learning application for studying this primate uses Augmented Reality technology, in which a marker presented to a computer's camera lets users interact with a three-dimensional tarsier object. However, that application shows the tarsier object in three dimensions without its habitat, and it requires substantial resources because it runs on a personal computer. Virtual Reality can likewise display three-dimensional objects, with the added benefit of immersing the user in a virtual world, and it can run on the Android platform, which requires fewer resources. We therefore applied Virtual Reality technology on the Android platform so that users can view and interact not only with the tarsier but also with its habitat. The results of this research indicate that users can learn about the tarsier and its habitat effectively. Thus, the use of Virtual Reality technology in this e-learning application can help people see, know, and learn about the spectral tarsier.
Stereoscopic augmented reality with pseudo-realistic global illumination effects
NASA Astrophysics Data System (ADS)
de Sorbier, Francois; Saito, Hideo
2014-03-01
Recently, augmented reality has become very popular and has entered our daily life through gaming, guiding systems and mobile phone applications. However, inserting objects in such a way that their appearance seems natural is still an issue, especially in an unknown environment. This paper presents a framework that demonstrates the capabilities of the Kinect for convincing augmented reality in an unknown environment. Rather than pre-computing a reconstruction of the scene, as most previous methods propose, we capture the scene dynamically, which allows adapting to live changes in the environment. Our approach, based on the update of an environment map, can also detect the positions of the light sources. Combining information from the environment map, the light sources and the camera tracking, we can display virtual objects on stereoscopic devices with global illumination effects such as diffuse and mirror reflections, refractions and shadows in real time.
Augmented reality social story for autism spectrum disorder
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Arisandi, D.; Lumbanbatu, A. F.; Kemit, L. F.; Nababan, E. B.; Sheta, O.
2018-03-01
Augmented Reality is a technique that can bring social story therapy into the virtual world to increase the intrinsic motivation of children with Autism Spectrum Disorder (ASD). Children with ASD have difficulty maintaining focus, and deficits in sensory and motor processing make it hard for them to use their hands or other organs to carry out activities correctly, and to interpret and understand social situations so as to respond appropriately. A method is therefore required to apply social stories in therapy for children with ASD, implemented with Augmented Reality. The output of this method is a 3D (three-dimensional) animation of a social story, triggered by detecting markers located in a special book, together with several simple games played using a Leap Motion controller, which reads hand movements in real time.
ChemPreview: an augmented reality-based molecular interface.
Zheng, Min; Waller, Mark P
2017-05-01
Human-computer interfaces make computational science more comprehensible and impactful. Complex 3D structures such as proteins or DNA are magnified by digital representations and displayed on two-dimensional monitors. Augmented reality has recently opened another door to the virtual three-dimensional world. Herein, we present an augmented reality application called ChemPreview with the potential to manipulate bio-molecular structures at an atomistic level. ChemPreview is available at https://github.com/wallerlab/chem-preview/releases, and is built on top of the Meta 1 platform https://www.metavision.com/. ChemPreview can be used to interact with a protein in an intuitive way using natural hand gestures, making it appealing to computational chemists and structural biologists. The ability to manipulate atoms in the real world could eventually provide new and more efficient ways of extracting structural knowledge, or of designing new molecules in silico. Copyright © 2017 Elsevier Inc. All rights reserved.
Learning anatomy via mobile augmented reality: Effects on achievement and cognitive load.
Küçük, Sevda; Kapakin, Samet; Göktaş, Yüksel
2016-10-01
Augmented reality (AR), a new generation of technology, has attracted the attention of educators in recent years. In this study, a MagicBook was developed for a neuroanatomy topic by using mobile augmented reality (mAR) technology. This technology integrates virtual learning objects into the real world and allows users to interact with the environment using mobile devices. The purpose of this study was to determine the effects of learning anatomy via mAR on medical students' academic achievement and cognitive load. A mixed method was applied in the study. The random sample consisted of 70 second-year undergraduate medical students: 34 in an experimental group and 36 in a control group. An academic achievement test and a cognitive load scale were used as data collection tools. A one-way MANOVA test was used for analysis. The experimental group, which used mAR applications, reported higher achievement and lower cognitive load. The use of mAR applications in anatomy education contributed to the formation of an effective and productive learning environment. Student cognitive load decreased as abstract information became concrete in printed books via multimedia materials in mAR applications. Additionally, students were able to access the materials in the MagicBook anytime and anywhere they wanted. The mobile learning approach helped students learn better while exerting less cognitive effort. Moreover, the sensory experience and real-time interaction with the environment may provide learning satisfaction and enable students to structure their knowledge to complete the learning tasks. Anat Sci Educ 9: 411-421. © 2016 American Association of Anatomists.
Multi-object detection and tracking technology based on hexagonal opto-electronic detector
NASA Astrophysics Data System (ADS)
Song, Yong; Hao, Qun; Li, Xiang
2008-02-01
A novel multi-object detection and tracking technology based on a hexagonal opto-electronic detector is proposed, in which (1) a new hexagonal detector, composed of 6 linear CCDs, has been developed to achieve a 360-degree field of view, and (2) to detect and track multiple objects at high speed, the object recognition criteria of the Object Signal Width Criterion (OSWC) and the Horizontal Scale Ratio Criterion (HSRC) are proposed. In this paper, simulated experiments have been carried out to verify the validity of the proposed technology. They show that multi-object detection and tracking can be achieved at high speed using the proposed hexagonal detector and the OSWC and HSRC criteria, indicating that the technology offers significant advantages in photo-electric detection, computer vision, virtual reality, augmented reality, etc.
Urban History in 4 Dimensions - Supporting Research and Education
NASA Astrophysics Data System (ADS)
Münster, S.; Friedrichs, K.; Kröber, C.; Bruschke, J.; Henze, F.; Maiwald, F.; Niebling, F.
2017-08-01
The new research group on the four-dimensional research and communication of urban history (Urban History 4D) aims to investigate and develop methods and technologies to access extensive repositories of historical media and their contextual information in a spatial model, with an additional temporal component. This will make content accessible to different target groups, researchers and the public, via a 4D browser. A location-dependent augmented-reality representation can be used as an information base, research tool, and means of communicating historical knowledge. The data resources for this research include extensive holdings of historical photographs of Dresden, which have documented the city over the decades, and digitized map collections from the Deutsche Fotothek (German photographic collection) platform. These will lay the foundation for a prototype model which will give users a virtual experience of historic parts of Dresden.
ERIC Educational Resources Information Center
Engelke, Christopher Robert
2013-01-01
Technically Speaking: On the Structure and Experience of Interaction Involving Augmentative Alternative Communications examines the ways that communication is structured and experienced by looking at interactions involving augmented communicators--people with severe speech disabilities who use forms of assistive technology in order to communicate…
Visual error augmentation enhances learning in three dimensions.
Sharp, Ian; Huang, Felix; Patton, James
2011-09-02
Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal: rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm maximum perpendicular trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions and smaller errors for this group. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning than regular training. Lastly, upon removal of the flip, all subjects returned to baseline rapidly, within 6 trials.
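The reversal transformation described above, a 180-degree rotation about an arbitrary axis, negates the two coordinates perpendicular to that axis while preserving the coordinate along it. This can be sketched with Rodrigues' rotation formula; the code below is an illustrative sketch, not the study's actual rendering code:

```python
import math

def rotate_about_axis(v, axis, theta):
    """Rodrigues' formula: rotate vector v by angle theta about an axis."""
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    kx, ky, kz = ax / n, ay / n, az / n          # normalize the axis
    c, s = math.cos(theta), math.sin(theta)
    dot = kx * v[0] + ky * v[1] + kz * v[2]       # k . v
    cross = (ky * v[2] - kz * v[1],               # k x v
             kz * v[0] - kx * v[2],
             kx * v[1] - ky * v[0])
    return tuple(v[i] * c + cross[i] * s + (kx, ky, kz)[i] * dot * (1 - c)
                 for i in range(3))

# A half-turn (theta = pi) about the z-axis negates the two coordinates
# perpendicular to the axis and preserves the one along it:
print(rotate_about_axis((1.0, 2.0, 3.0), (0.0, 0.0, 1.0), math.pi))
# -> approximately (-1.0, -2.0, 3.0)
```

With the axis chosen arbitrarily, as in the study, the same formula yields a "complete reversal" in the plane perpendicular to that axis.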
Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments.
Rincon, J A; Poza-Lujan, Jose-Luis; Julian, V; Posadas-Yagüe, Juan-Luis; Carrascosa, C
2016-01-01
This paper presents an extension of a meta-model (MAM5) and of a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a coupled real and virtual world, so that the virtual world not only reflects the real one but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real-environment interactive elements (smart resource artifacts) in agent-based simulations makes it possible to minimize the error between the simulated and the real system.
Shema-Shiratzky, Shirley; Brozgol, Marina; Cornejo-Thumm, Pablo; Geva-Dayan, Karen; Rotstein, Michael; Leitner, Yael; Hausdorff, Jeffrey M; Mirelman, Anat
2018-05-17
To examine the feasibility and efficacy of combined motor-cognitive training using virtual reality to enhance behavior, cognitive function and dual-tasking in children with Attention-Deficit/Hyperactivity Disorder (ADHD). Fourteen non-medicated school-aged children with ADHD received 18 training sessions over 6 weeks. Training included walking on a treadmill while negotiating virtual obstacles. Behavioral symptoms, cognition and gait were tested before and after the training and at 6-week follow-up. Based on parental report, there was a significant improvement in children's social problems and psychosomatic behavior after the training. Executive function and memory improved post-training while attention was unchanged. Gait regularity significantly increased during dual-task walking. Long-term training effects were maintained in memory and executive function. Treadmill training augmented with virtual reality is feasible and may be an effective treatment to enhance behavior, cognitive function and dual-tasking in children with ADHD.
Virtual reality and robotics for stroke rehabilitation: where do we go from here?
Wade, Eric; Winstein, Carolee J
2011-01-01
Promoting functional recovery after stroke requires collaborative and innovative approaches to neurorehabilitation research. Task-oriented training (TOT) approaches that include challenging, adaptable, and meaningful activities have led to successful outcomes in several large-scale multisite definitive trials. This, along with recent technological advances of virtual reality and robotics, provides a fertile environment for furthering clinical research in neurorehabilitation. Both virtual reality and robotics make use of multimodal sensory interfaces to affect human behavior. In the therapeutic setting, these systems can be used to quantitatively monitor, manipulate, and augment the users' interaction with their environment, with the goal of promoting functional recovery. This article describes recent advances in virtual reality and robotics and the synergy with best clinical practice. Additionally, we describe the promise shown for automated assessments and in-home activity-based interventions. Finally, we propose a broader approach to ensuring that technology-based assessment and intervention complement evidence-based practice and maintain a patient-centered perspective.
Sarver, Nina Wong; Beidel, Deborah C; Spitalnick, Josh S
2014-01-01
Two significant challenges for the dissemination of social skills training programs are the need to assure generalizability and provide sufficient practice opportunities. In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This study evaluated the utility of an interactive virtual school environment for the treatment of social anxiety disorder in preadolescent children. Eleven children with a primary diagnosis of social anxiety disorder, between 8 and 12 years old, participated in this initial feasibility trial. All children were treated with Social Effectiveness Therapy for Children, an empirically supported treatment for children with social anxiety disorder. However, the in vivo peer generalization sessions and standard parent-assisted homework assignments were substituted by practice in a virtual environment. Overall, the virtual environment programs were acceptable, feasible, and credible treatment components. Both children and clinicians were satisfied with using the virtual environment technology, and children believed it was a high-quality program overall. In addition, parents were satisfied with the virtual-environment-augmented treatment and indicated that they would recommend the program to family and friends. Findings indicate that the virtual environments are viewed as acceptable and credible by potential recipients. Furthermore, they are easy to implement by even novice users and appear to be useful adjunctive elements for the treatment of childhood social anxiety disorder.
Vision-based overlay of a virtual object into real scene for designing room interior
NASA Astrophysics Data System (ADS)
Harasaki, Shunsuke; Saito, Hideo
2001-10-01
In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. The interior simulator is developed as an example AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in a living room, so that they can easily design the room's interior without placing real furniture and articles, viewing the scene from many different locations and orientations in real time. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame for the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera while tracking non-metric feature points for overlaying a virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).
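As an illustration of the projective overlay idea, once a 3x3 homography relating a reference plane to the current image is known, a virtual point is placed by a projective transform followed by a perspective divide. This is a minimal sketch with a made-up translation-only homography, not the authors' implementation:

```python
def apply_homography(H, x, y):
    """Map a 2D point through a 3x3 homography (row-major nested lists)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w   # perspective divide

# Hypothetical identity-plus-translation homography:
# shifts overlay points by (10, 20) pixels.
H = [[1.0, 0.0, 10.0],
     [0.0, 1.0, 20.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 5.0, 5.0))   # -> (15.0, 25.0)
```

A general homography (with a non-trivial last row) would additionally model the perspective distortion between views, which is why the divide by `w` is needed.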
Wong, Nina; Beidel, Deborah C.; Spitalnick, Josh
2013-01-01
Objective Two significant challenges for the dissemination of social skills training programs are the need to assure generalizability and provide sufficient practice opportunities. In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This study evaluated the utility of an interactive virtual school environment for the treatment of social anxiety disorder in preadolescent children. Method Eleven children with a primary diagnosis of social anxiety disorder, between 8 and 12 years old, participated in this initial feasibility trial. All children were treated with Social Effectiveness Therapy for Children, an empirically supported treatment for children with social anxiety disorder. However, the in vivo peer generalization sessions and standard parent-assisted homework assignments were substituted by practice in a virtual environment. Results Overall, the virtual environment programs were acceptable, feasible, and credible treatment components. Both children and clinicians were satisfied with using the virtual environment technology, and children believed it was a high-quality program overall. Additionally, parents were satisfied with the virtual-environment-augmented treatment and indicated that they would recommend the program to family and friends. Conclusion Virtual environments are viewed as acceptable and credible by potential recipients. Furthermore, they are easy to implement by even novice users and appear to be useful adjunctive elements for the treatment of childhood social anxiety disorder. PMID:24144182
ERIC Educational Resources Information Center
Gilbert, Kristen A.
2017-01-01
Aim/Purpose: Improving public schools is a focus of federal legislation in the United States with much of the burden placed on principals. However, preparing principals for this task has proven elusive despite many changes in programming by institutions of higher learning. Emerging technologies that rely on augmented and virtual realities are…
The Game "Pokemon Go" as a Crosscultural Phenomenon
ERIC Educational Resources Information Center
Koroleva, D. O.; Kochervey, A. I.; Nasonova, K. M.; Shibeko, Yu. V.
2016-01-01
The gaming culture of modern childhood takes place not only in real space, but in a virtual one too. These two components (two planes of development) are often viewed in isolation from one another. This study investigates the completely new phenomenon of augmented reality gaming through the lens of "Pokemon Go," and it describes how the…
Learning Protein Structure with Peers in an AR-Enhanced Learning Environment
ERIC Educational Resources Information Center
Chen, Yu-Chien
2013-01-01
Augmented reality (AR) is an interactive system that allows users to interact with virtual objects and the real world at the same time. The purpose of this dissertation was to explore how AR, as a new visualization tool, that can demonstrate spatial relationships by representing three dimensional objects and animations, facilitates students to…
NASA Astrophysics Data System (ADS)
Tang, Qiang; Chen, Yan; Gale, Alastair G.
2017-03-01
Appropriate feedback plays an important role in optimising mammographic interpretation training whilst also ensuring good interpretation performance. The traditional keyboard, mouse and workstation approach has a critical limitation in providing supplementary image-related information and complex feedback in real time. Augmented Reality (AR) offers a potentially superior approach in this situation, as feedback can be provided directly overlaying the displayed mammographic images, making the approach generic and vendor neutral. In this study, radiological feedback was dynamically remapped virtually into the real world, using perspective transformation, in order to provide a richer user experience in mammographic interpretation training. This is an initial attempt at an AR approach that dynamically superimposes pre-defined feedback information from a DICOM image on top of a radiologist's view whilst the radiologist is examining images on a clinical workstation. The study demonstrates the feasibility of the approach, although there are limitations on interactive operations due to the hardware used. The results of this fully functional approach provide appropriate feedback/image correspondence in a simulated mammographic interpretation environment. Thus, it is argued that employing AR is a feasible way to provide rich feedback in the delivery of mammographic interpretation training.
The development of a virtual camera system for astronaut-rover planetary exploration.
Platt, Donald W; Boy, Guy A
2012-01-01
A virtual assistant is being developed for use by astronauts as they use rovers to explore the surfaces of other planets. This interactive database, called the Virtual Camera (VC), gives the user better situational awareness during exploration. It can be used for training, data analysis and augmentation of actual surface exploration. This paper describes the development efforts and the human-computer interaction considerations for implementing a first-generation VC on a tablet mobile computing device. Scenarios for use will be presented. Evaluation and success criteria, such as efficiency (in terms of processing time and precision), situational awareness, learnability, usability, and robustness, will also be presented. Initial testing and the impact of HCI design considerations on manipulation and on improvement in situational awareness using a prototype VC will be discussed.
Virtual reality for health care: a survey.
Moline, J
1997-01-01
This report surveys the state of the art in applications of virtual environments and related technologies for health care. Applications of these technologies are being developed for health care in the following areas: surgical procedures (remote surgery or telepresence, augmented or enhanced surgery, and planning and simulation of procedures before surgery); medical therapy; preventive medicine and patient education; medical education and training; visualization of massive medical databases; skill enhancement and rehabilitation; and architectural design for health-care facilities. To date, such applications have improved the quality of health care, and in the future they will result in substantial cost savings. Tools that respond to the needs of present virtual environment systems are being refined or developed. However, additional large-scale research is necessary in the following areas: user studies, use of robots for telepresence procedures, enhanced system reality, and improved system functionality.
Analysing neutron scattering data using McStas virtual experiments
NASA Astrophysics Data System (ADS)
Udby, L.; Willendrup, P. K.; Knudsen, E.; Niedermayer, Ch.; Filges, U.; Christensen, N. B.; Farhi, E.; Wells, B. O.; Lefmann, K.
2011-04-01
With the intention of developing a new data analysis method using virtual experiments we have built a detailed virtual model of the cold triple-axis spectrometer RITA-II at PSI, Switzerland, using the McStas neutron ray-tracing package. The parameters characterising the virtual instrument were carefully tuned against real experiments. In the present paper we show that virtual experiments reproduce experimentally observed linewidths within 1-3% for a variety of samples. Furthermore we show that the detailed knowledge of the instrumental resolution found from virtual experiments, including sample mosaicity, can be used for quantitative estimates of linewidth broadening resulting from, e.g., finite domain sizes in single-crystal samples.
NASA Astrophysics Data System (ADS)
Sullivan, Sarah; Gnesdilow, Dana; Puntambekar, Sadhana; Kim, Jee-Seon
2017-08-01
Physical and virtual experimentation are thought to have different affordances for supporting students' learning. Research investigating the use of physical and virtual experiments to support students' learning has identified a variety of, sometimes conflicting, outcomes. Unanswered questions remain about how physical and virtual experiments may impact students' learning and for which contexts and content areas they may be most effective. Using a quasi-experimental design, we examined eighth grade students' (N = 100) learning of physics concepts related to pulleys depending on the sequence of physical and virtual labs they engaged in. Five classes of students were assigned to either the physical-first (PF) condition (n = 55), in which students performed a physical pulley experiment and then performed the same experiment virtually, or the virtual-first (VF) condition (n = 45), with the opposite sequence. Repeated measures ANOVAs were conducted to examine how physical and virtual labs impacted students' learning of specific physics concepts. While we did not find clear-cut support that one sequence was better, we did find evidence that participating in virtual experiments may be more beneficial for learning certain physics concepts, such as work and mechanical advantage. Our findings support the idea that if time or physical materials are limited, using virtual experiments may help students understand work and mechanical advantage.
Holographic and light-field imaging for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Hong, Jong-Young; Jang, Changwon; Jeong, Jinsoo; Lee, Chang-Kun
2017-02-01
We discuss the recent state of augmented reality (AR) display technology. In order to realize AR, various see-through three-dimensional (3D) display techniques have been reported. We describe AR displays with 3D functionality, namely light-field displays and holography. See-through light-field displays can be categorized by the optical elements used to achieve the see-through property: elements controlling the path of the light field and those generating a see-through light field. Holographic displays can also be good candidates for AR displays because they can reconstruct wavefront information and provide realistic virtual information. We introduce see-through holographic displays using various optical techniques.
Wearable computer for mobile augmented-reality-based controlling of an intelligent robot
NASA Astrophysics Data System (ADS)
Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino
2000-10-01
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or in conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans is needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects with the user's perception of the real world. As computer technology evolves, it is possible to build very small devices with sufficient capabilities for augmented reality applications. We have evaluated existing wearable computers and mobile augmented reality systems in order to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
Speech-Enabled Tools for Augmented Interaction in E-Learning Applications
ERIC Educational Resources Information Center
Selouani, Sid-Ahmed A.; Lê, Tang-Hô; Benahmed, Yacine; O'Shaughnessy, Douglas
2008-01-01
This article presents systems that use speech technology to emulate the one-on-one interaction a student can get from a virtual instructor. A web-based learning tool, the Learn IN Context (LINC+) system, designed and used in a real mixed-mode learning context for a computer (C++ language) programming course taught at the Université de Moncton…
Augmented Reality, Virtual Reality and Their Effect on Learning Style in the Creative Design Process
ERIC Educational Resources Information Center
Chandrasekera, Tilanka; Yoon, So-Yeon
2018-01-01
Research has shown that user characteristics such as preference for using an interface can result in effective use of the interface. Research has also suggested that there is a relationship between learner preference and creativity. This study uses the VARK learning styles inventory to assess students' learning styles and then explores how this learning…
ERIC Educational Resources Information Center
Schmidt, Matthew; Galyen, Krista; Laffey, James; Babiuch, Ryan; Schmidt, Carla
2014-01-01
Design-based research (DBR) and open source software are both acknowledged as potentially productive ways for advancing learning technologies. These approaches have practical benefits for the design and development process and for building and leveraging community to augment and sustain design and development. This report presents a case study of…
Camera pose estimation for augmented reality in a small indoor dynamic scene
NASA Astrophysics Data System (ADS)
Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad
2017-09-01
Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six degrees of freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows virtual objects to be rendered in a meaningful way, and it improves the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We also exploit the rigidity of the 3-D planes in the tracking process to enhance the system's robustness in dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
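The planar constraint described above can be illustrated with a minimal sketch: a signed point-to-plane residual that a plane-constrained SLAM back-end might add to its cost function for every map point assigned to a plane. The function name and the plane representation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def point_to_plane_residual(point, plane):
    """Signed distance from a 3-D point to a plane.

    `plane` is (n, d) with unit normal n, so points on the plane
    satisfy n . x + d = 0.  In a piecewise-planar SLAM back-end,
    a residual of this form can be added to the optimization cost
    for each map point assigned to a plane, pulling the point onto
    its supporting plane during bundle adjustment.
    """
    n, d = plane
    return float(np.dot(n, point) + d)

# Hypothetical example: a horizontal floor plane z = 0.
floor = (np.array([0.0, 0.0, 1.0]), 0.0)
p = np.array([1.0, 2.0, 0.05])           # noisy map point near the floor
r = point_to_plane_residual(p, floor)    # residual the optimizer drives to 0 (here 0.05)
```

In a full system this residual would be weighted and summed with the usual reprojection errors; the sketch only shows the geometric term.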
Augmented reality: past, present, future
NASA Astrophysics Data System (ADS)
Inzerillo, Laura
2013-03-01
The realization of a museum has provided a great opportunity to carry out cultural, historical, architectural, and social research of broad international interest. The museum's main theme is the visit and discovery of a monument of great prestige: the monumental building of the "Steri" in Palermo. The museum is divided into sub-themes, one of which has aroused such international interest that an application has been submitted to include the museum in the UNESCO cultural heritage list: a museum path through the cells of the Inquisition, located inside some of the buildings of the monumental complex. The project as a whole draws on the various competences involved: history, chemistry, architecture, topography, drawing, representation, virtual communication, and informatics; the museum is the sum of the results of all these disciplines. This research deals with methodology, implementation, fruition, the virtual museum, its goals, 2D graphic restitution, effects on cultural heritage and the landscape environment, augmented reality, 2D and 3D surveying, hi-touch screens, photogrammetric survey, photographic survey, representation, 3D drawing, and more.
Design and evaluation of an augmented reality simulator using leap motion.
Wright, Trinette; de Ribaupierre, Sandrine; Eagleson, Roy
2017-10-01
Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training for this procedure. The authors have designed an affordable and easily accessible ETV simulator, and compare it with the existing NeuroTouch for usability and training effectiveness. The simulator was developed using Unity, Vuforia, and the Leap Motion (LM) for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks. Participants used the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for these tasks. The time to complete each task was recorded, and the trajectory log files were used to calculate performance. The resulting data on the novices' and experts' speed and accuracy are compared, and the objective training performance of each system is discussed in terms of targeting speed and accuracy.
NASA Johnson Space Center Life Sciences Data System
NASA Technical Reports Server (NTRS)
Rahman, Hasan; Cardenas, Jeffery
1994-01-01
The Life Sciences Project Division (LSPD) at JSC, which manages human life sciences flight experiments for the NASA Life Sciences Division, augmented its Life Sciences Data System (LSDS) in support of the Spacelab Life Sciences-2 (SLS-2) mission, October 1993. The LSDS is a portable ground system supporting Shuttle, Spacelab, and Mir based life sciences experiments. The LSDS supports acquisition, processing, display, and storage of real-time experiment telemetry in a workstation environment. The system may acquire digital or analog data, storing the data in experiment packet format. Data packets from any acquisition source are archived and meta-parameters are derived through the application of mathematical and logical operators. Parameters may be displayed in text and/or graphical form, or output to analog devices. Experiment data packets may be retransmitted through the network interface and database applications may be developed to support virtually any data packet format. The user interface provides menu- and icon-driven program control and the LSDS system can be integrated with other workstations to perform a variety of functions. The generic capabilities, adaptability, and ease of use make the LSDS a cost-effective solution to many experiment data processing requirements. The same system is used for experiment systems functional and integration tests, flight crew training sessions and mission simulations. In addition, the system has provided the infrastructure for the development of the JSC Life Sciences Data Archive System scheduled for completion in December 1994.
Virtual experiments in electronics: beyond logistics, budgets, and the art of the possible
NASA Astrophysics Data System (ADS)
Chapman, Brian
1999-09-01
It is common and correct to suppose that computers support flexible delivery of educational resources by offering virtual experiments that replicate and substitute for experiments traditionally offered in conventional teaching laboratories. However, traditional methods are limited by logistics, costs, and what is physically possible to accomplish on a laboratory bench. Virtual experiments allow experimental approaches to teaching and learning to transcend these limits. This paper analyses recent and current developments in educational software for 1st-year physics, 2nd-year electronics engineering and 3rd-year communication engineering, based on three criteria: (1) Is the virtual experiment possible in a real laboratory? (2) How direct is the link between the experimental manipulation and the reinforcement of theoretical learning? (3) What impact might the virtual experiment have on the learner's acquisition of practical measurement skills? Virtual experiments allow more flexibility in the directness of the link between experimental manipulation and the theoretical message. However, increasing the directness of this link may reduce or even abolish the measurement processes associated with traditional experiments. Virtual experiments thus pose educational challenges: (a) expanding the design of experimentally based curricula beyond traditional boundaries and (b) ensuring that the learner acquires sufficient experience in making practical measurements.
NASA Astrophysics Data System (ADS)
Yang, W. B.; Yen, Y. N.; Cheng, H. M.
2015-08-01
The integration of heritage preservation and digital technology is an important international trend in the 21st century. Digital technology not only records and preserves detailed documents and information about heritage, but also adds value to them. In this study, 3D laser scanning is used to build a digital archive of the interior and exterior of the building, integrating 3D scanner technology, mobile scanning collaboration, and multi-system reverse modeling. The 3D model, built at real scale by combining multimedia presentations with reverse modeling, drives a virtual reality (VR) simulation, while interactive teaching and augmented reality presentations allow traditional architectural information to be continuously updated. Through these upgraded, value-added digital technologies, the value of the cultural asset can be experienced in 3D virtual reality, moving information presentation from traditional reading toward user-operated, sensory experience, and opening further possibilities for cultural asset preservation and diversified learning.
Lin, Yen-Kun; Yau, Hong-Tzong; Wang, I-Chung; Zheng, Cheng; Chung, Kwok-Hung
2015-06-01
The stereoscopic visualization concept combined with head-mounted displays may increase the accuracy of computer-aided implant surgery. The aim of this study was to develop an augmented reality-based dental implant placement system and evaluate the accuracy of the virtually planned versus the actually prepared implant site created in vitro. Four fully edentulous mandibular and four partially edentulous maxillary duplicated casts were used. Six implants were planned in the mandibular and four in the maxillary casts. A total of 40 osteotomy sites were prepared in the casts using a stereolithographic template integrated with augmented reality-based surgical simulation. During the surgery, the dentist could be guided accurately through a head-mounted display by superimposing the virtual auxiliary line and the drill stop. The deviation between planned and prepared positions of the implants was measured via postoperative computed tomography scan images. Mean and standard deviation of the discrepancy between planned and prepared sites at the entry point, apex, angle, depth, and lateral locations were 0.50 ± 0.33 mm, 0.96 ± 0.36 mm, 2.70 ± 1.55°, 0.33 ± 0.27 mm, and 0.86 ± 0.34 mm, respectively, for the fully edentulous mandible, and 0.46 ± 0.20 mm, 1.23 ± 0.42 mm, 3.33 ± 1.42°, 0.48 ± 0.37 mm, and 1.1 ± 0.39 mm, respectively, for the partially edentulous maxilla. There was a statistically significant difference in the apical deviation between maxilla and mandible in this surgical simulation (p < .05). Deviation of implant placement from the planned position was significantly reduced by integrating a surgical template with augmented reality technology. © 2013 Wiley Periodicals, Inc.
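The entry, apex, and angular deviations reported above can be computed from planned and measured implant coordinates with simple vector geometry. The sketch below is a hypothetical illustration of those metrics; the study's actual values were derived from registered postoperative CT data.

```python
import numpy as np

def implant_deviation(planned_entry, planned_apex, actual_entry, actual_apex):
    """Entry, apex, and angular deviation between a planned and a
    prepared implant site (coordinates in mm, angle in degrees).

    Hypothetical helper illustrating the reported metrics.
    """
    entry_dev = np.linalg.norm(actual_entry - planned_entry)
    apex_dev = np.linalg.norm(actual_apex - planned_apex)
    v1 = planned_apex - planned_entry          # planned implant axis
    v2 = actual_apex - actual_entry            # prepared implant axis
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return entry_dev, apex_dev, angle_deg

# Hypothetical example: a 10 mm implant planned vertically, prepared
# with a 0.5 mm entry offset and a slight tilt.
planned = (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 10.0]))
actual = (np.array([0.5, 0.0, 0.0]), np.array([0.5, 1.0, 10.0]))
entry, apex, angle = implant_deviation(*planned, *actual)  # entry deviation 0.5 mm
```

Depth and lateral deviations would follow from projecting the apex error onto and perpendicular to the planned axis; they are omitted to keep the sketch short.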
New tools for sculpting cranial implants in a shared haptic augmented reality environment.
Ai, Zhuming; Evenhouse, Ray; Leigh, Jason; Charbel, Fady; Rasmussen, Mary
2006-01-01
New volumetric tools were developed for the design and fabrication of high quality cranial implants from patient CT data. These virtual tools replace time consuming physical sculpting, mold making and casting steps. The implant is designed by medical professionals in tele-immersive collaboration. Virtual clay is added in the virtual defect area on the CT data using the adding tool. With force feedback the modeler can feel the edge of the defect and fill only the space where no bone is present. A carving tool and a smoothing tool are then used to sculpt and refine the implant. To make a physical evaluation, the skull with simulated defect and the implant are fabricated via stereolithography to allow neurosurgeons to evaluate the quality of the implant. Initial tests demonstrate a very high quality fit. These new haptic volumetric sculpting tools are a critical component of a comprehensive tele-immersive system.
ERIC Educational Resources Information Center
Sullivan, Sarah; Gnesdilow, Dana; Puntambekar, Sadhana; Kim, Jee-Seon
2017-01-01
Physical and virtual experimentation are thought to have different affordances for supporting students' learning. Research investigating the use of physical and virtual experiments to support students' learning has identified a variety of, sometimes conflicting, outcomes. Unanswered questions remain about how physical and virtual experiments may…
Shema, Shirley Roth; Brozgol, Marina; Dorfman, Moran; Maidan, Inbal; Sharaby-Yeshayahu, Lior; Malik-Kozuch, Hila; Wachsler Yannai, Orly; Giladi, Nir; Hausdorff, Jeffrey M; Mirelman, Anat
2014-09-01
Current literature views safe gait as a complex task, relying on motor and cognitive resources. The use of virtual reality (VR) in gait training offers a multifactorial approach, showing positive effects on mobility, balance, and fall risk in elderly people and individuals with neurological disorders. This form of training has been described as a viable research tool; however, it has not been applied routinely in clinical practice. Recently, VR was used to develop an adjunct training method for use by physical therapists in an ambulatory clinical setting. The aim of this article is to describe the initial clinical experience of applying a 5-week VR clinical service to improve gait and mobility in people with a history of falls, poor mobility, or postural instability. A retrospective data analysis was conducted. The clinical records of the first 60 patients who completed the VR gait training program were examined. Training was provided 3 times per week for 5 weeks, with each session lasting approximately 1 hour and consisting of walking on a treadmill while negotiating virtual obstacles. Main outcome measures were compared across time and included the Timed "Up & Go" Test (TUG), the Two-Minute Walk Test (2MWT), and the Four Square Step Test (FSST). After 5 weeks of training, time to complete the TUG decreased by 10.3%, the distance walked during the 2MWT increased by 9.5%, and performance on the FSST improved by 13%. Limitations of the study include the use of a retrospective analysis with no control group and the lack of objective cognitive assessment. Treadmill training with VR appears to be an effective and practical tool that can be applied in an outpatient physical therapy clinic. This training apparently leads to improvements in gait, mobility, and postural control. It, perhaps, also may augment cognitive and functional aspects. © 2014 American Physical Therapy Association.
Augmenting Human Performance in Remotely Piloted Aircraft.
Gruenwald, Christina M; Middendorf, Matthew S; Hoepf, Michael R; Galster, Scott M
2018-02-01
An experiment in a program of research supporting the sense-assess-augment (SAA) framework is described. The objective is to use physiological measures to assess operator cognitive workload in remotely piloted aircraft (RPA) operations and to provide augmentation to assist the operator in times of high workload. In previous experiments, physiological measures were identified that demonstrate sensitivity to changes in workload. The current research focuses solely on the augmentation component of the SAA paradigm. This line of research uses a realistic RPA simulation with varying levels of workload. Twelve individuals (6 women) were recruited from the Midwest region to participate in the experiment. The subjects were trained to perform a surveillance task and a tracking task using RPAs. There was also a secondary task in which subjects were required to answer cognitive probes. A within-subjects factorial design was employed with three factors per task. Subjective workload estimates were acquired using the NASA-TLX. Performance data were calculated using a composite scoring algorithm. Augmentation significantly improved performance and reduced workload in both tasks. In the surveillance task, augmentation increased performance from 573.78 to 679.04. Likewise, augmentation increased performance in the tracking task from 749.39 to 791.81. Augmentation was more beneficial in high-workload conditions than in low-workload conditions. The increase in performance and decrease in workload associated with augmentation is an important and anticipated finding. It suggests that augmentation should only be provided when it is truly needed, especially if the augmentation requires additional assets and/or resources. Gruenwald CM, Middendorf MS, Hoepf MR, Galster SM. Augmenting human performance in remotely piloted aircraft. Aerosp Med Hum Perform. 2018; 89(2):115-121.
Virtual surgery in a (tele-)radiology framework.
Glombitza, G; Evers, H; Hassfeld, S; Engelmann, U; Meinzer, H P
1999-09-01
This paper presents telemedicine as an extension of a teleradiology framework through tools for virtual surgery. To classify the described methods and applications, the research field of virtual reality (VR) is broadly reviewed. Differences with respect to technical equipment, methodological requirements and areas of application are pointed out. Desktop VR, augmented reality, and virtual reality are differentiated and discussed in some typical contexts of diagnostic support, surgical planning, therapeutic procedures, simulation and training. Visualization techniques are compared as a prerequisite for virtual reality and assigned to distinct levels of immersion. The advantage of a hybrid visualization kernel is emphasized with respect to the desktop VR applications that are subsequently shown. Moreover, software design aspects are considered by outlining functional openness in the architecture of the host system. Here, a teleradiology workstation was extended by dedicated tools for surgical planning through a plug-in mechanism. Examples of recent areas of application are introduced such as liver tumor resection planning, diagnostic support in heart surgery, and craniofacial surgery planning. In the future, surgical planning systems will become more important. They will benefit from improvements in image acquisition and communication, new image processing approaches, and techniques for data presentation. This will facilitate preoperative planning and intraoperative applications.
Virtual Reality-Based Center of Mass-Assisted Personalized Balance Training System.
Kumar, Deepesh; González, Alejandro; Das, Abhijit; Dutta, Anirban; Fraisse, Philippe; Hayashibe, Mitsuhiro; Lahiri, Uttama
2017-01-01
Poststroke hemiplegic patients often show altered weight distribution with balance disorders, increasing their risk of fall. Conventional balance training, though powerful, suffers from scarcity of trained therapists, frequent visits to clinics to get therapy, one-on-one therapy sessions, and monotony of repetitive exercise tasks. Thus, technology-assisted balance rehabilitation can be an alternative solution. Here, we chose virtual reality as a technology-based platform to develop motivating balance tasks. This platform was augmented with off-the-shelf available sensors such as Nintendo Wii balance board and Kinect to estimate one's center of mass (CoM). The virtual reality-based CoM-assisted balance tasks (Virtual CoMBaT) was designed to be adaptive to one's individualized weight-shifting capability quantified through CoM displacement. Participants were asked to interact with Virtual CoMBaT that offered tasks of varying challenge levels while adhering to ankle strategy for weight shifting. To facilitate the patients to use ankle strategy during weight-shifting, we designed a heel lift detection module. A usability study was carried out with 12 hemiplegic patients. Results indicate the potential of our system to contribute to improving one's overall performance in balance-related tasks belonging to different difficulty levels.
Interreality: A New Paradigm for E-health.
Riva, Giuseppe
2009-01-01
"Interreality" is a personalized immersive e-therapy whose main novelty is a hybrid, closed-loop empowering experience bridging physical and virtual worlds. The main feature of interreality is a twofold link between the virtual and the real world: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through: (1) 3D Shared Virtual Worlds: role-playing experiences in which one or more users interact with one another within a 3D world; (2) Bio and Activity Sensors (From the Real to the Virtual World): They are used to track the emotional/health/activity status of the user and to influence his/her experience in the virtual world (aspect, activity and access); (3) Mobile Internet Appliances (From the Virtual to the Real One): In interreality, the social and individual user activity in the virtual world has a direct link with the users' life through a mobile phone/digital assistant. The different technologies that are involved in the interreality vision and its clinical rationale are addressed and discussed.
Using voice input and audio feedback to enhance the reality of a virtual experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miner, N.E.
1994-04-01
Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.
Impact of virtual microscopy with conventional microscopy on student learning in dental histology.
Hande, Alka Harish; Lohe, Vidya K; Chaudhary, Minal S; Gawande, Madhuri N; Patil, Swati K; Zade, Prajakta R
2017-01-01
In dental histology, the assimilation of histological features of different dental hard and soft tissues is done by conventional microscopy. This traditional method of learning prevents the students from screening the entire slide and changing magnification. To address these drawbacks, modifications to conventional microscopy have evolved and have motivated a change of learning tool. Virtual microscopy is a technique in which the microscopic glass slide is completely digitized so that it can be analyzed on a computer. This research is designed to evaluate the effectiveness of virtual microscopy versus conventional microscopy for student learning in dental histology. A cohort of 105 students was included and randomized into three groups: A, B, and C. Group A students studied the microscopic features of oral histologic lesions by conventional microscopy, Group B by virtual microscopy, and Group C by both conventional and virtual microscopy. The students' understanding of the subject was evaluated by a prepared questionnaire. The effectiveness of the study designs on knowledge gains and satisfaction levels was assessed by statistical comparison of differences in mean test scores. The difference in score between Groups A, B, and C at pre- and post-test was highly significant. This enhanced understanding of the subject may be due to the benefits of using virtual microscopy in teaching histology. The augmentation of conventional microscopy with virtual microscopy enhances understanding of the subject compared to the use of conventional microscopy or virtual microscopy alone.
Utilization of a virtual patient for advanced assessment of student performance in pain management.
Smith, Michael A; Waite, Laura H
2017-09-01
To assess student performance and achievement of course objectives following the integration of a virtual patient case designed to promote active, patient-centered learning in a required pharmacy course. DecisionSim™ (Kynectiv, Inc., Chadsford, PA), a dynamic virtual patient platform, was used to implement an interactive patient case to augment pain management material presented during a didactic session in a pharmacotherapy course. Simulation performance data were collected and analyzed. Student exam performance on pain management questions was compared to student exam performance on nearly identical questions from a prior year when a paper-based case was used instead of virtual patient technology. Students who performed well on the virtual patient case performed better on exam questions related to patient assessment (p = 0.0244), primary pharmacological therapy (p = 0.0001), and additional pharmacological therapy (p = 0.0001). Overall exam performance did not differ between the two groups. However, students with exposure to the virtual patient case demonstrated significantly better performance on higher level Bloom's Taxonomy questions that required them to create pharmacotherapy regimens (p = 0.0005). Students in the previous year (exposed only to a paper patient case) performed better in calculating conversions of opioids for patients (p = 0.0001). Virtual patient technology may enhance student performance on high-level Bloom's Taxonomy examination questions. This study adds to the current literature demonstrating the value of virtual patient technology as an active-learning strategy. Copyright © 2017 Elsevier Inc. All rights reserved.
Augmented reality in neurosurgery: a systematic review.
Meola, Antonio; Cutolo, Fabrizio; Carbone, Marina; Cagnazzo, Federico; Ferrari, Mauro; Ferrari, Vincenzo
2017-10-01
Neuronavigation has become an essential neurosurgical tool in pursuing minimal invasiveness and maximal safety, even though it has several technical limitations. Augmented reality (AR) neuronavigation is a significant advance, providing a real-time updated 3D virtual model of anatomical details, overlaid on the real surgical field. Currently, only a few AR systems have been tested in a clinical setting. The aim of this study is to review such devices. We performed a PubMed search of reports restricted to human studies of in vivo applications of AR in any neurosurgical procedure using the search terms "Augmented reality" and "Neurosurgery." Eligibility assessment was performed independently by two reviewers in an unblinded standardized manner. The systems were qualitatively evaluated on the basis of the following: neurosurgical subspecialty of application, pathology of treated lesions and lesion locations, real data source, virtual data source, tracking modality, registration technique, visualization processing, display type, and perception location. Eighteen studies were included during the period 1996 to September 30, 2015. The AR systems were grouped by the real data source: microscope (8), hand- or head-held cameras (4), direct patient view (2), endoscope (1), X-ray fluoroscopy (1), and head-mounted display (1). A total of 195 lesions were treated: 75 (38.46%) were neoplastic, 77 (39.48%) neurovascular, and 1 (0.51%) hydrocephalus, and 42 (21.53%) were undetermined. Current literature confirms that AR is a reliable and versatile tool when performing minimally invasive approaches in a wide range of neurosurgical diseases, although prospective randomized studies are not yet available and technical improvements are needed.
Streepey, Jefferson W; Kenyon, Robert V; Keshner, Emily A
2007-01-01
We previously reported responses to induced postural instability in young healthy individuals viewing visual motion with a narrow (25 degrees in both directions) and wide (90 degrees and 55 degrees in the horizontal and vertical directions) field of view (FOV) as they stood on different sized blocks. Visual motion was achieved using an immersive virtual environment that moved realistically with head motion (natural motion) and translated sinusoidally at 0.1 Hz in the fore-aft direction (augmented motion). We observed that a subset of the subjects (steppers) could not maintain continuous stance on the smallest block when the virtual environment was in motion. We completed a posteriori analyses on the postural responses of the steppers and non-steppers that may inform us about the mechanisms underlying these differences in stability. We found that when viewing augmented motion with a wide FOV, there was a greater effect on the head and whole body center of mass and ankle angle root mean square (RMS) values of the steppers than of the non-steppers. FFT analyses revealed greater power at the frequency of the visual stimulus in the steppers compared to the non-steppers. Whole body COM time lags relative to the augmented visual scene revealed that the time-delay between the scene and the COM was significantly increased in the steppers. The increased responsiveness to visual information suggests a greater visual field-dependency of the steppers and suggests that the thresholds for shifting from a reliance on visual information to somatosensory information can differ even within a healthy population.
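The RMS and FFT analyses described above can be sketched as follows: compute the RMS of a sway signal and its spectral power in the bin nearest the 0.1 Hz visual stimulus. The exact windowing, detrending, and power-spectrum estimator used in the study are assumptions here.

```python
import numpy as np

def sway_metrics(signal, fs, stim_hz=0.1):
    """RMS of a sway signal and its spectral power at the visual
    stimulus frequency (0.1 Hz in the study).

    A minimal sketch, assuming simple mean removal and a raw
    periodogram; the study's precise estimator may differ.
    """
    x = np.asarray(signal, dtype=float)
    x = x - np.mean(x)                         # remove offset before RMS/FFT
    rms = np.sqrt(np.mean(x ** 2))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2        # unnormalized spectral power
    idx = int(np.argmin(np.abs(freqs - stim_hz)))  # bin nearest the stimulus
    return rms, power[idx]
```

Applied to, say, a center-of-mass trace sampled at 10 Hz, a larger power value at the 0.1 Hz bin would indicate stronger entrainment to the visual stimulus, matching the comparison drawn between steppers and non-steppers.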
A Driving Behaviour Model of Electrical Wheelchair Users
Hamam, Y.; Djouani, K.; Daachi, B.; Steyn, N.
2016-01-01
In spite of the presence of powered wheelchairs, some users still experience steering challenges and manoeuvring difficulties that limit their capacity to navigate effectively. For such users, steering support and assistive systems may be necessary. For the assistance to be of value, the assistive control must be adaptable to the user's steering behaviour. This paper contributes to wheelchair steering improvement by modelling the steering behaviour of powered wheelchair users, for integration into the control system. More precisely, the modelling is based on the improved Directed Potential Field (DPF) method for trajectory planning. The method facilitated the formulation of a simple behaviour model that is also linear in parameters. To obtain the steering data for parameter identification, seven individuals drove the wheelchair in different virtual worlds on the augmented platform. The obtained data enabled the estimation of user parameters, using the ordinary least squares method, with satisfactory regression analysis results. PMID:27148362
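Because the behaviour model is linear in its parameters, the identification step reduces to ordinary least squares. A minimal two-parameter solver via the normal equations is sketched below; the actual DPF regressors are not given in the abstract, so the design matrix here is purely hypothetical.

```python
def ols_fit(phi, y):
    """Ordinary least squares for a model linear in parameters,
    y ~ phi @ theta, solved from the normal equations
    theta = (Phi^T Phi)^-1 Phi^T y (2 parameters, Cramer's rule)."""
    a = b = c = 0.0          # Phi^T Phi = [[a, b], [b, c]]
    u = v = 0.0              # Phi^T y  = [u, v]
    for (p0, p1), yi in zip(phi, y):
        a += p0 * p0; b += p0 * p1; c += p1 * p1
        u += p0 * yi; v += p1 * yi
    det = a * c - b * b
    return ((c * u - b * v) / det, (a * v - b * u) / det)
```

With noise-free data generated by known parameters, the estimate recovers them exactly, which is the sanity check usually run before fitting recorded steering data.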
First Experiments with the Tango Tablet for Indoor Scanning
NASA Astrophysics Data System (ADS)
Diakité, Abdoulaye A.; Zlatanova, Sisi
2016-06-01
During the last two decades, the third dimension has taken a central place in multimedia. While 3D technologies used to be mainly tools and subjects for researchers, they are becoming commercially available to the general public. To make them even more accessible, Project Tango, led by Google, integrates into a simple Android tablet sensors that can acquire the 3D information of a real-life scene. This gives a large number of applications access to such data, ranging from gaming to indoor navigation, including virtual and augmented reality. In this paper we investigate the ability of the Tango tablet to acquire indoor building environments to support applications such as indoor navigation. We performed several scans in different buildings and studied the characteristics of the output models.
Botella, Cristina; Pérez-Ara, M Ángeles; Bretón-López, Juana; Quero, Soledad; García-Palacios, Azucena; Baños, Rosa María
2016-01-01
Although in vivo exposure is the treatment of choice for specific phobias, some acceptability problems have been associated with it. Virtual Reality exposure has been shown to be as effective as in vivo exposure, and it is widely accepted for the treatment of specific phobias, but only preliminary data are available in the literature about the efficacy of Augmented Reality. The purpose of the present study was to examine the efficacy and acceptance of two treatment conditions for specific phobias in which the exposure component was applied in different ways: In vivo exposure (N = 31) versus an Augmented Reality system (N = 32) in a randomized controlled trial. "One-session treatment" guidelines were followed. Participants in the Augmented Reality condition significantly improved on all the outcome measures at post-treatment and follow-ups. When the two treatment conditions were compared, some differences were found at post-treatment, favoring the participants who received in vivo exposure. However, these differences disappeared at the 3- and 6-month follow-ups. Regarding participants' expectations and satisfaction with the treatment, very positive ratings were reported in both conditions. In addition, participants from in vivo exposure condition considered the treatment more useful for their problem whereas participants from Augmented Reality exposure considered the treatment less aversive. Results obtained in this study indicate that Augmented Reality exposure is an effective treatment for specific phobias and well accepted by the participants.
NASA's Hybrid Reality Lab: One Giant Leap for Full Dive
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2017-01-01
This presentation demonstrates how NASA is using consumer VR headsets, game engine technology, and NVIDIA's GPUs to create highly immersive future training systems augmented with extremely realistic haptic feedback, sound, and additional sensory information, and how these can be used to improve the engineering workflow. Included in this presentation are an environment simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR; integration of consumer VR headsets with the Active Response Gravity Offload System; and a space habitat architectural evaluation tool. Attendees will learn how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.
Uliano, D; Falciglia, G; Del Viscio, C; Picelli, A; Gandolfi, M; Passarella, A
2010-06-01
Augmentative and alternative communication devices have proved effective in helping patients with severe intellectual disability overcome their communication impairments. To contribute to the design of augmentative and alternative communication systems that better meet the needs of beginning communicators, we report our clinical experience with augmentative and alternative communication in adolescents with severe intellectual disability. Five patients who had undergone a long-term traditional speech rehabilitation program (at least 5 years) with scant improvement in linguistic function were recruited and evaluated by means of the Vineland Adaptive Behaviour Scale before and after a three-year augmentative and alternative communication intervention carried out by a multidisciplinary team. After the rehabilitative intervention, patients showed an improvement in communication, daily living skills, and socialization as measured by the Vineland Adaptive Behaviour Scale. Augmentative and alternative communication is an effective rehabilitation approach for people with severe intellectual disability and impairments in linguistic expression. Moreover, it is a useful tool that allows these patients to increase their social participation while also enhancing their self-esteem. Our clinical experience confirmed these findings in adolescents who had undergone a long-term traditional speech rehabilitation program with scant improvement, providing practical information to clinicians.
Kjaergaard, Hanne; Foldgast, Anne Maria; Dykes, Anna-Karin
2007-01-01
Background Non-progressive labour is the most common complication in nulliparas and is primarily treated by augmentation. Augmented labour is often terminated by instrumental delivery. Little qualitative research has addressed experiences of non-progressive and augmented deliveries. The aim of this study was to gain a deeper understanding of the experience of non-progressive and augmented labour among nulliparas and their experience of the care they received. Methods A qualitative study was conducted using individual interviews. Data was collected and analysed according to the Grounded Theory method. The participants were a purposive sample of ten women. The interviews were conducted 4–15 weeks after delivery. Results The women had contrasting experiences during the birth process. During labour there was a conflict between the expectation of having a natural delivery and actually having a medical delivery. The women experienced a feeling of separation between mind and body. Interacting with the midwife had a major influence on feelings of losing and regaining control. Reconciliation between the contrasting feelings during labour was achieved. The core category was named Dialectical Birth Process and comprised three categories: Balancing natural and medical delivery, Interacting, and Losing and regaining control. Conclusion A dialectical process was identified in these women's experiences of non-progressive labour. The process is susceptible to interaction with the midwife, especially her support for the woman's feeling of being in control. Midwives should ensure that the woman's recognition that her labour is non-progressive and requires augmentation is handled with respect for the dialectical process. Augmentation of labour should be managed as close to the course of natural labour and delivery as possible. PMID:17662152
Evaluation of the cognitive effects of travel technique in complex real and virtual environments.
Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F
2010-01-01
We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.
Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm.
Sun, Peng; Chang, Shengqian; Liu, Siqi; Tao, Xiao; Wang, Chang; Zheng, Zhenrong
2018-04-16
In this paper, a method is proposed to implement noise-reduced three-dimensional (3D) holographic near-eye display with a phase-only computer-generated hologram (CGH). The CGH is calculated with a double-convergence light Gerchberg-Saxton (GS) algorithm, in which the phases of two virtual convergence lights are introduced into the GS algorithm simultaneously. The phase of the first convergence light replaces the random phase as the iterative initial value, and the phase of the second convergence light modulates the phase distribution calculated by the GS algorithm. Both simulations and experiments were carried out to verify the feasibility of the proposed method. The results indicate that this method can effectively reduce noise in the reconstruction. The field of view (FOV) of the reconstructed image reaches 40 degrees, and the experimental light path in the 4-f system is shortened. The 3D experiments demonstrate that the proposed algorithm can present 3D images with a 180 cm zooming range and continuous depth cues. This method may provide a promising solution for future 3D augmented reality (AR) realization.
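The core Gerchberg-Saxton iteration underlying such CGH computation can be sketched in one dimension. This is an illustrative sketch only: the double-convergence phases and the 2-D optics of the paper are omitted, the DFT is naive, and all names are assumptions.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) DFT; forward unscaled, inverse divides by n."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[j] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for j in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def gs_phase(target_amp, init_phase, iters=50):
    """Classic GS loop: find a phase-only field whose Fourier
    transform approximates the target amplitude.  The paper's variant
    replaces the usual random init_phase with a converging-light phase."""
    n = len(target_amp)
    field = [cmath.exp(1j * p) for p in init_phase]  # phase-only start
    for _ in range(iters):
        far = dft(field)
        # image plane: impose target amplitude, keep computed phase
        far = [target_amp[k] * cmath.exp(1j * cmath.phase(far[k]))
               for k in range(n)]
        near = dft(far, inverse=True)
        # hologram plane: impose the phase-only constraint
        field = [cmath.exp(1j * cmath.phase(v)) for v in near]
    return [cmath.phase(v) for v in field]
```

Seeding the loop with a structured (e.g. converging-light) phase instead of a random one is exactly the kind of initialization change the double-convergence method exploits to suppress speckle noise.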
Duan, Liya; Guan, Tao; Yang, Bo
2009-01-01
Augmented reality (AR) is a field of computer research dealing with the combination of the real world and computer-generated data. Registration is one of the most difficult problems currently limiting the usability of AR systems. In this paper, we propose a novel natural-feature-tracking-based registration method for AR applications. The proposed method has the following advantages: (1) It is simple and efficient, as no man-made markers are needed for either indoor or outdoor AR applications; moreover, it can work with arbitrary geometric shapes, including planar, near-planar, and non-planar structures, which greatly enhances the usability of AR systems. (2) Thanks to the reduced-SIFT-based augmented optical flow tracker, the virtual scene can still be augmented on the specified areas even under occlusion and large changes in viewpoint during the entire process. (3) It is easy to use, because the adaptive classification-tree-based matching strategy gives fast and accurate initialization, even when the initial camera view differs from the reference image to a large degree. Experimental evaluations validate the performance of the proposed method for online pose tracking and augmentation.
FlyAR: augmented reality supported micro aerial vehicle navigation.
Zollmann, Stefanie; Hoppe, Christof; Langlotz, Tobias; Reitmayr, Gerhard
2014-04-01
Micro aerial vehicles equipped with high-resolution cameras can be used to create aerial reconstructions of an area of interest. In that context, automatic flight path planning and autonomous flying are often applied but so far cannot fully replace the human in the loop, supervising the flight on-site to assure that there are no collisions with obstacles. Unfortunately, this workflow yields several issues, such as the need to mentally transfer the aerial vehicle's position between 2D map positions and the physical environment, and the complicated depth perception of objects flying in the distance. Augmented Reality can address these issues by bringing the flight planning process on-site and visualizing the spatial relationship between the planned or current positions of the vehicle and the physical environment. In this paper, we present Augmented Reality supported navigation and flight planning of micro aerial vehicles by augmenting the user's view with relevant information for flight planning and live feedback for flight supervision. Furthermore, we introduce additional depth hints supporting the user in understanding the spatial relationship of virtual waypoints in the physical world and investigate the effect of these visualization techniques on spatial understanding.
Fast Markerless Tracking for Augmented Reality in Planar Environment
NASA Astrophysics Data System (ADS)
Basori, Ahmad Hoirul; Afif, Fadhil Noer; Almazyad, Abdulaziz S.; AbuJabal, Hamza Ali S.; Rehman, Amjad; Alkawaz, Mohammed Hazim
2015-12-01
Markerless tracking for augmented reality should not only be accurate but also fast enough to provide seamless synchronization between real and virtual beings. Currently reported methods show that vision-based tracking is accurate but requires high computational power. This paper proposes a real-time hybrid method for tracking unknown environments in markerless augmented reality. The proposed method combines a vision-based approach with accelerometer and gyroscope sensors as a camera pose predictor. To align the augmentation relative to camera motion, the tracking method substitutes feature-based camera estimation with a combination of inertial sensors and a complementary filter to provide a more dynamic response. The proposed method managed to track an unknown environment with faster processing time than available feature-based approaches. Moreover, the proposed method can sustain its estimation in situations where feature-based tracking loses track. The sensor-based tracking performed the task at about 22.97 FPS, up to five times faster than the feature-based tracking method used as comparison. Therefore, the proposed method can be used to track unknown environments without depending on the number of features in the scene, while requiring lower computational cost.
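The sensor-fusion idea named in this abstract, a complementary filter blending drift-free but noisy accelerometer angles with smooth but drifting gyro integration, can be sketched for a single axis. The 0.98 blend factor, the one-axis simplification, and all names are illustrative assumptions, not the paper's filter.

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """One-axis complementary filter: high-pass the integrated gyro
    rate, low-pass the accelerometer-derived angle.  gyro_rates in
    rad/s, accel_angles in rad, dt in seconds."""
    angle = accel_angles[0]            # initialise from accelerometer
    out = []
    for rate, acc in zip(gyro_rates, accel_angles):
        gyro_est = angle + rate * dt   # integrate angular rate (drifts)
        angle = alpha * gyro_est + (1 - alpha) * acc  # blend in accel
        out.append(angle)
    return out
```

The low-pass term bounds the gyro's bias drift, which is what lets such a predictor keep a usable pose estimate when feature-based tracking momentarily fails.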
NASA Astrophysics Data System (ADS)
Damayanti, Latifah Adelina; Ikhsan, Jaslin
2017-05-01
Integration of information technology in education is increasingly realized through learning media. Three-dimensional (3D) molecular modeling in Augmented Reality is a tangible manifestation of this increasingly modern use of technology. Based on augmented reality, a three-dimensional virtual object is projected in real time into the actual environment. This paper reviews the use of a chemistry learning supplement book on aldehydes and ketones equipped with three-dimensional molecular models, by which students can inspect molecules from various viewpoints. To play the 3D illustrations printed in the book, smartphones with integrated open-source Augmented Reality software can be used. The aims of this research were to develop the monograph of aldehydes and ketones with three-dimensional (3D) illustrations, to determine the specification of the monograph, and to determine its quality. The quality of the monograph was evaluated by experienced chemistry teachers on five aspects: contents/materials, presentation, language and images, graphics, and software engineering. The evaluation showed that the book has very good quality for use as a chemistry learning supplement.
Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua
2016-05-30
Augmented reality system can be applied to provide precise guidance for various kinds of manual works. The adaptability and guiding accuracy of such systems are decided by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding designing scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector and the general back projection model is derived with geometry relationship between digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of projector. To validate the proposed back projection model, the coordinate data collected by a 3D positioning equipment is used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with subpixel pattern projecting technique.
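The back-projection idea in the record below reduces, in its textbook form, to projecting a digitized 3-D point through extrinsic and intrinsic parameters to a projector pixel. A minimal pinhole sketch follows; the paper's general model and calibration are not reproduced, and the parameter names are hypothetical.

```python
def project_point(point, rt, k):
    """Pinhole projection p = K [R|t] X of a 3-D point into projector
    pixel coordinates.  rt = (R, t) with R a 3x3 rotation (nested
    lists) and t a translation; k = (fx, fy, cx, cy) intrinsics."""
    x, y, z = point
    r, t = rt
    # projector-frame coordinates: Xc = R X + t
    xc = [sum(r[i][j] * (x, y, z)[j] for j in range(3)) + t[i]
          for i in range(3)]
    fx, fy, cx, cy = k
    # perspective division then intrinsic scaling/offset
    return (fx * xc[0] / xc[2] + cx, fy * xc[1] / xc[2] + cy)
```

Calibration then amounts to choosing (R, t) and the intrinsics that minimize the pixel error of such projections against points measured by the external 3D positioning equipment.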
An Augmented Reality Nanomanipulator for Learning Nanophysics: The "NanoLearner" Platform
NASA Astrophysics Data System (ADS)
Marchi, Florence; Marliere, Sylvain; Florens, Jean Loup; Luciani, Annie; Chevrier, Joel
This work describes and evaluates an augmented reality nanomanipulator, the "NanoLearner" platform, used as an educational tool in practical nanophysics classes. Through virtual reality associated with multisensory renderings, students are immersed in the nanoworld, where they can interact in real time with a sample surface or an object using senses such as hearing, sight, and touch. The role of each sensory rendering in the understanding and control of the "approach-retract" interaction was determined through statistical studies obtained during the practical sessions. Finally, we present two extensions of this innovative tool: investigating nano effects in living organisms and allowing the general public to gain a natural understanding of nanophenomena.
Near-to-eye electroholography via guided-wave acousto-optics for augmented reality
NASA Astrophysics Data System (ADS)
Jolly, Sundeep; Savidis, Nickolaos; Datta, Bianca; Smalley, Daniel; Bove, V. Michael
2017-03-01
Near-to-eye holographic displays act to directly project wavefronts into a viewer's eye in order to recreate 3-D scenes for augmented or virtual reality applications. Recently, several solutions for near-to-eye electroholography have been proposed based on digital spatial light modulators in conjunction with supporting optics, such as holographic waveguides for light delivery; however, such schemes are limited by the inherent low space-bandwidth product available with current digital SLMs. In this paper, we depict a fully monolithic, integrated optical platform for transparent near-to-eye holographic display requiring no supporting optics. Our solution employs a guided-wave acousto-optic spatial light modulator implemented in lithium niobate in conjunction with an integrated Bragg-regime reflection volume hologram.
Interactive 3D visualization for theoretical virtual observatories
NASA Astrophysics Data System (ADS)
Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.
2018-06-01
Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.
NASA Astrophysics Data System (ADS)
Abercrombie, S. P.; Menzies, A.; Goddard, C.
2017-12-01
Virtual and augmented reality enable scientists to visualize environments that are very difficult, or even impossible to visit, such as the surface of Mars. A useful immersive visualization begins with a high quality reconstruction of the environment under study. This presentation will discuss a photogrammetry pipeline developed at the Jet Propulsion Laboratory to reconstruct 3D models of the surface of Mars using stereo images sent back to Earth by the Curiosity Mars rover. The resulting models are used to support a virtual reality tool (OnSight) that allows scientists and engineers to visualize the surface of Mars as if they were standing on the red planet. Images of Mars present challenges to existing scene reconstruction solutions. Surface images of Mars are sparse with minimal overlap, and are often taken from extremely different viewpoints. In addition, the specialized cameras used by Mars rovers are significantly different than consumer cameras, and GPS localization data is not available on Mars. This presentation will discuss scene reconstruction with an emphasis on coping with limited input data, and on creating models suitable for rendering in virtual reality at high frame rate.
Mapping the Terrain: Educational Leadership Field Experiences in K-12 Virtual Schools
ERIC Educational Resources Information Center
LaFrance, Jason A.; Beck, Dennis
2014-01-01
Opportunities for K-12 students to choose virtual and blended learning experiences continue to grow. All 50 states including Washington, D.C., now offer some virtual experience in K-12 education. Of these, 40 states have state virtual schools or state-led online learning initiatives. In addition, federal and state support for this type of learning…
Human computer confluence applied in healthcare and rehabilitation.
Viaud-Delmon, Isabelle; Gaggioli, Andrea; Ferscha, Alois; Dunne, Stephen
2012-01-01
Human computer confluence (HCC) is an ambitious research program studying how the emerging symbiotic relation between humans and computing devices can enable radically new forms of sensing, perception, interaction, and understanding. It is an interdisciplinary field, bringing together researchers from areas as varied as pervasive computing, bio-signal processing, neuroscience, electronics, robotics, and virtual and augmented reality, and it offers great potential for applications in medicine and rehabilitation.
ERIC Educational Resources Information Center
Lee, Mark J. W.; Nikolic, Sasha; Vial, Peter J.; Ritz, Christian H.; Li, Wanqing; Goldfinch, Tom
2016-01-01
Project-based learning is a widely used pedagogical strategy in engineering education shown to be effective in fostering problem-solving, design, and teamwork skills. There are distinct benefits to be gained from giving students autonomy in determining the nature and scope of the projects that they wish to undertake, but a lack of expert guidance…
ERIC Educational Resources Information Center
Borrero, A. Mejias; Marquez, J. M. Andujar
2012-01-01
Lab practices are an essential part of teaching in Engineering. However, traditional laboratory lessons developed in classroom labs (CL) must be adapted to teaching and learning strategies that go far beyond the common concept of e-learning, in the sense that completely virtualized distance education disconnects teachers and students from the real…
ERIC Educational Resources Information Center
Chatham-Carpenter, April
2017-01-01
As students begin using more virtual and augmented reality tools for education and entertainment, they may come to expect instructors to use similar types of technology in their teaching (Kelly, 2016). Instructors may choose to employ this type of technology in the future, to create more of a real-life feel in the online classroom. The effects of…
Designing a successful HMD-based experience
NASA Technical Reports Server (NTRS)
Pierce, J. S.; Pausch, R.; Sturgill, C. B.; Christiansen, K. D.; Kaiser, M. K. (Principal Investigator)
1999-01-01
For entertainment applications, a successful virtual experience based on a head-mounted display (HMD) needs to overcome some or all of the following problems: entering a virtual world is a jarring experience, people do not naturally turn their heads or talk to each other while wearing an HMD, putting on the equipment is hard, and people do not realize when the experience is over. In the Electric Garden at SIGGRAPH 97, we presented the Mad Hatter's Tea Party, a shared virtual environment experienced by more than 1,500 SIGGRAPH attendees. We addressed these HMD-related problems with a combination of back story, see-through HMDs, virtual characters, continuity of real and virtual objects, and the layout of the physical and virtual environments.
Virtual experiments: a new approach for improving process conceptualization in hillslope hydrology
NASA Astrophysics Data System (ADS)
Weiler, Markus; McDonnell, Jeff
2004-01-01
We present an approach for process conceptualization in hillslope hydrology. We develop and implement a series of virtual experiments, whereby the interaction between water flow pathways, source and mixing at the hillslope scale is examined within a virtual experiment framework. We define these virtual experiments as 'numerical experiments with a model driven by collective field intelligence'. The virtual experiments explore the first-order controls in hillslope hydrology, where the experimentalist and modeler work together to cooperatively develop and analyze the results. Our hillslope model for the virtual experiments (HillVi) in this paper is based on conceptualizing the water balance within the saturated and unsaturated zone in relation to soil physical properties in a spatially explicit manner at the hillslope scale. We argue that a virtual experiment model needs to be able to capture all major controls on subsurface flow processes that the experimentalist might deem important, while at the same time being simple with few 'tunable parameters'. This combination makes the approach, and the dialog between experimentalist and modeler, a useful hypothesis testing tool. HillVi simulates mass flux for different initial conditions under the same flow conditions. We analyze our results in terms of an artificial line source and isotopic hydrograph separation of water and subsurface flow. Our results for this first set of virtual experiments showed how drainable porosity and soil depth variability exert a first-order control on flow and transport at the hillslope scale. We found that high drainable porosity soils resulted in a restricted water table rise, resulting in more pronounced channeling of lateral subsurface flow along the soil-bedrock interface. This in turn resulted in a more anastomosing network of tracer movement across the slope. The virtual isotope hydrograph separation showed higher proportions of event water with increasing drainable porosity. When combined with previous experimental findings and conceptualizations, virtual experiments can be an effective way to isolate certain controls and examine their influence over a range of rainfall and antecedent wetness conditions.
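The isotope hydrograph separation invoked in this abstract rests on a standard two-component mixing model. A minimal sketch follows, with function and variable names as assumptions:

```python
def event_water_fraction(c_stream, c_event, c_pre_event):
    """Two-component isotope hydrograph separation: solve
        c_stream = f * c_event + (1 - f) * c_pre_event
    for the event-water fraction f.  Inputs are tracer concentrations
    (e.g. delta-18O values) of streamflow and the two end-members."""
    if c_event == c_pre_event:
        raise ValueError("end-members must differ for separation")
    return (c_stream - c_pre_event) / (c_event - c_pre_event)
```

For example, a stream value one quarter of the way from the pre-event toward the event end-member yields f = 0.25; in the virtual experiments, f rises with drainable porosity.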
Feasibility study on the readiness, suitability and acceptance of M-Learning AR in learning History
NASA Astrophysics Data System (ADS)
Taharim, Nurwahida Faradila; Lokman, Anitawati Mohd; Hanesh, Amjad; Aziz, Azhar Abd
2016-02-01
There is no doubt that globalization and innovation in technology have led to widespread use of technology in almost all sectors, including education. In recent years, the use of technology in education has expanded more widely and rapidly worldwide. Integration of technology in education continually opens new opportunities, and past studies have shown that technology enhances the teaching and learning experience. Various technologies have been integrated into the various disciplines of education. Augmented Reality (AR) in mobile learning, which allows a combination of real and virtual worlds on a mobile device, is one of the latest technologies with potential that has been applied in the field of education. The aim of this research work is to mitigate the challenges faced by end users, namely students and teachers of history classes, by creating an Augmented Reality mobile application to increase interest in both delivering and receiving the subject matter. The system consists of a mobile platform for students to view the content and a cloud-based engine to deliver the content based on a recognized marker. The impact of the system was tested on both students and teachers: students showed increased interest in learning history, while teachers expressed interest in extending and adopting the system school-wide.
Training for planning tumour resection: augmented reality and human factors.
Abhari, Kamyar; Baxter, John S H; Chen, Elvis C S; Khan, Ali R; Peters, Terry M; de Ribaupierre, Sandrine; Eagleson, Roy
2015-06-01
Planning surgical interventions is a complex task, demanding a high degree of perceptual, cognitive, and sensorimotor skill to reduce intra- and post-operative complications. The process requires spatial reasoning to coordinate between the preoperatively acquired medical images and the patient reference frame. In the case of neurosurgical interventions, traditional approaches to planning tend to focus on providing a means for visualizing medical images, but rarely support transformation between different spatial reference frames. Thus, surgeons often rely on previous experience and intuition as their sole guide when performing mental transformations. For junior residents, this may lead to longer operation times or an increased chance of error under additional cognitive demands. In this paper, we introduce a mixed augmented-/virtual-reality system to facilitate training for planning a common neurosurgical procedure, brain tumour resection. The proposed system is designed and evaluated with human factors explicitly in mind, alleviating the difficulty of mental transformation. Our results indicate that, compared to conventional planning environments, the proposed system greatly improves nonclinicians' performance, independent of the sensorimotor task performed. Furthermore, use of the proposed system by clinicians resulted in a significant reduction in the time to perform clinically relevant tasks. These results demonstrate the role of mixed-reality systems in helping residents develop the spatial reasoning skills needed for planning brain tumour resection, improving patient outcomes.
Estimating Distance in Real and Virtual Environments: Does Order Make a Difference?
Ziemer, Christine J.; Plumert, Jodie M.; Cremer, James F.; Kearney, Joseph K.
2010-01-01
This investigation examined how the order in which people experience real and virtual environments influences their distance estimates. Participants made two sets of distance estimates in one of the following conditions: 1) real environment first, virtual environment second; 2) virtual environment first, real environment second; 3) real environment first, real environment second; or 4) virtual environment first, virtual environment second. In Experiment 1, participants imagined how long it would take to walk to targets in real and virtual environments. Participants’ first estimates were significantly more accurate in the real than in the virtual environment. When the second environment was the same as the first environment (real-real and virtual-virtual), participants’ second estimates were also more accurate in the real than in the virtual environment. When the second environment differed from the first environment (real-virtual and virtual-real), however, participants’ second estimates did not differ significantly across the two environments. A second experiment in which participants walked blindfolded to targets in the real environment and imagined how long it would take to walk to targets in the virtual environment replicated these results. These subtle, yet persistent order effects suggest that memory can play an important role in distance perception. PMID:19525540
Inquiry style interactive virtual experiments: a case on circular motion
NASA Astrophysics Data System (ADS)
Zhou, Shaona; Han, Jing; Pelz, Nathaniel; Wang, Xiaojun; Peng, Liangyu; Xiao, Hua; Bao, Lei
2011-11-01
Interest in computer-based learning, especially in the use of virtual reality simulations, is increasing rapidly. While there are good reasons to believe that these technologies have the potential to improve teaching and learning, using them effectively to address specific content difficulties is challenging. To help students develop robust understanding of correct physics concepts, we have developed interactive virtual experiment simulations with the unique feature of enabling students to experience force and motion via an analogue joystick, allowing them to feel the applied force and simultaneously see its effects. The simulations provide students with learning experiences that integrate scientific representations and low-level sensory cues, such as haptic cues, in a single setting. In this paper, we introduce a virtual experiment module on circular motion. A controlled study was conducted to evaluate the impact of this virtual experiment on students' learning of force and motion in the context of circular motion. The results show that the interactive virtual experiment method is preferred by students and is more effective in helping them grasp the physics concepts than traditional methods such as problem-solving practice. Our research suggests that well-developed interactive virtual experiments can be useful tools for teaching difficult concepts in science.
Seamless 3D interaction for virtual tables, projection planes, and CAVEs
NASA Astrophysics Data System (ADS)
Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III
2000-08-01
The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered, unencumbered user interfaces and 3D interaction technologies. Such shortcomings severely limit the application of virtual reality (VR) technology to time-critical applications, as well as to scenarios involving heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally, such scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high degree of flexibility with respect to system requirements (display and I/O devices), and the ability to seamlessly and intuitively switch between different interaction modalities and techniques, are sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to Virtual Environments, focusing on the Virtual Table device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.
The effects of virtual experience on attitudes toward real brands.
Dobrowolski, Pawel; Pochwatko, Grzegorz; Skorko, Maciek; Bielecki, Maksymilian
2014-02-01
Although the commercial availability and implementation of virtual reality interfaces have seen rapid growth in recent years, little research has been conducted on the potential for virtual reality to affect consumer behavior. One unaddressed issue is how our real-world attitudes are affected when we have a virtual experience with the target of those attitudes. This study compared participant (N=60) attitudes toward car brands before and after a virtual test drive of those cars. Results indicated that attitudes toward the test brands changed after experience with virtual representations of them. Furthermore, manipulation of the quality of this experience (in this case, modification of driving difficulty) was reflected in the direction of attitude change. We discuss these results in the context of the associative-propositional evaluation model.
Results of a massive experiment on virtual currency endowments and money demand.
Živić, Nenad; Andjelković, Igor; Özden, Tolga; Dekić, Milovan; Castronova, Edward
2017-01-01
We use a 575,000-subject, 28-day experiment to investigate monetary policy in a virtual setting. The experiment tests the effect of virtual currency endowments on player retention and virtual currency demand. An increase in endowments of a virtual currency should lower the demand for the currency in the short run. However, in the long run, we would expect money demand to rise in response to inflation in the virtual world. We test for this behavior in a virtual field experiment in the football management game Top11. 575,000 players were selected at random and allocated to different "shards" or versions of the world. The shards differed only in terms of the initial money endowment offered to new players. Money demand was observed for 28 days as players used real money to purchase additional virtual currency. The results indicate that player money purchases were significantly higher in the shards where higher endowments were given. This suggests that a positive change in the money supply in a virtual context leads to inflation and increased money demand, and does so much more quickly than in real-world economies. Differences between virtual and real currency behavior will become more interesting as virtual currency becomes a bigger part of the real economy.
NASA Astrophysics Data System (ADS)
Tsoupikova, Daria
2006-02-01
This paper explores how the aesthetics of the virtual world affect, transform, and enhance the immersive emotional experience of the user. What we see and do upon entering a virtual environment influences our feelings, mental state, physiological responses, and sensibility. To create a unique virtual experience, an important component to design is the beauty of the virtual world, grounded in the aesthetics of graphical objects such as textures, models, animation, and special effects. The aesthetic potency of the images that comprise the virtual environment can make the immersive experience much stronger and more compelling. The aesthetic qualities of the virtual world, as borne out through images and graphics, can influence the user's state of mind. Particular changes and effects on the user can be induced through techniques derived from psychology, anthropology, biology, color theory, education, art therapy, music, and art history. Many contemporary artists and developers draw inspiration from their experience with traditional arts such as painting, sculpture, design, architecture, and music. This knowledge helps them create higher-quality images and stereo graphics in the virtual world. Understanding the close relation between the aesthetic quality of the virtual environment and the resulting human perception is key to developing an impressive virtual experience.
Bergmann, Jeannine; Krewer, Carmen; Bauer, Petra; Koenig, Alexander; Riener, Robert; Müller, Friedemann
2018-06-01
Active performance is crucial for motor learning, and, together with motivation, is believed to be associated with a better rehabilitation outcome. Virtual reality (VR) is an innovative approach to engage and motivate patients during training. There is promising evidence for its efficiency in retraining upper limb function. However, there is insufficient proof for its effectiveness in gait training. To evaluate the acceptability of robot-assisted gait training (RAGT) with and without VR and the feasibility of potential outcome measures to guide the planning of a larger randomized controlled trial (RCT). Single-blind randomized controlled pilot trial with two parallel arms. Rehabilitation hospital. Twenty subacute stroke patients (64±9 years) with a Functional Ambulation Classification (FAC) ≤2. Twelve sessions (over 4 weeks) of either VR-augmented RAGT (intervention group) or standard RAGT (control group). Acceptability of the interventions (drop-out rate, questionnaire), patients' motivation (Intrinsic Motivation Inventory [IMI], individual mean walking time), and feasibility of potential outcome measures (completion rate and response to interventions) were determined. We found high acceptability of repetitive VR-augmented RAGT. The drop-out rate was 1/11 in the intervention and 4/14 in the control group. Patients of the intervention group spent significantly more time walking in the robot than the control group (per session and total walking time; P<0.03). In both groups, motivation measured with the IMI was high over the entire intervention period. The felt pressure and tension significantly decreased in the intervention group (P<0.01) and was significantly lower than in the control group at the last therapy session (r=-0.66, P=0.005). The FAC is suggested as a potential primary outcome measure for a definitive RCT, as it could be assessed in all patients and showed significant response to interventions (P<0.01). 
We estimated a sample size of 44 for a future RCT. VR-augmented RAGT resulted in high acceptability and motivation, and in a reduced drop-out rate and an extended training time compared to standard RAGT. This pilot trial provides guidance for a prospective RCT on the effectiveness of VR-augmented RAGT. VR might be a promising approach to enrich and improve gait rehabilitation after stroke.
A novel 3D guidance system using augmented reality for percutaneous vertebroplasty: technical note.
Abe, Yuichiro; Sato, Shigenobu; Kato, Koji; Hyakumachi, Takahiko; Yanagibashi, Yasushi; Ito, Manabu; Abumi, Kuniyoshi
2013-10-01
Augmented reality (AR) is an imaging technology by which virtual objects are overlaid onto images of real objects captured in real time by a tracking camera. This study aimed to introduce a novel AR guidance system called virtual protractor with augmented reality (VIPAR) to visualize a needle trajectory in 3D space during percutaneous vertebroplasty (PVP). The AR system used for this study comprised a head-mount display (HMD) with a tracking camera and a marker sheet. An augmented scene was created by overlaying the preoperatively generated needle trajectory path onto a marker detected on the patient using AR software, thereby providing the surgeon with augmented views in real time through the HMD. The accuracy of the system was evaluated by using a computer-generated simulation model in a spine phantom and also evaluated clinically in 5 patients. In the 40 spine phantom trials, the error of the insertion angle (EIA), defined as the difference between the attempted angle and the insertion angle, was evaluated using 3D CT scanning. Computed tomography analysis of the 40 spine phantom trials showed that the EIA in the axial plane significantly improved when VIPAR was used compared with when it was not used (0.96° ± 0.61° vs 4.34° ± 2.36°, respectively). The same held true for EIA in the sagittal plane (0.61° ± 0.70° vs 2.55° ± 1.93°, respectively). In the clinical evaluation of the AR system, 5 patients with osteoporotic vertebral fractures underwent VIPAR-guided PVP from October 2011 to May 2012. The postoperative EIA was evaluated using CT. The clinical results of the 5 patients showed that the EIA in all 10 needle insertions was 2.09° ± 1.3° in the axial plane and 1.98° ± 1.8° in the sagittal plane. There was no pedicle breach or leakage of polymethylmethacrylate. VIPAR was successfully used to assist in needle insertion during PVP by providing the surgeon with an ideal insertion point and needle trajectory through the HMD. 
The findings indicate that AR guidance technology can become a useful assistive device during spine surgeries requiring percutaneous procedures.
ERIC Educational Resources Information Center
Jones, Frankie S.
2007-01-01
This qualitative study explored how collaborative technologies influence the informal learning experiences of virtual team members. Inputs revealed as critical to virtual informal learning were integrated, collaborative technological systems; positive relationships and trust; and organizational support and virtual team management. These inputs…
Rothbaum, Barbara Olasov; Price, Matthew; Jovanovic, Tanja; Norrholm, Seth D.; Gerardi, Maryrose; Dunlop, Boadie; Davis, Michael; Bradley, Bekh; Duncan, Erica; Rizzo, Albert “Skip”; Ressler, Kerry J.
2014-01-01
Objective To determine the effectiveness of Virtual Reality Exposure (VRE) augmented with D-cycloserine (50 mg) or alprazolam (0.25 mg), compared to placebo, in reducing PTSD due to military trauma in Iraq and Afghanistan. Method A double-blind, placebo-controlled randomized clinical trial comparing augmentation methods for VRE in subjects (n=156) with PTSD was conducted. Results PTSD symptoms significantly improved from pre- to post-treatment over the 6-session VRE treatment (p<.001) across all conditions and were maintained at 3-, 6-, and 12-month follow-up. There were no overall differences between the D-cycloserine and placebo groups on symptoms at any time point. The alprazolam and placebo conditions differed significantly on the post-treatment Clinician Administered PTSD Scale (p=.006) and on 3-month post-treatment PTSD diagnosis, such that the alprazolam group showed greater rates of PTSD (79.2% alprazolam vs. 47.8% placebo). Between-session extinction learning was a treatment-specific enhancer of outcome for the D-cycloserine group only (p<.005). At post-treatment, the D-cycloserine group was lowest on cortisol reactivity (p<.05) and startle response during VR scenes (p<.05). Conclusions A small number of VRE sessions was associated with reduced PTSD diagnosis and symptoms in Iraq/Afghanistan veterans, although there was no control condition for the VRE. Overall, there was no advantage of D-cycloserine on PTSD symptoms in primary analyses. In secondary analyses, benzodiazepine use during treatment may impair recovery, and D-cycloserine may enhance VRE in patients who demonstrate within-session learning. D-cycloserine augmentation in PTSD patients may reduce cortisol and startle reactivity compared to alprazolam and placebo treatment, consistent with the animal literature. PMID:24743802
Tang, Rui; Ma, Long-Fei; Rong, Zhi-Xia; Li, Mo-Dan; Zeng, Jian-Ping; Wang, Xue-Dong; Liao, Hong-En; Dong, Jia-Hong
2018-04-01
Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon visualize intrahepatic structures and therefore operate precisely and improve clinical outcomes. The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search for publications in the PubMed database. The primary source of literature was peer-reviewed journals up to December 2016; additional articles were identified by manually searching the references of key articles. In general, AR technology comprises 3D reconstruction, display, registration, and tracking techniques, and has recently been gradually adopted for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, permitting precise navigation during complex procedures. Liver deformation and registration errors during surgery were the main factors limiting the application of AR technology. With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed on the fusion of multiple imaging modalities, improved biomechanical liver modeling, and enhanced image data processing and tracking to increase the accuracy of current AR methods. Copyright © 2018 First Affiliated Hospital, Zhejiang University School of Medicine in China. Published by Elsevier B.V. All rights reserved.
Model-based video segmentation for vision-augmented interactive games
NASA Astrophysics Data System (ADS)
Liu, Lurng-Kuo
2000-04-01
This paper presents an architecture and algorithms for model-based video object segmentation and their application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use separate models for the background and a player object. The object segmentation algorithm operates at two levels: pixel level and object level. At the pixel level, segmentation is formulated as a maximum a posteriori probability (MAP) problem; the statistical likelihood of each pixel is calculated and used in the MAP decision. Object-level segmentation improves segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined from a motion histogram and trajectory prediction, is introduced to indicate the likely extent of a video object region for both background and foreground modeling; it also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning to the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed segmentation algorithms to several prototype virtual interactive games, in which a player can be immersed inside a game and virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4.
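The pixel-level MAP decision can be illustrated with a minimal sketch. The per-pixel Gaussian likelihoods and the fixed foreground prior below are simplifying assumptions for illustration, not the paper's actual statistical model:

```python
import numpy as np

def map_segment(frame, bg_mean, bg_var, fg_mean, fg_var, prior_fg=0.3):
    """Per-pixel MAP labeling: a pixel is foreground when
    P(x|fg)P(fg) > P(x|bg)P(bg), with Gaussian likelihoods.
    Computed in the log domain to avoid numerical underflow."""
    def log_gauss(x, mu, var):
        return -0.5 * (np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)

    log_fg = log_gauss(frame, fg_mean, fg_var) + np.log(prior_fg)
    log_bg = log_gauss(frame, bg_mean, bg_var) + np.log(1.0 - prior_fg)
    return (log_fg > log_bg).astype(np.uint8)  # 1 = player, 0 = background

# Toy frame: one background-like pixel, one player-like pixel
frame = np.array([[10.0, 200.0]])
labels = map_segment(frame, bg_mean=10.0, bg_var=25.0,
                     fg_mean=200.0, fg_var=25.0)
```

The object-level pass described in the abstract would then refine these per-pixel labels using the active region's spatial and temporal extent.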
Combining 3D structure of real video and synthetic objects
NASA Astrophysics Data System (ADS)
Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon
1998-04-01
This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to apply the proposed technology in fields such as advanced animation, virtual reality, and games. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes to augment the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than computer graphics alone. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with a 3D structure extracted from monocular image sequences. Extracting the 3D structure requires estimating 3D depth and then constructing a height map; graphic objects are then combined with the height map. Our proposed approach is carried out in the following steps: (1) We derive the 3D structure from test image sequences, which requires depth estimation and construction of a height map. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, step (3) is straightforward. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
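Steps (2) and (3) of the pipeline above can be sketched in a few lines: triangulate the height-map grid, then anchor a synthetic object using the known surface height. The tiny height map, the use of SciPy's Delaunay routine, and the `place_object` helper are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical height map H[y, x], as would be built from estimated depth
H = np.array([[0.0, 0.1, 0.2],
              [0.1, 0.3, 0.4],
              [0.2, 0.4, 0.6]])

# Step (2): Delaunay-triangulate the (x, y) grid, then lift vertices to 3D
ys, xs = np.mgrid[0:H.shape[0], 0:H.shape[1]]
pts2d = np.column_stack([xs.ravel(), ys.ravel()])
tri = Delaunay(pts2d)                           # planar triangulation
verts3d = np.column_stack([pts2d, H.ravel()])   # (x, y, z) mesh vertices

# Step (3): combining a graphic object is easy because z is known
def place_object(x, y):
    """Anchor a synthetic object on the surface at grid position (x, y)."""
    return np.array([x, y, H[int(round(y)), int(round(x))]])

base = place_object(1, 2)  # the object sits on the reconstructed surface
```

Each triangle of `tri.simplices` would then be texture-mapped from the video frame, as the abstract describes.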
Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe
2013-09-01
Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies to bring information previously confined to the radiology room into the operating theatre. This paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed stereoscopically in the da Vinci surgeon console, using the TilePro multi-input display: the upper part of the screen shows the real endoscopic surgical field, and the bottom shows the stereoscopic 3D-rendered images, which are controlled by a 3D joystick installed on the console and updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. The use of this intra-operative mixed-reality technology was considered very useful by the surgeon and represents a step toward computer-aided surgery, a field likely to progress quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.
A novel augmented reality simulator for skills assessment in minimal invasive surgery.
Lahanas, Vasileios; Loukas, Constantinos; Smailis, Nikolaos; Georgiou, Evangelos
2015-08-01
Over the past decade, simulation-based training has come to the foreground as an efficient method for training and assessing surgical skills in minimally invasive surgery. Box trainers and virtual reality (VR) simulators have been introduced into teaching curricula and have substituted to some extent the traditional model of training based on animals or cadavers. Augmented reality (AR) is a newer technology that allows blending of VR elements and real objects within a real-world scene. In this paper, we present a novel AR simulator for assessment of basic laparoscopic skills. The components of the proposed system include a box trainer, a camera, and a set of laparoscopic tools equipped with custom-made sensors that allow interaction with VR training elements. Three AR tasks were developed, focusing on basic skills such as perception of depth of field, hand-eye coordination, and bimanual operation. The construct validity of the system was evaluated via a comparison between two experience groups: novices with no experience in laparoscopic surgery and experienced surgeons. The observed metrics included task execution time, tool path length, and two task-specific errors. The study also included a feedback questionnaire requiring participants to evaluate the face validity of the system. Between-group comparisons demonstrated highly significant differences (p < 0.01) in all performance metrics and tasks, supporting the simulator's construct validity. Qualitative analysis of the instruments' trajectories highlighted differences between novices and experts regarding smoothness and economy of motion. Subjects' ratings on the feedback questionnaire supported the face validity of the training system. The results highlight the potential of the proposed simulator to discriminate between groups with different expertise, providing a proof of concept for the use of AR as a core technology for laparoscopic simulation training.
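Instrument path length and economy of motion, the metrics recurring across these simulator studies, are straightforward to compute from a sampled tool-tip trajectory. A minimal sketch, assuming an (N, 3) array of positions; definitions vary between simulators, and this economy-of-motion ratio is one common choice, not necessarily the one used here:

```python
import numpy as np

def path_length(traj):
    """Total tool-tip path length from an (N, 3) array of positions."""
    return float(np.linalg.norm(np.diff(traj, axis=0), axis=1).sum())

def economy_of_motion(traj):
    """Ratio of the straight-line start-to-end distance to the actual
    path length; values closer to 1 indicate more economical movement."""
    straight = float(np.linalg.norm(traj[-1] - traj[0]))
    total = path_length(traj)
    return straight / total if total > 0 else 1.0

# Illustrative trajectory: an L-shaped move with legs of 3 and 4 units
traj = np.array([[0, 0, 0], [3, 0, 0], [3, 4, 0]], dtype=float)
# path length = 3 + 4 = 7; straight-line distance = 5; economy = 5/7
```

Experts' trajectories would show shorter path lengths and economy ratios closer to 1 than novices' for the same task.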
Virtual Reality-Based Center of Mass-Assisted Personalized Balance Training System
Kumar, Deepesh; González, Alejandro; Das, Abhijit; Dutta, Anirban; Fraisse, Philippe; Hayashibe, Mitsuhiro; Lahiri, Uttama
2018-01-01
Poststroke hemiplegic patients often show altered weight distribution with balance disorders, increasing their risk of falls. Conventional balance training, though powerful, suffers from a scarcity of trained therapists, frequent clinic visits, one-on-one therapy sessions, and the monotony of repetitive exercise tasks. Technology-assisted balance rehabilitation can therefore be an alternative. Here, we chose virtual reality as the platform for developing motivating balance tasks, augmented with off-the-shelf sensors such as the Nintendo Wii balance board and Kinect to estimate one's center of mass (CoM). The virtual reality-based CoM-assisted balance task system (Virtual CoMBaT) was designed to adapt to one's individualized weight-shifting capability, quantified through CoM displacement. Participants interacted with Virtual CoMBaT, which offered tasks of varying challenge levels while requiring the ankle strategy for weight shifting. To help patients use the ankle strategy during weight shifting, we designed a heel-lift detection module. A usability study was carried out with 12 hemiplegic patients. Results indicate the potential of our system to improve overall performance in balance-related tasks across different difficulty levels. PMID:29359128
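A Wii-style balance board reports four corner loads, from which a center of pressure is computed as the sensor-level proxy for CoM shifts. A minimal sketch; the function name and the board dimensions are illustrative assumptions, not the paper's implementation:

```python
def center_of_pressure(tl, tr, bl, br, width=433.0, length=228.0):
    """Center of pressure from the four corner load sensors of a
    Wii-style balance board (tl/tr/bl/br = top-left, top-right,
    bottom-left, bottom-right loads in kg or N; dimensions in mm).

    Returns (x, y) relative to the board center: positive x is toward
    the right sensors, positive y toward the top sensors.
    """
    total = tl + tr + bl + br
    if total <= 0:
        raise ValueError("no load on the board")
    x = (width / 2.0) * ((tr + br) - (tl + bl)) / total
    y = (length / 2.0) * ((tl + tr) - (bl + br)) / total
    return x, y

# Equal loading -> center; loading only the right sensors -> full right shift
centered = center_of_pressure(5.0, 5.0, 5.0, 5.0)
```

Tracking this (x, y) point over time gives the weight-shift displacement that a system like Virtual CoMBaT could use to scale task difficulty.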
Embedded Augmented Reality Training System for Dynamic Human-Robot Cooperation
2009-10-01
...optical see-through (OST) head-mounted displays (HMDs) still lack in usability and ergonomics because of their size, weight, resolution, and the hard-to-realize... with addressable focal planes [10], for example. Accurate and easy-to-use calibration routines for OST HMDs remain a challenging task; established... methods are based on matching of virtual over real objects [11]; newer approaches use cameras looking directly through the HMD optics to exploit both...
Towards surgeon-authored VR training: the scene-development cycle.
Dindar, Saleh; Nguyen, Thien; Peters, Jörg
2016-01-01
Enabling surgeon-educators to themselves create virtual reality (VR) training units promises greater variety, specialization, and relevance of the units. This paper describes a software bridge that semi-automates the scene-generation cycle, a key bottleneck in authoring, modeling, and developing VR units. Augmenting an open source modeling environment with physical behavior attachment and collision specifications yields single-click testing of the full force-feedback enabled anatomical scene.
Skill Transfer and Virtual Training for IND Response Decision-Making: Project Summary and Next Steps
2016-01-01
to be very productive partners—independent video - game developers and academic game degree programs—are not familiar with working with government...experimental validation. • Independent Video - Game Developers. Small companies and individuals that pursue video - game design and development can be inexpensive...Emergency Management Agency (FEMA) that examines alternative mechanisms for training and evaluation of emergency managers (EMs) to augment and