Virtual Reality: Visualization in Three Dimensions.
ERIC Educational Resources Information Center
McLellan, Hilary
Virtual reality is a newly emerging tool for scientific visualization that makes possible multisensory, three-dimensional modeling of scientific data. While the emphasis is on visualization, the other senses are added to enhance what the scientist can visualize. Researchers are working to extend the sensory range of what can be perceived in…
Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.
Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene
2016-01-01
To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, and focal point convergence; a 3D cursor and joystick enabled a fly-through with visualization of the spiculations extending from the breast cancer. The system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.
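The stereoscopic convergence described above can be sketched in code. The following minimal Python example is an illustration of the general technique, not the D3D system's implementation; the look-at construction and the interpupillary distance are assumptions. It builds a pair of eye views that converge on a focal point such as a 3D cursor:

```python
# Minimal sketch: two virtual cameras separated by an interpupillary
# distance (IPD), both aimed at a shared focal/convergence point.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a right-handed 4x4 view matrix looking from `eye` toward `target`."""
    f = target - eye
    f = f / np.linalg.norm(f)                     # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s)  # right
    u = np.cross(s, f)                            # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def stereo_views(head_pos, focal_point, ipd=0.064):
    """Left/right view matrices converging on the focal point (e.g. a 3D cursor)."""
    right = np.cross(focal_point - head_pos, np.array([0.0, 1.0, 0.0]))
    right = right / np.linalg.norm(right)
    left_eye = head_pos - right * ipd / 2
    right_eye = head_pos + right * ipd / 2
    return look_at(left_eye, focal_point), look_at(right_eye, focal_point)

lv, rv = stereo_views(np.array([0.0, 0.0, 0.5]), np.array([0.0, 0.0, 0.0]))
```

Re-running `stereo_views` each frame with the tracked head position and cursor location gives the head-tracked, converging stereo pair the abstract describes.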
Virtual reality in surgery and medicine.
Chinnock, C
1994-01-01
This report documents the state of development of enhanced and virtual reality-based systems in medicine. Virtual reality systems seek to simulate a surgical procedure in a computer-generated world in order to improve training. Enhanced reality systems seek to augment or enhance reality by providing improved imaging alternatives for specific patient data. Virtual reality represents a paradigm shift in the way we teach and evaluate the skills of medical personnel. Driving the development of virtual reality-based simulators is laparoscopic abdominal surgery, where there is a perceived need for better training techniques; within a year, systems will be fielded for second-year residency students. Further refinements over perhaps the next five years should allow surgeons to evaluate and practice new techniques in a simulator before using them on patients. Technical developments are rapidly improving the realism of these machines to an amazing degree, as well as bringing the price down to affordable levels. In the next five years, many new anatomical models, procedures, and skills are likely to become available on simulators. Enhanced reality systems are generally being developed to improve visualization of specific patient data. Three-dimensional (3-D) stereovision systems for endoscopic applications, head-mounted displays, and stereotactic image navigation systems are being fielded now, with neurosurgery and laparoscopic surgery being major driving influences. Over perhaps the next five years, enhanced and virtual reality systems are likely to merge. This will permit patient-specific images to be used on virtual reality simulators or computer-generated landscapes to be input into surgical visualization instruments. Percolating all around these activities are developments in robotics and telesurgery. An advanced information infrastructure eventually will permit remote physicians to share video, audio, medical records, and imaging data with local physicians in real time. Surgical robots are likely to be deployed for specific tasks in the operating room (OR) and to support telesurgery applications. Technical developments in robotics and motion control are key components of many virtual reality systems. Since almost all of the virtual reality and enhanced reality systems will be digitally based, they are also capable of being put "on-line" for tele-training, consulting, and even surgery. Advancements in virtual and enhanced reality systems will be driven in part by consumer applications of this technology. Many of the companies that will supply systems for medical applications are also working on commercial products. A big consumer hit can benefit the entire industry by increasing volumes and bringing down costs.(ABSTRACT TRUNCATED AT 400 WORDS)
Can walking motions improve visually induced rotational self-motion illusions in virtual reality?
Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y
2015-02-04
Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.
DOT National Transportation Integrated Search
2015-02-01
Utilizing enhanced visualization in transportation planning and design has gained popularity in the last decade. This work aimed at demonstrating the concept of utilizing a highly immersive, virtual reality simulation engine for creating dynamic, inter...
NASA Astrophysics Data System (ADS)
Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid
2017-10-01
The number of breast cancer patients who require breast biopsy has increased over the past years. Augmented reality guided core biopsy of the breast has become the method of choice for researchers. However, this cancer visualization has been limited to superimposing the 3D imaging data only. In this paper, we introduce an augmented reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. The framework consists of four phases: it first acquires images from CT/MRI and processes the medical images into 3D slices; second, it refines these grayscale 3D slices into a 3D breast tumor model using a 3D model reconstruction technique; next, in visualization processing, the virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom; and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the augmented reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance toward accurate biopsy targets.
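The four-phase pipeline above can be outlined schematically. The Python sketch below is illustrative only, under assumed data formats (a grayscale CT volume as a NumPy array, surface extraction via scikit-image's marching cubes, alpha blending for the X-ray composite); the function names are hypothetical, not the authors' framework:

```python
# Schematic sketch of the acquire -> slice -> reconstruct -> composite pipeline.
import numpy as np
from skimage import measure

def acquire_volume():
    # Phase 1: stand-in for CT/MRI acquisition -- a synthetic volume with a
    # bright spherical "tumor" embedded in darker tissue.
    z, y, x = np.mgrid[:64, :64, :64]
    return 1.0 * ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 10 ** 2)

def reconstruct_tumor_mesh(volume, threshold=0.5):
    # Phases 2-3: isolate the tumor from the grayscale slices and
    # reconstruct a 3D surface model (marching cubes).
    verts, faces, normals, _ = measure.marching_cubes(volume, level=threshold)
    return verts, faces

def xray_composite(camera_frame, rendered_tumor, alpha=0.4):
    # Phase 4: "X-ray vision" -- alpha-blend the rendered tumor model over
    # the live camera image so it appears beneath the skin surface.
    return (1 - alpha) * camera_frame + alpha * rendered_tumor

volume = acquire_volume()
verts, faces = reconstruct_tumor_mesh(volume)
composite = xray_composite(np.random.rand(480, 640), np.zeros((480, 640)))
print(f"tumor mesh: {len(verts)} vertices, {len(faces)} faces")
```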
Sensorimotor enhancement with a mixed reality system for balance and mobility rehabilitation.
Fung, Joyce; Perez, Claire F
2011-01-01
We have developed a mixed reality system incorporating virtual reality (VR), surface perturbations, and light touch for gait rehabilitation. Haptic touch has emerged as a novel and efficient technique to improve postural control and dynamic stability. Our system combines visual display with the manipulation of physical environments and the addition of haptic feedback to enhance balance and mobility post stroke. A research study involving 9 participants with stroke and 9 age-matched healthy individuals showed that the haptic cue provided while walking is an effective means of improving gait stability in people post stroke, especially during challenging environmental conditions such as downslope walking.
Shiri, Shimon; Feintuch, Uri; Lorber-Haddad, Adi; Moreh, Elior; Twito, Dvora; Tuchner-Arieli, Maya; Meiner, Zeev
2012-01-01
To introduce the rationale of a novel virtual reality system based on self-face viewing and mirror visual feedback, and to examine its feasibility as a rehabilitation tool for poststroke patients. A novel motion-capture virtual reality system integrating online self-face viewing and mirror visual feedback has been developed for stroke rehabilitation. The system allows the replacement of the impaired arm by a virtual arm. Upon making small movements of the paretic arm, patients view themselves virtually performing healthy full-range movements. A sample of 6 patients in the acute poststroke phase received the virtual reality treatment concomitantly with conservative rehabilitation treatment. Feasibility was assessed during 10 sessions for each participant. All participants succeeded in operating the system, demonstrating its feasibility in terms of adherence and improvement in task performance. Patients' performance within the virtual environment and a set of clinical-functional measures recorded before the virtual reality treatment, at 1 week, and after 3 months indicated improvement in neurological status and general functioning. These preliminary results indicate that this newly developed virtual reality system is safe and feasible. Future randomized controlled studies are required to assess whether this system has beneficial effects in terms of enhancing upper limb function and quality of life in poststroke patients.
NASA Astrophysics Data System (ADS)
Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko
2007-02-01
In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.
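The software side of the idea, full detail only where central acuity is high, can be illustrated independently of the pixel-shift hardware. Below is a minimal Python sketch assuming a down/upsampled periphery as a stand-in for the low-resolution projection; the crop fraction and downscale factor are assumptions:

```python
# Minimal sketch: keep the central crop sharp, degrade the periphery,
# mimicking high central and low peripheral visual acuity.
import numpy as np

def foveated(image, center_frac=0.3, periphery_downscale=4):
    """Blur the periphery by down/up-sampling; keep the central crop sharp."""
    h, w = image.shape[:2]
    low = image[::periphery_downscale, ::periphery_downscale]
    low = np.repeat(np.repeat(low, periphery_downscale, 0),
                    periphery_downscale, 1)[:h, :w]
    out = low.copy()
    ch, cw = int(h * center_frac), int(w * center_frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    out[y0:y0 + ch, x0:x0 + cw] = image[y0:y0 + ch, x0:x0 + cw]  # sharp fovea
    return out

frame = np.random.rand(480, 640)   # stand-in for a projector frame
display_frame = foveated(frame)
```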
AUVA - Augmented Reality Empowers Visual Analytics to explore Medical Curriculum Data.
Nifakos, Sokratis; Vaitsis, Christos; Zary, Nabil
2015-01-01
Medical curriculum data play a key role in the structure and the organization of medical programs in universities around the world. The effective processing and usage of these data may improve the educational environment of medical students. As a consequence, the new generation of health professionals would have improved skills compared with previous ones. This study introduces the process of enhancing curriculum data through the use of augmented reality technology as a management and presentation tool. The final goal is to enrich the information presented through a visual analytics approach applied to medical curriculum data while keeping the complexity of understanding these data low.
Telemedicine with mobile devices and augmented reality for early postoperative care.
Ponce, Brent A; Brabston, Eugene W; Shin Zu; Watson, Shawna L; Baker, Dustin; Winn, Dennis; Guthrie, Barton L; Shenai, Mahesh B
2016-08-01
Advanced features are being added to telemedicine paradigms to enhance usability and usefulness. Virtual Interactive Presence (VIP) is a technology that allows a surgeon and patient to interact in a "merged reality" space, facilitating verbal, visual, and manual interaction. In this clinical study, a mobile VIP iOS application was introduced into routine post-operative orthopedic and neurosurgical care. Survey responses endorse the usefulness of this tool: the virtual interaction provides needed follow-up in instances where in-person follow-up may be limited, and enhances the subjective patient experience.
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Cuevas-Esteban, Jorge; Cambra-Martí, Maria Rosa; Ochoa, Susana; Brébion, Gildas
2017-09-01
Previous research suggests that visual hallucinations in schizophrenia consist of mental images mistaken for percepts due to failure of the reality-monitoring processes. However, the neural substrates that underpin such dysfunction are currently unknown. We conducted a brain imaging study to investigate the role of visual mental imagery in visual hallucinations. Twenty-three patients with schizophrenia and 26 healthy participants were administered a reality-monitoring task whilst undergoing an fMRI protocol. At the encoding phase, a mixture of pictures of common items and labels designating common items were presented. On the memory test, participants were requested to remember whether a picture of the item had been presented or merely its label. Visual hallucination scores were associated with a liberal response bias reflecting propensity to erroneously remember pictures of the items that had in fact been presented as words. At encoding, patients with visual hallucinations differentially activated the right fusiform gyrus when processing the words they later remembered as pictures, which suggests the formation of visual mental images. On the memory test, the whole patient group activated the anterior cingulate and medial superior frontal gyrus when falsely remembering pictures. However, no differential activation was observed in patients with visual hallucinations, whereas in the healthy sample, the production of visual mental images at encoding led to greater activation of a fronto-parietal decisional network on the memory test. Visual hallucinations are associated with enhanced visual imagery and possibly with a failure of the reality-monitoring processes that enable discrimination between imagined and perceived events. Copyright © 2017 Elsevier Ltd. All rights reserved.
An augmented-reality edge enhancement application for Google Glass.
Hwang, Alex D; Peli, Eli
2014-08-01
Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to Glass wearers. The enhanced central vision can be naturally integrated with scanning. Google Glass's camera lens distortions were corrected by image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass's see-through virtual display. All image processing was implemented to achieve near real-time performance. The impact of the contrast enhancements was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance, which the authors assume accounts for why improvements were observed only in the diffuser-film condition (simulating low vision). Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
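The edge-overlay idea can be sketched in a few lines. This is a minimal illustration, not the authors' Glass code; the Sobel operator, threshold, and gain are assumptions:

```python
# Minimal sketch: extract edges from the camera frame and superimpose
# them, brightened, on the see-through view to boost contrast.
import numpy as np
from scipy import ndimage

def edge_overlay(gray_frame, gain=1.5, threshold=0.2):
    gx = ndimage.sobel(gray_frame, axis=1)
    gy = ndimage.sobel(gray_frame, axis=0)
    edges = np.hypot(gx, gy)
    edges = edges / (edges.max() + 1e-9)          # normalize edge strength
    overlay = np.where(edges > threshold, 1.0, 0.0)  # bright edge pixels
    return np.clip(gray_frame + gain * overlay, 0.0, 1.0)

frame = np.random.rand(360, 640)   # stand-in for a grayscale camera frame
enhanced = edge_overlay(frame)
```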
Wagner, A; Ploder, O; Enislidis, G; Truppe, M; Ewers, R
1996-04-01
Interventional video tomography (IVT), a new imaging modality, achieves virtual visualization of anatomic structures in three dimensions for intraoperative stereotactic navigation. Partial immersion into a virtual data space, which is orthotopically coregistered to the surgical field, enhances, by means of a see-through head-mounted display (HMD), the surgeon's visual perception and technique by providing visual access to nonvisual data of anatomy, physiology, and function. The presented cases document the potential of augmented reality environments in maxillofacial surgery.
Grewe, P; Lahr, D; Kohsik, A; Dyck, E; Markowitsch, H J; Bien, C G; Botsch, M; Piefke, M
2014-02-01
Ecological assessment and training of real-life cognitive functions such as visual-spatial abilities in patients with epilepsy remain challenging. Some studies have applied virtual reality (VR) paradigms, but external validity of VR programs has not sufficiently been proven. Patients with focal epilepsy (EG, n=14) accomplished an 8-day program in a VR supermarket, which consisted of learning and buying items on a shopping list. Performance of the EG was compared with that of healthy controls (HCG, n=19). A comprehensive neuropsychological examination was administered. Real-life performance was investigated in a real supermarket. Learning in the VR supermarket was significantly impaired in the EG on different VR measures. Delayed free recall of products did not differ between the EG and the HCG. Virtual reality scores were correlated with neuropsychological measures of visual-spatial cognition, subjective estimates of memory, and performance in the real supermarket. The data indicate that our VR approach allows for the assessment of real-life visual-spatial memory and cognition in patients with focal epilepsy. The multimodal, active, and complex VR paradigm may particularly enhance visual-spatial cognitive resources. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Visual error augmentation enhances learning in three dimensions.
Sharp, Ian; Huang, Felix; Patton, James
2011-09-02
Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal: rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm maximum perpendicular trajectory deviation) compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions and smaller errors for this group. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning than regular training. Lastly, upon removal of the reversal, all subjects returned to baseline within 6 trials.
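The error-augmentation manipulation, magnifying the visually displayed deviation from the ideal straight-line path, can be sketched as follows. This is a minimal illustration; the gain value and the vector decomposition are assumptions, not the study's exact transform:

```python
# Minimal sketch: the displayed cursor keeps the on-path component of the
# hand position but scales the off-path (error) component by a gain.
import numpy as np

def augment_error(hand_pos, start, target, gain=2.0):
    """Display position = hand position with its off-path error magnified."""
    d = target - start
    d = d / np.linalg.norm(d)
    rel = hand_pos - start
    on_path = np.dot(rel, d) * d      # component along the ideal path
    off_path = rel - on_path          # deviation to be visually augmented
    return start + on_path + gain * off_path

shown = augment_error(np.array([0.3, 0.1, 0.0]),
                      np.zeros(3), np.array([1.0, 0.0, 0.0]))
```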
Enhanced Reality Visualization in a Surgical Environment.
1995-01-13
[Report fragment: camera calibration can improve accuracy, perhaps by a significant amount; cited calibration methods include Brown (1965), Lenz and Tsai (1988) (IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(5):713-720, September 1988), and Maybank and Faugeras (1992).]
Recent Development of Augmented Reality in Surgery: A Review.
Vávra, P; Roman, J; Zonča, P; Ihnát, P; Němec, M; Kumar, J; Habib, N; El-Gendi, A
2017-01-01
The development of augmented reality devices allows physicians to incorporate data visualization into diagnostic and treatment procedures to improve work efficiency, safety, and cost and to enhance surgical training. However, awareness of the possibilities of augmented reality is generally low. This review evaluates whether augmented reality can presently improve the results of surgical procedures. We performed a review of available literature dating from 2010 to November 2016 by searching PubMed and Scopus using the terms "augmented reality" and "surgery." The initial search yielded 808 studies. After removing duplicates and including only journal articles, a total of 417 studies were identified. By reading the abstracts, 91 relevant studies were chosen for inclusion; 11 further references were gathered by cross-referencing. A total of 102 studies were included in this review. The present literature suggests an increasing interest of surgeons in employing augmented reality in surgery, leading to improved safety and efficacy of surgical procedures. Many studies showed that the performance of newly devised augmented reality systems is comparable to traditional techniques. However, several problems need to be addressed before augmented reality is implemented into routine practice.
DVV: a taxonomy for mixed reality visualization in image guided surgery.
Kersten-Oertel, Marta; Jannin, Pierre; Collins, D Louis
2012-02-01
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Park, Min-Chul
2014-06-01
3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been conducted to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to present 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention, involuntarily motivated by the affective mechanism, can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as black-and-white oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates, takes users only a few minutes to learn to control the BCI system, and requires few electrodes to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
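SSVEP-based control rests on a simple signal property: attending a stimulus flickering at frequency f raises the EEG spectral amplitude at f. A minimal detection sketch follows (illustrative, not the authors' pipeline; the stimulation frequencies, window, and scoring are assumptions):

```python
# Minimal sketch: pick the stimulation frequency whose amplitude in the
# EEG spectrum is largest; that frequency's stimulus is the attended one.
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs=(8.0, 10.0, 12.0)):
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    scores = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]

fs = 256
t = np.arange(fs * 4) / fs                       # 4-second synthetic trial
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(t.size)
print(classify_ssvep(eeg, fs))                   # -> 10.0
```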
Visual field examination method using virtual reality glasses compared with the Humphrey perimeter.
Tsapakis, Stylianos; Papaconstantinou, Dimitrios; Diagourtas, Andreas; Droutsas, Konstantinos; Andreanos, Konstantinos; Moschos, Marilita M; Brouzas, Dimitrios
2017-01-01
To present a visual field examination method using virtual reality glasses and evaluate its reliability by comparing the results with those of the Humphrey perimeter. Virtual reality glasses, a smartphone with a 6-inch display, and software that implements a fast-threshold 3 dB step staircase algorithm for the central 24° of the visual field (52 points) were used to test 20 eyes of 10 patients, who were tested in a random and consecutive order as they appeared in our glaucoma department. The results were compared with those obtained from the same patients using the Humphrey perimeter. A high correlation coefficient (r = 0.808, P < 0.0001) was found between the virtual reality visual field test and the Humphrey perimeter visual field. Visual field examination results using virtual reality glasses correlate highly with the Humphrey perimeter, suggesting the method may be suitable for clinical use.
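A 3 dB step staircase of the kind described can be sketched for a single test location. This is a minimal illustration using the Humphrey convention that higher dB means a dimmer (more attenuated) stimulus; the starting level and reversal criterion are assumptions, and the app's actual algorithm may differ:

```python
# Minimal sketch of a 3 dB staircase: dim the stimulus while it is seen,
# brighten it when missed, and stop after a set number of reversals.
def staircase_threshold(sees, start_db=25, step_db=3, max_reversals=2):
    """`sees(level_db)` returns True if the patient reports the stimulus.
    Returns the estimated threshold in dB after `max_reversals` reversals."""
    level, reversals = start_db, 0
    last_seen = sees(level)
    while reversals < max_reversals:
        level += step_db if last_seen else -step_db  # dimmer if seen, brighter if missed
        seen = sees(level)
        if seen != last_seen:
            reversals += 1
        last_seen = seen
    return level

# Simulated patient who can see stimuli attenuated up to 28 dB:
print(staircase_threshold(lambda db: db <= 28))   # -> 28
```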
Virtual reality for health care: a survey.
Moline, J
1997-01-01
This report surveys the state of the art in applications of virtual environments and related technologies for health care. Applications of these technologies are being developed for health care in the following areas: surgical procedures (remote surgery or telepresence, augmented or enhanced surgery, and planning and simulation of procedures before surgery); medical therapy; preventive medicine and patient education; medical education and training; visualization of massive medical databases; skill enhancement and rehabilitation; and architectural design for health-care facilities. To date, such applications have improved the quality of health care, and in the future they will result in substantial cost savings. Tools that respond to the needs of present virtual environment systems are being refined or developed. However, additional large-scale research is necessary in the following areas: user studies, use of robots for telepresence procedures, enhanced system reality, and improved system functionality.
Parsons, Thomas D.
2015-01-01
An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869
Hybrid Reality Lab Capabilities - Video 2
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2016-01-01
Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, during teleoperation of remote assets (arms, rovers, robots, etc.), and in other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), the information used to create the virtual scenes can be old (i.e., visualized long after physical objects moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or a see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g., camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, by creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, individuals begin to interact with all the objects in the scene as if they were real-life objects. The ability to physically touch and interact with digitally created objects that have the same shape, size, and location as their physical counterparts in the virtual reality environment can be a game changer when it comes to training, planning, engineering analysis, science, entertainment, etc. Our project is developing such capabilities for various types of environments. The video outlined with this abstract is a representation of an ISS Hybrid Reality experience. In the video you can see various Hybrid Reality elements that provide immersion beyond standard Virtual Reality or Augmented Reality.
Zenner, André; Krüger, Antonio
2017-04-01
We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics and physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. We then investigate, in two experiments, how Shifty can enhance the user's perception of the virtual objects being interacted with by automatically changing its internal weight distribution. In the first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty was shown to increase the user's fun and perceived realism significantly, compared to an equivalent passive haptic proxy. In the second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight and thus the perceived realism by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for the visual-haptic mismatch perceived during the shifting process.
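The weight-shifting concept can be sketched as a simple control mapping. This is a hypothetical illustration, not Shifty's firmware; the linear length-to-offset mapping and the travel range are assumptions:

```python
# Minimal sketch: map a virtual object's length to the internal weight's
# offset along the proxy, so perceived inertia tracks the virtual object.
def mass_position(virtual_length, min_len=0.2, max_len=1.0, travel=0.25):
    """Return the internal weight's offset (meters) from the handle;
    longer virtual objects move the weight farther out."""
    frac = (virtual_length - min_len) / (max_len - min_len)
    return max(0.0, min(1.0, frac)) * travel

for length in (0.2, 0.6, 1.0):
    print(length, "->", round(mass_position(length), 3), "m")
```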
Illustrative visualization of 3D city models
NASA Astrophysics Data System (ADS)
Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian
2005-03-01
This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
Immersive Earth Science: Data Visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Skolnik, S.; Ramirez-Linan, R.
2017-12-01
Utilizing next generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and mission support through VR visualizations that display temporally aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of the geo-located data in VR and the subsequent display of changeable basemaps, overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.
Enhancing a Multi-body Mechanism with Learning-Aided Cues in an Augmented Reality Environment
NASA Astrophysics Data System (ADS)
Singh Sidhu, Manjit
2013-06-01
Augmented Reality (AR) is a potential area of research for education, covering issues such as tracking and calibration and realistic rendering of virtual objects. The ability to augment the real world with virtual information has opened the possibility of using AR technology in areas such as education and training as well. In the domain of Computer Aided Learning (CAL), researchers have long been looking into enhancing the effectiveness of the teaching and learning process by providing cues that could assist learners to better comprehend the materials presented. Although a number of studies have examined the effectiveness of learning-aided cues, none has addressed this issue for AR-based learning solutions. This paper discusses the design and model of AR-based software that uses visual cues to enhance the learning process, and presents results on learners' perception of the cues.
Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality
NASA Astrophysics Data System (ADS)
Cherukuru, Nihanth Wagmi
Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal measurement capabilities. While early lidar applications relied on the radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, multiple works on lidars have explored three primary methods of retrieving wind vectors: using the homogeneous wind-field assumption, computationally extensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single- and dual-Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated with data from a cup-and-vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented to visualize data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors like Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields as well as a few earth science datasets for education and outreach activities.
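At a single analysis point, the multiple-lidar retrieval mentioned above reduces to a small least-squares problem: each lidar observes only the projection of the horizontal wind onto its beam. A minimal Python sketch of this textbook dual-Doppler formulation follows (not the thesis code; the azimuth convention is an assumption):

```python
# Minimal sketch: recover (u, v) from radial velocities measured along
# two or more beam directions, vr = u*sin(az) + v*cos(az).
import numpy as np

def retrieve_uv(azimuths_deg, radial_velocities):
    az = np.radians(azimuths_deg)
    A = np.column_stack([np.sin(az), np.cos(az)])   # beam unit vectors (east, north)
    uv, *_ = np.linalg.lstsq(A, np.asarray(radial_velocities), rcond=None)
    return uv                                       # (u, v) in m/s

true_u, true_v = 5.0, -2.0
az = [30.0, 120.0]
vr = [true_u * np.sin(np.radians(a)) + true_v * np.cos(np.radians(a)) for a in az]
print(retrieve_uv(az, vr))                          # ~ [ 5. -2.]
```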
NASA Astrophysics Data System (ADS)
Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng
2010-08-01
In recent years, Augmented Reality (AR)[1][2][3] has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment, and the arts. AR can enhance the display output of a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering, and remote robot control. AR has many advantages over VR. The system developed here combines sensors, software, and imaging algorithms to give users a sense that the augmented content is real, actual, and present. The imaging algorithms include a gray-level method, an image binarization method, and a white-balance method, in order to achieve accurate image recognition and overcome the effects of lighting.
Learning Protein Structure with Peers in an AR-Enhanced Learning Environment
ERIC Educational Resources Information Center
Chen, Yu-Chien
2013-01-01
Augmented reality (AR) is an interactive system that allows users to interact with virtual objects and the real world at the same time. The purpose of this dissertation was to explore how AR, as a new visualization tool, that can demonstrate spatial relationships by representing three dimensional objects and animations, facilitates students to…
Enhancing Tele-robotics with Immersive Virtual Reality
2017-11-03
graduate and undergraduate students within the Digital Gaming and Simulation, Computer Science, and psychology programs have actively collaborated...investigates the use of artificial intelligence and visual computing. Numerous fields across the human-computer interaction and gaming research areas...invested in digital gaming and simulation to cognitively stimulate humans by computers, forming a $10.5B industry [1]. On the other hand, cognitive
Visualizing Compound Rotations with Virtual Reality
ERIC Educational Resources Information Center
Flanders, Megan; Kavanagh, Richard C.
2013-01-01
Mental rotations are among the most difficult of all spatial tasks to perform, and even those with high levels of spatial ability can struggle to visualize the result of compound rotations. This pilot study investigates the use of the virtual reality-based Rotation Tool, created using the Virtual Reality Modeling Language (VRML) together with…
Cooper, Natalia; Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg
2018-01-01
Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as ‘presence’, when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user’s overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience. PMID:29390023
ERIC Educational Resources Information Center
Massey, Carissa
2017-01-01
Through an examination of the visual rhetoric of identity presented by reality shows, especially "Here Comes Honey Boo Boo," this paper explores ways in which American reality television and related media images construct, deploy, and reiterate visual stereotypes about whites from rural regions of the United States. Its focus is the…
Are Spatial Visualization Abilities Relevant to Virtual Reality?
ERIC Educational Resources Information Center
Chen, Chwen Jen
2006-01-01
This study aims to investigate the effects of virtual reality (VR)-based learning environment on learners of different spatial visualization abilities. The findings of the aptitude-by-treatment interaction study have shown that learners benefit most from the Guided VR mode, irrespective of their spatial visualization abilities. This indicates that…
NASA Astrophysics Data System (ADS)
Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.; Hamann, B.; Kellogg, L. H.; Kreylos, O.; Kronenberger, M.; Spero, H. J.; Streletz, G. J.; Weber, C.
2015-12-01
Geologic problems and datasets are often 3D or 4D in nature, yet are projected onto a 2D surface such as a piece of paper or a projection screen. Reducing the dimensionality of data forces the reader to "fill in" the collapsed dimension in their mind, creating a cognitive challenge, especially for new learners. Scientists and students can visualize and manipulate 3D datasets using the virtual reality software developed for the immersive, real-time interactive 3D environment at the KeckCAVES at UC Davis. The 3DVisualizer software (Billen et al., 2008) can also operate on a desktop machine to produce interactive 3D maps of earthquake epicenter locations and 3D bathymetric maps of the seafloor. With 3D projections of seafloor bathymetry and ocean circulation proxy datasets in a virtual reality environment, we can create visualizations of carbon isotope (δ13C) records for academic research and to aid in demonstrating thermohaline circulation in the classroom. Additionally, 3D visualization of seafloor bathymetry allows students to see features of the seafloor that most people cannot observe first-hand. To enhance lessons on mid-ocean ridges and ocean basin genesis, we have created movies of seafloor bathymetry for a large-enrollment undergraduate-level class, Introduction to Oceanography. In the past four quarters, students have enjoyed watching 3D movies, and in the fall quarter (2015) we will assess how well 3D movies enhance learning. The class will be split into two groups: one will learn about the Mid-Atlantic Ridge from diagrams and lecture, and the other with a supplemental 3D visualization. Both groups will be asked "what does the seafloor look like?" before and after the Mid-Atlantic Ridge lesson. Then the whole class will watch the 3D movie and respond to an additional question, "did the 3D visualization enhance your understanding of the Mid-Atlantic Ridge?", with the opportunity to further elaborate on the effectiveness of the visualization.
NASA Astrophysics Data System (ADS)
Demir, I.
2015-12-01
Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting teaching and learning of concepts in the atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow accessing and visualizing large-scale data on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts, providing a visually striking platform with realistic terrain, weather information, and water simulation. The web-based simulation system provides an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users.
Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion
Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo
2017-01-01
In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145
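The jitter-versus-latency balance described above can be illustrated with a one-dimensional complementary filter. This is a deliberately simplified sketch, not the paper's filter framework; the gains and the motion threshold are assumptions:

```python
# Minimal sketch: integrate high-rate IMU data for smooth prediction, then
# pull the estimate toward each (slower) visual pose with an adaptive gain.
class ComplementaryPose:
    def __init__(self, alpha_slow=0.05, alpha_fast=0.3, motion_thresh=0.5):
        self.x = 0.0                      # fused pose (one axis, for clarity)
        self.v = 0.0                      # velocity integrated from the IMU
        self.alpha_slow, self.alpha_fast = alpha_slow, alpha_fast
        self.motion_thresh = motion_thresh

    def imu_update(self, accel, dt):
        self.v += accel * dt
        self.x += self.v * dt             # high-rate, low-latency prediction

    def visual_update(self, x_vis, accel_mag):
        # Adaptive gain: trust vision more during fast motion (where the
        # integrated IMU drifts quickly), less when nearly static (less jitter).
        a = self.alpha_fast if accel_mag > self.motion_thresh else self.alpha_slow
        self.x = (1 - a) * self.x + a * x_vis

est = ComplementaryPose()
est.imu_update(accel=0.1, dt=0.005)               # e.g. 200 Hz IMU samples
est.visual_update(x_vis=0.02, accel_mag=0.1)      # e.g. 30 Hz visual poses
```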
Belle2VR: A Virtual-Reality Visualization of Subatomic Particle Physics in the Belle II Experiment.
Duer, Zach; Piilonen, Leo; Glasson, George
2018-05-01
Belle2VR is an interactive virtual-reality visualization of subatomic particle physics, designed by an interdisciplinary team as an educational tool for learning about and exploring subatomic particle collisions. This article describes the tool, discusses visualization design decisions, and outlines our process for collaborative development.
An acceptance model for smart glasses based tourism augmented reality
NASA Astrophysics Data System (ADS)
Obeidy, Waqas Khalid; Arshad, Haslina; Huang, Jiung Yao
2017-10-01
Recent mobile technologies have revolutionized the way people experience their environment. Although there is only limited research on users' acceptance of AR in the cultural tourism context, previous researchers have explored the opportunities of using augmented reality (AR) to enhance user experience. Recent AR research lacks work that integrates dimensions specific to cultural tourism and to the smart-glass context. Hence, this paper presents an AR acceptance model to understand AR usage behavior and visiting intention for tourists who use smart glass-based AR at UNESCO cultural heritage destinations in Malaysia. The model identifies information quality, technology readiness, visual appeal, and facilitating conditions as external variables and key factors influencing visitors' beliefs, attitudes, and usage intention.
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
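The adaptive execution module described above can be pictured as a small policy that switches between the full visual-inertial pipeline and the cheaper optical-flow tracker. The sketch below is an assumed simplification in Python; the thresholds and function names are illustrative and not taken from the paper.

import numpy as np

def mean_flow_magnitude(flow):
    """Average optical-flow displacement (pixels) between two frames."""
    return float(np.linalg.norm(flow, axis=-1).mean())

def select_tracker(flow, cpu_load, flow_thresh=2.5, load_thresh=0.8):
    """Pick the cheap optical-flow VO when the scene moves little and the
    device is busy; fall back to full visual-inertial odometry otherwise."""
    if mean_flow_magnitude(flow) < flow_thresh and cpu_load > load_thresh:
        return "optical_flow_vo"        # fast, approximate pose update
    return "visual_inertial_odometry"   # accurate, keyframe-based update

flow = np.random.normal(0, 1.0, (480, 640, 2))  # toy flow field
print(select_tracker(flow, cpu_load=0.9))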
Kim, Hyungil; Gabbard, Joseph L; Anon, Alexandre Miranda; Misu, Teruhisa
2018-04-01
This article investigates the effects of visual warning presentation methods on human performance in augmented reality (AR) driving. An experimental user study was conducted in a parking lot where participants drove a test vehicle while braking for any cross traffic with assistance from AR visual warnings presented on a monoscopic and volumetric head-up display (HUD). Results showed that monoscopic displays can be as effective as volumetric displays for human performance in AR braking tasks. The experiment also demonstrated the benefits of conformal graphics, which are tightly integrated into the real world, such as their ability to guide drivers' attention and their positive consequences on driver behavior and performance. These findings suggest that conformal graphics presented via monoscopic HUDs can enhance driver performance by leveraging the effectiveness of monocular depth cues. The proposed approaches and methods can be used and further developed by future researchers and practitioners to better understand driver performance in AR as well as inform usability evaluation of future automotive AR applications.
[Parallel virtual reality visualization of extremely large medical datasets].
Tang, Min
2010-04-01
On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with hospital intranets and commonly configured computers. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing, and virtual reality visualization. The Maximum Intensity Projection (MIP) algorithm is implemented in parallel on a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated, and cut interactively and conveniently through a control panel built with the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results, allowing it to play the role of a good assistant in making clinical diagnoses.
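Maximum Intensity Projection parallelizes naturally: split the volume into slabs, compute per-slab maxima on separate workers, and reduce the partial projections with an element-wise maximum. A minimal single-machine sketch using Python's multiprocessing as a stand-in for the paper's PC cluster:

import numpy as np
from multiprocessing import Pool

def slab_mip(slab):
    """Maximum Intensity Projection of one slab along the viewing axis."""
    return slab.max(axis=0)

if __name__ == "__main__":
    volume = np.random.rand(256, 512, 512).astype(np.float32)  # toy CT volume
    slabs = np.array_split(volume, 8, axis=0)   # one slab per worker
    with Pool(processes=8) as pool:
        partials = pool.map(slab_mip, slabs)
    # MIP is associative, so the partial projections reduce elementwise.
    mip = np.maximum.reduce(partials)
    print(mip.shape)                            # (512, 512)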
On the use of Augmented Reality techniques in learning and interpretation of cardiologic data.
Lamounier, Edgard; Bucioli, Arthur; Cardoso, Alexandre; Andrade, Adriano; Soares, Alcimar
2010-01-01
Augmented Reality is a technology which provides people with more intuitive ways of interaction and visualization, close to those in the real world. The number of applications using Augmented Reality is growing every day, and results can already be seen in several fields such as Education, Training, Entertainment and Medicine. The system proposed in this article intends to provide a friendly and intuitive interface based on Augmented Reality for heart beat evaluation and visualization. Cardiologic data is loaded from several distinct sources: simple standards of heart beat frequencies (for example, situations like running or sleeping), files of heart beat signals, scanned electrocardiographs, and real-time data acquisition of the patient's heart beat. All this data is processed to produce visualizations within Augmented Reality environments. The results obtained in this research have shown that the developed system is able to simplify the understanding of concepts about the heart beat and its functioning. Furthermore, the system can help health professionals in the task of retrieving, processing and converting data from all the sources handled by the system, with the support of an editing and visualization mode.
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Gerber, Stephan M; Jeitziner, Marie-Madlen; Wyss, Patric; Chesham, Alvin; Urwyler, Prabitha; Müri, René M; Jakob, Stephan M; Nef, Tobias
2017-10-16
After a prolonged stay in an intensive care unit (ICU), patients often complain about cognitive impairments that affect health-related quality of life after discharge. The aim of this proof-of-concept study was to test the feasibility and effects of controlled visual and acoustic stimulation in a virtual reality (VR) setup in the ICU. The VR setup consisted of a head-mounted display in combination with an eye tracker and sensors to assess vital signs. The stimulation consisted of videos featuring natural scenes and was tested in 37 healthy participants in the ICU. The VR stimulation led to a reduction of heart rate (p = 0.049) and blood pressure (p = 0.044). The fixation/saccade ratio (p < 0.001) was increased when a visual target was presented superimposed on the videos (reduced search activity), reflecting enhanced visual processing. Overall, the VR stimulation had a relaxing effect, as shown in vital markers of physical stress, and participants explored less when attending to the target. Our study indicates that VR stimulation in ICU settings is feasible and beneficial for critically ill patients.
Fused Reality for Enhanced Flight Test Capabilities
NASA Technical Reports Server (NTRS)
Bachelder, Ed; Klyde, David
2011-01-01
The feasibility of using Fused Reality-based simulation technology to enhance flight test capabilities has been investigated. In terms of relevancy to piloted evaluation, there remains no substitute for actual flight tests, even when considering the fidelity and effectiveness of modern ground-based simulators. In addition to real-world cueing (vestibular, visual, aural, environmental, etc.), flight tests provide subtle but key intangibles that cannot be duplicated in a ground-based simulator. There is, however, a cost to be paid for the benefits of flight in terms of budget, mission complexity, and safety, including the need for ground and control-room personnel, additional aircraft, etc. A Fused Reality™ (FR) Flight system was developed that allows a virtual environment to be integrated with the test aircraft so that tasks such as aerial refueling, formation flying, or approach and landing can be accomplished without additional aircraft resources or the risk of operating in close proximity to the ground or other aircraft. Furthermore, the dynamic motions of the simulated objects can be directly correlated with the responses of the test aircraft. The FR Flight system will allow real-time observation of, and manual interaction with, the cockpit environment that serves as a frame for the virtual out-the-window scene.
Exploratory Application of Augmented Reality/Mixed Reality Devices for Acute Care Procedure Training
Kobayashi, Leo; Zhang, Xiao Chi; Collins, Scott A.; Karim, Naz; Merck, Derek L.
2018-01-01
Introduction: Augmented reality (AR), mixed reality (MR), and virtual reality devices are enabling technologies that may facilitate effective communication in healthcare between those with information and knowledge (clinician/specialist; expert; educator) and those seeking understanding and insight (patient/family; non-expert; learner). Investigators initiated an exploratory program to enable the study of AR/MR use-cases in acute care clinical and instructional settings. Methods: Academic clinician educators, computer scientists, and diagnostic imaging specialists conducted a proof-of-concept project to 1) implement a core holoimaging pipeline infrastructure and open-access repository at the study institution, and 2) use novel AR/MR techniques on off-the-shelf devices with holoimages generated by the infrastructure to demonstrate their potential role in the instructive communication of complex medical information. Results: The study team successfully developed a medical holoimaging infrastructure methodology to identify, retrieve, and manipulate real patients' de-identified computed tomography and magnetic resonance imagesets for rendering, packaging, transfer, and display of modular holoimages onto AR/MR headset devices and connected displays. Holoimages containing key segmentations of cervical and thoracic anatomic structures and pathology were overlaid and registered onto physical task trainers for simulation-based "blind insertion" invasive procedural training. During the session, learners experienced and used task-relevant anatomic holoimages for central venous catheter and tube thoracostomy insertion training with enhanced visual cues and haptic feedback. Direct instructor access into the learner's AR/MR headset view of the task trainer was achieved for visual-axis interactive instructional guidance. Conclusion: Investigators implemented a core holoimaging pipeline infrastructure and modular open-access repository to generate and enable access to modular holoimages during exploratory pilot stage applications for invasive procedure training that featured innovative AR/MR techniques on off-the-shelf headset devices. PMID:29383074
McCurdy, Neil J.; Griswold, William G; Lenert, Leslie A.
2005-01-01
The first moments at a disaster scene are chaotic. The command center initially operates with little knowledge of hazards, geography, and casualties, building up knowledge of the event slowly as information trickles in over voice radio channels. RealityFlythrough is a tele-presence system that stitches together live video feeds in real time, using the principle of visual closure, to give command center personnel the illusion of being able to explore the scene interactively by moving smoothly between the video feeds. Using RealityFlythrough, medical, fire, law enforcement, hazardous materials, and engineering experts may be able to achieve situational awareness earlier and better manage scarce resources. The RealityFlythrough system is composed of camera units with off-the-shelf GPS and orientation systems and a server/viewing station that offers access to images collected by the camera units in real time, indexed by position/orientation. In initial field testing using an experimental mesh 802.11 wireless network, two camera unit operators were able to create an interactive image of a simulated disaster scene in about five minutes. PMID:16779092
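At its core, moving smoothly between feeds requires ranking camera units by how closely their reported position and orientation match the desired virtual viewpoint. The toy scoring function below is a hypothetical illustration (the weights and field names are invented), not the RealityFlythrough implementation:

import math

def view_score(cam, target, w_pos=1.0, w_ang=0.5):
    """Lower is better: distance to the desired viewpoint plus a penalty
    for pointing away from the desired heading (degrees)."""
    dx, dy = cam["x"] - target["x"], cam["y"] - target["y"]
    ang = abs((cam["heading"] - target["heading"] + 180) % 360 - 180)
    return w_pos * math.hypot(dx, dy) + w_ang * ang

cameras = [
    {"id": 1, "x": 0.0, "y": 0.0, "heading": 90.0},
    {"id": 2, "x": 5.0, "y": 1.0, "heading": 45.0},
]
target = {"x": 4.0, "y": 0.0, "heading": 60.0}
best = min(cameras, key=lambda c: view_score(c, target))
print("switch to feed:", best["id"])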
Enhancements to VTK enabling Scientific Visualization in Immersive Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Leary, Patrick; Jhaveri, Sankhesh; Chaudhary, Aashish
Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis provides insight into this data, scientific visualization is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has only been attempted to varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications.
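For reference, current VTK builds expose the OpenVR combination mentioned above as drop-in replacements for the standard renderer, window, and interactor classes. A minimal sketch, assuming a VTK build with the RenderingOpenVR module enabled and a running SteamVR headset:

# Assumes a VTK build with the RenderingOpenVR module and a SteamVR headset.
import vtkmodules.vtkRenderingOpenGL2  # noqa: F401 (registers the OpenGL backend)
from vtkmodules.vtkFiltersSources import vtkConeSource
from vtkmodules.vtkRenderingCore import vtkActor, vtkPolyDataMapper
from vtkmodules.vtkRenderingOpenVR import (
    vtkOpenVRCamera,
    vtkOpenVRRenderer,
    vtkOpenVRRenderWindow,
    vtkOpenVRRenderWindowInteractor,
)

# Ordinary VTK pipeline: source -> mapper -> actor.
cone = vtkConeSource()
mapper = vtkPolyDataMapper()
mapper.SetInputConnection(cone.GetOutputPort())
actor = vtkActor()
actor.SetMapper(mapper)

# Swap in the OpenVR renderer/window/interactor; the pipeline is unchanged.
renderer = vtkOpenVRRenderer()
renderer.AddActor(actor)
renderer.SetActiveCamera(vtkOpenVRCamera())
window = vtkOpenVRRenderWindow()
window.AddRenderer(renderer)
interactor = vtkOpenVRRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()  # head-tracked render loop with controller interaction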
Visual hallucinations in schizophrenia: confusion between imagination and perception.
Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S
2008-05-01
An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.
Aharon, S; Robb, R A
1997-01-01
Virtual reality environments provide highly interactive, natural control of the visualization process, significantly enhancing the scientific value of the data produced by medical imaging systems. Due to the computational and real time display update requirements of virtual reality interfaces, however, the complexity of organ and tissue surfaces which can be displayed is limited. In this paper, we present a new algorithm for the production of a polygonal surface containing a pre-specified number of polygons from patient or subject specific volumetric image data. The advantage of this new algorithm is that it effectively tiles complex structures with a specified number of polygons selected to optimize the trade-off between surface detail and real-time display rates.
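The trade-off described, surface detail versus real-time display rate, is exactly what stock mesh decimation filters expose as a target polygon budget. As a rough stand-in for the paper's own algorithm, VTK's vtkDecimatePro can reduce a surface to a pre-specified polygon count:

from vtkmodules.vtkFiltersCore import vtkDecimatePro
from vtkmodules.vtkFiltersSources import vtkSphereSource

# A finely tessellated sphere stands in for an organ surface extracted
# from volumetric image data.
source = vtkSphereSource()
source.SetThetaResolution(200)
source.SetPhiResolution(200)
source.Update()
surface = source.GetOutput()

target_polys = 5000  # the pre-specified polygon budget
reduction = 1.0 - target_polys / surface.GetNumberOfPolys()

decimate = vtkDecimatePro()
decimate.SetInputData(surface)
decimate.SetTargetReduction(reduction)  # fraction of polygons to remove
decimate.PreserveTopologyOn()           # keep a valid, watertight surface
decimate.Update()
print(decimate.GetOutput().GetNumberOfPolys(), "polygons after decimation")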
1992-03-01
…the Services, or "What Are the Research Issues in the Use of Virtual Reality in Training?" … Visual Communication in Multi-Media Virtual Realities … This basic research project in visual communication examines how visual knowledge should be structured to take full advantage of advanced computer… theoretical framework to begin to analyze the comparative strengths of speech communication versus visual communication in the exchange of shared mental…
Curriculum: Managed Visual Reality.
ERIC Educational Resources Information Center
Gueulette, David G.
This paper explores the association between the symbolized and the actualized, beginning with the prehistoric notion of a "reality double," in which no practical difference exists between pictorial representations, visual symbols, and real-life events and situations. Alchemists of the Middle Ages, with their paradoxical vision of the…
Head-mounted spatial instruments II: Synthetic reality or impossible dream
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Grunwald, Arthur
1989-01-01
A spatial instrument is defined as a spatial display which has been either geometrically or symbolically enhanced to enable a user to accomplish a particular task. Research conducted over the past several years on 3-D spatial instruments has shown that perspective displays, even when viewed from the correct viewpoint, are subject to systematic viewer biases. These biases interfere with correct spatial judgements of the presented pictorial information. The design of spatial instruments may not only require the introduction of compensatory distortions to remove the naturally occurring biases but also may significantly benefit from the introduction of artificial distortions which enhance performance. However, these image manipulations can cause a loss of visual-vestibular coordination and induce motion sickness. Consequently, the design of head-mounted spatial instruments will require an understanding of the tolerable limits of visual-vestibular discord.
Integrated Data Visualization and Virtual Reality Tool
NASA Technical Reports Server (NTRS)
Dryer, David A.
1998-01-01
The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.
Virtual reality method to analyze visual recognition in mice.
Young, Brent Kevin; Brennan, Jayden Nicole; Wang, Ping; Tian, Ning
2018-01-01
Behavioral tests have been extensively used to measure the visual function of mice. To determine how precisely mice perceive certain visual cues, it is necessary to have a quantifiable measurement of their behavioral responses. Recently, virtual reality tests have been utilized for a variety of purposes, from analyzing hippocampal cell functionality to identifying visual acuity. Despite the widespread use of these tests, the training required for mice to recognize a variety of different visual targets and perform the behavioral tests has not been thoroughly characterized. We have developed a virtual reality behavior testing approach that can assay a variety of different aspects of visual perception, including color/luminance and motion detection. When tested for the ability to detect a color/luminance target or a moving target, mice were able to discern the designated target after 9 days of continuous training. However, the quality of their performance is significantly affected by the complexity of the visual target and their ability to navigate on a spherical treadmill. Importantly, mice retained memory of their visual recognition for at least three weeks after the end of their behavioral training.
Innovative application of virtual display technique in virtual museum
NASA Astrophysics Data System (ADS)
Zhang, Jiankang
2017-09-01
A virtual museum displays and simulates the functions of a real museum on the Internet in the form of three-dimensional (3D) virtual reality through interactive programs. Based on the Virtual Reality Modeling Language (VRML), building a virtual museum that interacts effectively with the offline museum lies in making full use of the 3D panorama, virtual reality, and augmented reality techniques, and in innovatively applying dynamic environment modeling, real-time 3D graphics generation, system integration, and other key virtual reality techniques in the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography or virtual reality, is based on static images of reality. Virtual reality is a computer simulation technique that creates an interactive 3D dynamic visual world that users can experience. Augmented reality, also known as mixed reality, simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience directly in reality. These technologies make the virtual museum possible. It will not only bring better experiences and convenience to the public, but also help improve the influence and cultural functions of the real museum.
Virtual Reality: An Instructional Medium for Visual-Spatial Tasks.
ERIC Educational Resources Information Center
Regian, J. Wesley; And Others
1992-01-01
Describes an empirical exploration of the instructional potential of virtual reality as an interface for simulation-based training. Shows that subjects learned spatial-procedural and spatial-navigational skills in virtual reality. (SR)
Stereoscopic augmented reality for laparoscopic surgery.
Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj
2014-07-01
Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries.
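Registering the tracked ultrasound image into each eye's video view amounts to composing rigid transforms from the optical tracker and projecting with the camera intrinsics. The sketch below shows that chain with placeholder matrices; in practice every transform comes from calibration, which is the hard part the system's software solves:

import numpy as np

def homogeneous(points):
    """Append a 1 to each 3D point (N x 3 -> N x 4)."""
    return np.hstack([points, np.ones((len(points), 1))])

# Placeholder 4x4 rigid transforms; in a real system all three come from
# the optical tracker and calibration procedures.
T_probe_to_tracker = np.eye(4)          # LUS probe pose in tracker frame
T_camera_to_tracker = np.eye(4)         # one stereo camera in tracker frame
T_image_to_probe = np.eye(4)            # US image plane in probe frame
T_image_to_probe[2, 3] = 0.10           # image plane 10 cm along probe axis

# Chain: ultrasound image -> probe -> tracker -> camera.
T_image_to_camera = (np.linalg.inv(T_camera_to_tracker)
                     @ T_probe_to_tracker @ T_image_to_probe)

K = np.array([[800.0, 0.0, 320.0],      # toy pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Corners of a 4 cm x 6 cm ultrasound image plane (metres, image frame).
corners = np.array([[0, 0, 0], [0.04, 0, 0], [0.04, 0.06, 0], [0, 0.06, 0]], float)
cam_pts = (T_image_to_camera @ homogeneous(corners).T)[:3]
pixels = (K @ cam_pts) / cam_pts[2]     # perspective projection
print(pixels[:2].T)                     # where to draw the US quad per eye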
Transduction between worlds: using virtual and mixed reality for earth and planetary science
NASA Astrophysics Data System (ADS)
Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.
2017-12-01
Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale, users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances to connect data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. These innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect digital spaces of spatial analysis, simulation and geovisualization, with geographic spaces of data collection, fieldwork, interpretation and communication. In a way, we are talking about transduction between real and virtual worlds. Taking a mixed reality approach to this, we can link real and virtual worlds. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.
Applied virtual reality at the Research Triangle Institute
NASA Technical Reports Server (NTRS)
Montoya, R. Jorge
1994-01-01
Virtual Reality (VR) is a way for humans to use computers in visualizing, manipulating and interacting with large geometric data bases. This paper describes a VR infrastructure and its application to marketing, modeling, architectural walk through, and training problems. VR integration techniques used in these applications are based on a uniform approach which promotes portability and reusability of developed modules. For each problem, a 3D object data base is created using data captured by hand or electronically. The object's realism is enhanced through either procedural or photo textures. The virtual environment is created and populated with the data base using software tools which also support interactions with and immersivity in the environment. These capabilities are augmented by other sensory channels such as voice recognition, 3D sound, and tracking. Four applications are presented: a virtual furniture showroom, virtual reality models of the North Carolina Global TransPark, a walk through the Dresden Fraunenkirche, and the maintenance training simulator for the National Guard.
Prasad, M S Raghu; Manivannan, Muniyandi; Manoharan, Govindan; Chandramohan, S M
2016-01-01
Most of the commercially available virtual reality-based laparoscopic simulators do not effectively evaluate combined psychomotor and force-based laparoscopic skills. Consequently, the lack of training on these critical skills leads to intraoperative errors. To assess the effectiveness of the novel virtual reality-based simulator, this study analyzed the combined psychomotor (i.e., motion or movement) and force skills of residents and expert surgeons. The study also examined the effectiveness of real-time visual force feedback and tool motion during training. Bimanual fundamental (i.e., probing, pulling, sweeping, grasping, and twisting) and complex tasks (i.e., tissue dissection) were evaluated. In both tasks, visual feedback on applied force and tool motion was provided. The skills of the participants while performing the early tasks were assessed with and without visual feedback. Participants performed 5 repetitions of fundamental and complex tasks. Reaction force and instrument acceleration were used as metrics. Setting: Surgical Gastroenterology, Government Stanley Medical College and Hospital; Institute of Surgical Gastroenterology, Madras Medical College and Rajiv Gandhi Government General Hospital. Participants: Residents (N = 25; postgraduates and surgeons with <2 years of laparoscopic surgery) and expert surgeons (N = 25; surgeons with >4 and ≤10 years of laparoscopic surgery). Residents applied large forces compared with expert surgeons and performed abrupt tool movements (p < 0.001). However, visual + haptic feedback improved the performance of residents (p < 0.001). In complex tasks, visual + haptic feedback did not influence the applied force of expert surgeons, but influenced their tool motion (p < 0.001). Furthermore, in the complex tissue sweeping task, expert surgeons applied more force, but were within the tissue damage limits. In both groups, exertion of large forces and abrupt tool motion were observed during grasping, probing or pulling, and tissue sweeping maneuvers (p < 0.001). Modern day curriculum-based training should evaluate the skills of residents with robust force and psychomotor-based exercises for proficient laparoscopy. Visual feedback on force and motion during training has the potential to enhance the learning curve of residents.
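The two metrics named above, reaction force and instrument acceleration, reduce to summary statistics over recorded time series. A hypothetical scoring helper (the chosen statistics and all names are illustrative, not the simulator's actual metrics):

import numpy as np

def skill_metrics(force, accel, dt):
    """Summarize applied force and tool motion for one task repetition.

    force: (T,) reaction force magnitude in newtons
    accel: (T,) instrument acceleration magnitude in m/s^2
    """
    jerk = np.gradient(accel, dt)          # abrupt tool movement shows up here
    return {
        "peak_force_N": float(force.max()),
        "rms_force_N": float(np.sqrt((force ** 2).mean())),
        "rms_jerk": float(np.sqrt((jerk ** 2).mean())),
    }

dt = 0.01
t = np.arange(0, 5, dt)
force = 2.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)   # toy force recording
accel = np.abs(np.random.normal(0, 0.2, t.size))  # toy acceleration trace
print(skill_metrics(force, accel, dt))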
Kiryu, Tohru; So, Richard HY
2007-01-01
Around three years ago, in the special issue on augmented and virtual reality in rehabilitation, the topic of simulator sickness was briefly discussed in relation to vestibular rehabilitation. Simulator sickness with virtual reality applications has also been referred to as visually induced motion sickness or cybersickness. Recently, studies on cybersickness have been reported in entertainment, training, gaming, and medical environments in several journals. Virtual stimuli can enlarge the sensation of presence, but they sometimes also evoke unpleasant sensations. In order to safely apply augmented and virtual reality to long-term rehabilitation treatment, the sensation of presence and cybersickness should be appropriately controlled. This issue presents the results of five studies conducted to evaluate visually induced effects and to speculate on the influences of virtual rehabilitation. In particular, the influence of visual and vestibular stimuli on cardiovascular responses is reported in terms of academic contribution. PMID:17894857
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.
Wright, W Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not remains to be seen. The benefits as well as the risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.
Virtual reality training improves balance function
Mao, Yurong; Chen, Peiming; Li, Le; Huang, Dongfeng
2014-01-01
Virtual reality is a new technology that simulates a three-dimensional virtual world on a computer and enables the generation of visual, audio, and haptic feedback for the full immersion of users. Users can interact with and observe objects in three-dimensional visual space without limitation. At present, virtual reality training has been widely used in rehabilitation therapy for balance dysfunction. This paper summarizes related articles and other articles suggesting that virtual reality training can improve balance dysfunction in patients after neurological diseases. When patients perform virtual reality training, the prefrontal, parietal cortical areas and other motor cortical networks are activated. These activations may be involved in the reconstruction of neurons in the cerebral cortex. Growing evidence from clinical studies reveals that virtual reality training improves the neurological function of patients with spinal cord injury, cerebral palsy and other neurological impairments. These findings suggest that virtual reality training can activate the cerebral cortex and improve the spatial orientation capacity of patients, thus facilitating the cortex to control balance and increase motion function. PMID:25368651
Advanced helmet mounted display (AHMD)
NASA Astrophysics Data System (ADS)
Sisodia, Ashok; Bayer, Michael; Townley-Smith, Paul; Nash, Brian; Little, Jay; Cassarly, William; Gupta, Anurag
2007-04-01
Due to significantly increased U.S. military involvement in deterrent, observer, security, peacekeeping and combat roles around the world, the military expects significant future growth in the demand for deployable virtual reality trainers with networked simulation capability of the battle space visualization process. The use of HMD technology in simulated virtual environments has been initiated by the demand for more effective training tools. The AHMD overlays computer-generated data (symbology, synthetic imagery, enhanced imagery) augmented with actual and simulated visible environment. The AHMD can be used to support deployable reconfigurable training solutions as well as traditional simulation requirements, UAV augmented reality, air traffic control and Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) applications. This paper will describe the design improvements implemented for production of the AHMD System.
ERIC Educational Resources Information Center
Ausburn, Lynna J.; Ausburn, Floyd B.; Kroutter, Paul
2010-01-01
Virtual reality (VR) technology has demonstrated effectiveness in a variety of technical learning situations, yet little is known about its differential effects on learners with different levels of visual processing skill. This small-scale exploratory study tested VR through quasi-experimental methodology and a theoretical/conceptual framework…
Motion magnification for endoscopic surgery
NASA Astrophysics Data System (ADS)
McLeod, A. Jonathan; Baxter, John S. H.; de Ribaupierre, Sandrine; Peters, Terry M.
2014-03-01
Endoscopic and laparoscopic surgeries are used for many minimally invasive procedures but limit the visual and haptic feedback available to the surgeon. This can make vessel sparing procedures particularly challenging to perform. Previous approaches have focused on hardware intensive intraoperative imaging or augmented reality systems that are difficult to integrate into the operating room. This paper presents a simple approach in which motion is visually enhanced in the endoscopic video to reveal pulsating arteries. This is accomplished by amplifying subtle, periodic changes in intensity coinciding with the patient's pulse. This method is then applied to two procedures to illustrate its potential. The first, endoscopic third ventriculostomy, is a neurosurgical procedure where the floor of the third ventricle must be fenestrated without injury to the basilar artery. The second, nerve-sparing robotic prostatectomy, involves removing the prostate while limiting damage to the neurovascular bundles. In both procedures, motion magnification can enhance subtle pulsation in these structures to aid in identifying and avoiding them.
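The amplification step is the core of Eulerian video magnification: bandpass-filter each pixel's intensity over time around the expected pulse frequency and add the amplified band back onto the video. A minimal grayscale sketch with NumPy/SciPy, illustrating the general technique rather than the authors' exact pipeline:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def magnify_pulse(frames, fps, f_lo=0.8, f_hi=2.0, alpha=20.0):
    """Amplify subtle periodic intensity changes (e.g. arterial pulsation).

    frames: float video of shape (T, H, W) with values in [0, 1];
    the band [f_lo, f_hi] Hz should bracket the patient's heart rate.
    """
    sos = butter(2, [f_lo, f_hi], btype="bandpass", fs=fps, output="sos")
    pulsatile = sosfiltfilt(sos, frames, axis=0)   # per-pixel temporal filter
    return np.clip(frames + alpha * pulsatile, 0.0, 1.0)

# Toy demo: a faint 1.2 Hz "pulse" hidden in noise becomes visible.
fps = 30
t = np.arange(120) / fps
frames = (0.5 + 0.002 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
          + 0.001 * np.random.randn(120, 32, 32))
out = magnify_pulse(frames, fps)
print(out.shape)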
Education about Hallucinations Using an Internet Virtual Reality System: A Qualitative Survey
ERIC Educational Resources Information Center
Yellowlees, Peter M.; Cook, James N.
2006-01-01
Objective: The authors evaluate an Internet virtual reality technology as an education tool about the hallucinations of psychosis. Method: This is a pilot project using Second Life, an Internet-based virtual reality system, in which a virtual reality environment was constructed to simulate the auditory and visual hallucinations of two patients…
Augmented reality for breast imaging.
Rancati, Alberto; Angrigiani, Claudio; Nava, Maurizio B; Catanuto, Giuseppe; Rocco, Nicola; Ventrice, Fernando; Dorr, Julio
2018-06-01
Augmented reality (AR) enables the superimposition of virtual reality reconstructions onto clinical images of a real patient, in real time. This allows visualization of internal structures through overlying tissues, thereby providing a virtual transparency view of surgical anatomy. AR has been applied to neurosurgery, which utilizes a relatively fixed space, frames, and bony references; the application of AR facilitates the relationship between virtual and real data. Augmented breast imaging (ABI) is described. Breast MRI studies for breast implant patients with seroma were performed using a Siemens 3T system with a body coil and a four-channel bilateral phased-array breast coil as the transmitter and receiver, respectively. Gadolinium was injected as a contrast agent (0.1 mmol/kg at 2 mL/s) using a programmable power injector. DICOM-formatted image data from 10 MRI cases of breast implant seroma and 10 MRI cases with T1-2 N0 M0 breast cancer were imported and transformed into augmented reality images. ABI demonstrated stereoscopic depth perception, focal point convergence, 3D cursor use, and joystick fly-through. ABI can improve clinical outcomes by providing an enhanced view of the structures to work on. It should be further studied to determine its utility in clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, Birchard P; Michel, Kelly D; Few, Douglas A
From stereophonic, positional sound to high-definition imagery that is crisp and clean, high fidelity computer graphics enhance our view, insight, and intuition regarding our environments and conditions. Contemporary 3-D modeling tools offer an open architecture framework that enables integration with other technologically innovative arenas. One innovation of great interest is Augmented Reality, the merging of virtual, digital environments with physical, real-world environments, creating a mixed reality where relevant data and information augment the real or actual experience in real time by spatial or semantic context. Pairing 3-D virtual immersive models with a dynamic platform such as semi-autonomous robotics or personnel odometry systems to create a mixed reality offers a new and innovative design information verification inspection capability, evaluation accuracy, and information gathering capability for nuclear facilities. Our paper discusses the integration of two innovative technologies, 3-D visualizations with inertial positioning systems, and the resulting augmented reality offered to the human inspector. The discussion in the paper includes an exploration of human and non-human (surrogate) inspections of a nuclear facility, integrated safeguards knowledge within a synchronized virtual model operated, or worn, by a human inspector, and the anticipated benefits to safeguards evaluations of facility operations.
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a highspeed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.
NASA Astrophysics Data System (ADS)
Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.
2011-12-01
Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automatic Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration, and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.
ERIC Educational Resources Information Center
Kartiko, Iwan; Kavakli, Manolya; Cheng, Ken
2010-01-01
As the technology in computer graphics advances, Animated-Virtual Actors (AVAs) in Virtual Reality (VR) applications become increasingly rich and complex. Cognitive Theory of Multimedia Learning (CTML) suggests that complex visual materials could hinder novice learners from attending to the lesson properly. On the other hand, previous studies have…
From Vesalius to Virtual Reality: How Embodied Cognition Facilitates the Visualization of Anatomy
ERIC Educational Resources Information Center
Jang, Susan
2010-01-01
This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and…
PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices
ERIC Educational Resources Information Center
Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões
2013-01-01
This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…
ERIC Educational Resources Information Center
Jeyaraj, Joseph
2017-01-01
Engineers communicate multimodally using written and visual communication, but there is not much theorizing on why they do so and how. This essay, therefore, examines why engineers communicate multimodally, what, in the context of representing engineering realities, are the strengths and weaknesses of written and visual communication, and how,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drouhard, Margaret MEG G; Steed, Chad A; Hahn, Steven E
In this paper, we propose strategies and objectives for immersive data visualization with applications in materials science using the Oculus Rift virtual reality headset. We provide background on currently available analysis tools for neutron scattering data and other large-scale materials science projects. In the context of the current challenges facing scientists, we discuss immersive virtual reality visualization as a potentially powerful solution. We introduce a prototype immersive visualization system, developed in conjunction with materials scientists at the Spallation Neutron Source, which we have used to explore large crystal structures and neutron scattering data. Finally, we offer our perspective on the greatest challenges that must be addressed to build effective and intuitive virtual reality analysis tools that will be useful for scientists in a wide range of fields.
NASA Astrophysics Data System (ADS)
Demir, I.
2014-12-01
Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g. flooding and flood damage) and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system and demonstrates the capabilities of the system for various visualization and interaction modes.
A novel augmented reality system of image projection for image-guided neurosurgery.
Mahvash, Mehran; Besharati Tabrizi, Leila
2013-05-01
Augmented reality systems combine virtual images with a real environment. The aim of this study was to design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) an MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) a digital photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image onto the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented in a first evaluation with promising results. The virtual image could be projected onto the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was achieved using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image onto the patient's head, skull, or brain surface in real time is an augmented reality approach that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery.
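Although the paper registers the projection manually, fiducial-based registration onto a roughly planar surface can be expressed as a homography estimated from marker correspondences. A hypothetical OpenCV sketch (the marker coordinates, overlay image, and resolution are invented for illustration):

import cv2
import numpy as np

# Fiducial positions (pixels) in the MRI-derived virtual image, and the same
# markers located in a camera view of the projection surface (invented values).
virtual_pts = np.array([[50, 60], [420, 55], [430, 400], [45, 395]], np.float32)
observed_pts = np.array([[112, 98], [508, 90], [522, 452], [105, 447]], np.float32)

# Homography mapping virtual-image coordinates into the camera/projector view.
H, _ = cv2.findHomography(virtual_pts, observed_pts, method=cv2.RANSAC)

# A synthetic overlay stands in for the segmented-tumor image.
virtual_image = np.zeros((480, 640, 3), np.uint8)
cv2.circle(virtual_image, (235, 230), 60, (0, 0, 255), -1)   # "tumor" region
warped = cv2.warpPerspective(virtual_image, H, (640, 480))   # image to project

# Mean reprojection error of the fiducials as a registration-accuracy check.
proj = cv2.perspectiveTransform(virtual_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
print("mean fiducial error (px):", np.linalg.norm(proj - observed_pts, axis=1).mean())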
How virtual reality works: illusions of vision in "real" and virtual environments
NASA Astrophysics Data System (ADS)
Stark, Lawrence W.
1995-04-01
Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization, and other professional work.
O'Connor, Timothy; Rawat, Siddharth; Markman, Adam; Javidi, Bahram
2018-03-01
We propose a compact imaging system that integrates an augmented reality head mounted device with digital holographic microscopy for automated cell identification and visualization. A shearing interferometer is used to produce holograms of biological cells, which are recorded using customized smart glasses containing an external camera. After image acquisition, segmentation is performed to isolate regions of interest containing biological cells in the field-of-view, followed by digital reconstruction of the cells, which is used to generate a three-dimensional (3D) pseudocolor optical path length profile. Morphological features are extracted from the cell's optical path length map, including mean optical path length, coefficient of variation, optical volume, projected area, projected area to optical volume ratio, cell skewness, and cell kurtosis. Classification is performed using the random forest classifier, support vector machines, and K-nearest neighbor, and the results are compared. Finally, the augmented reality device displays the cell's pseudocolor 3D rendering of its optical path length profile, extracted features, and the identified cell's type or class. The proposed system could allow a healthcare worker to quickly visualize cells using augmented reality smart glasses and extract the relevant information for rapid diagnosis. To the best of our knowledge, this is the first report on the integration of digital holographic microscopy with augmented reality devices for automated cell identification and visualization.
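A minimal sketch of the feature-extraction and classification stage described above, assuming a segmented optical path length (OPL) map per cell; the synthetic training data and classifier settings are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier

def opl_features(opl, pixel_area=1.0):
    """Morphological features from a cell's OPL map (0 outside the cell)."""
    vals = opl[opl > 0]                      # pixels belonging to the cell
    mean_opl = vals.mean()
    area = vals.size * pixel_area            # projected area
    volume = vals.sum() * pixel_area         # optical volume
    return [mean_opl,
            vals.std() / mean_opl,           # coefficient of variation
            volume, area, area / volume,     # area-to-volume ratio
            skew(vals), kurtosis(vals)]      # cell skewness and kurtosis

# Synthetic stand-ins for two cell classes (illustration only).
rng = np.random.default_rng(0)
def fake_cell(shape_param):
    m = np.zeros((32, 32))
    m[8:24, 8:24] = rng.gamma(shape_param, 1.0, (16, 16))
    return m

maps = [fake_cell(2.0) for _ in range(20)] + [fake_cell(5.0) for _ in range(20)]
labels = [0] * 20 + [1] * 20
clf = RandomForestClassifier(n_estimators=100).fit(
    [opl_features(m) for m in maps], labels)
print(clf.predict([opl_features(fake_cell(5.0))]))  # class shown in the HMD
```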
Associative visual learning by tethered bees in a controlled visual environment.
Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin
2017-10-10
Free-flying honeybees exhibit remarkable cognitive capacities, but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed to establish a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video-projected in front of them. Free-flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of the stimuli (CS+) was paired with sucrose and the other (CS-) with quinine solution. Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful, as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning, as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
Virtual Reality as an Educational and Training Tool for Medicine.
Izard, Santiago González; Juanes, Juan A; García Peñalvo, Francisco J; Estella, Jesús Mª Gonçalvez; Ledesma, Mª José Sánchez; Ruisoto, Pablo
2018-02-01
Until very recently, we considered Virtual Reality as something close at hand yet still science fiction. Today, however, Virtual Reality is being integrated into many different areas of our lives, from videogames to different industrial use cases and, of course, it is starting to be used in medicine. There are two broad classifications of Virtual Reality. In the first, we visualize a world created completely by computer, in three dimensions, where we can tell that the world we are visualizing is not real, at least for the moment, as rendered images are improving very fast. The second basically consists of a reflection of our reality: it is created using spherical or 360° images and videos, so we lose three-dimensional visualization capacity (until 3D cameras are more developed), but on the other hand we gain realism in the images. We could also mention a third classification that merges the previous two, where virtual elements created by computer coexist with 360° images and videos. In this article we present two systems that we have developed, each of which can be framed within one of the previous classifications, identifying the technologies used for their implementation as well as the advantages of each one. We also analyze how these systems can improve the current methodologies used for medical training. The implications of these developments as tools for teaching, learning and training are discussed.
From Vesalius to virtual reality: How embodied cognition facilitates the visualization of anatomy
NASA Astrophysics Data System (ADS)
Jang, Susan
This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cubes or external body parts) are visually and motorically embodied in our minds. For example, people take longer to mentally rotate an image of their hand not only when there is a greater degree of rotation, but also when the images are presented in a manner incompatible with their natural body movement (Parsons, 1987a, 1994; Cooper & Shepard, 1975; Sekiyama, 1983). Such findings confirm the notion that our mental images, and our rotations of those images, are confined by the laws of physics and biomechanics, because we perceive, think and reason in an embodied fashion. With the advancement of new technologies, virtual reality programs for medical education now enable users to interact directly in a 3-D environment with internal anatomical structures. Given that such structures are not readily viewable to users, and thus not previously susceptible to embodiment, and that the VR environment affords all possible degrees of rotation, how people learn from these programs raises new questions. If we embody external anatomical parts we can see, such as our hands and feet, can we embody internal anatomical parts we cannot see? Does manipulating the anatomical part in virtual space facilitate the user's embodiment of that structure and therefore the ability to visualize the structure mentally? Medical students grouped in yoked pairs were tasked with mastering the spatial configuration of an internal anatomical structure; only one group was allowed to manipulate the images of this anatomical structure in a 3-D VR environment, whereas the other group could only view the manipulation. The manipulation group outperformed the visual group, suggesting that the interactivity experienced by the manipulation group promoted visual and motoric embodiment, which in turn enhanced learning. Moreover, when accounting for spatial ability, manipulation was found to benefit students with low spatial ability more than students with high spatial ability.
Effect of virtual reality on cognition in stroke patients.
Kim, Bo Ryun; Chun, Min Ho; Kim, Lee Suk; Park, Ji Young
2011-08-01
To investigate the effect of virtual reality on the recovery of cognitive impairment in stroke patients. Twenty-eight patients (11 males and 17 females, mean age 64.2) with cognitive impairment following stroke were recruited for this study. All patients were randomly assigned to one of two groups, the virtual reality (VR) group (n=15) or the control group (n=13). The VR group received both virtual reality training and computer-based cognitive rehabilitation, whereas the control group received only computer-based cognitive rehabilitation. To measure activities of daily living and cognitive and motor functions, the following assessment tools were used: a computerized neuropsychological test and the Tower of London (TOL) test for cognitive function assessment, the Korean-Modified Barthel Index (K-MBI) for functional status evaluation, and the Motricity Index (MI) for motor function assessment. All recruited patients underwent these evaluations before rehabilitation and four weeks after rehabilitation. The VR group showed significant improvement in the K-MMSE, visual and auditory continuous performance tests (CPT), forward digit span test (DST), forward and backward visual span tests (VST), visual and verbal learning tests, TOL, K-MBI, and MI scores, while the control group showed significant improvement in the K-MMSE, forward DST, visual and verbal learning tests, trail-making test-type A, TOL, K-MBI, and MI scores after rehabilitation. The changes in the visual CPT and backward VST in the VR group after rehabilitation were significantly higher than those in the control group. Our findings suggest that virtual reality training combined with computer-based cognitive rehabilitation may be of additional benefit for treating cognitive impairment in stroke patients.
1998-03-01
Research Laboratory's Virtual Reality Responsive Workbench (VRRWB) and Dragon software system which together address the problem of battle space...and describe the lessons which have been learned. Interactive graphics, workbench, battle space visualization, virtual reality, user interface.
Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion
Smith, Ross T.; Hunter, Estin V.; Davis, Miles G.; Sterling, Michele; Moseley, G. Lorimer
2017-01-01
Background Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties was also investigated. Results Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain. PMID:28243537
Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion.
Harvie, Daniel S; Smith, Ross T; Hunter, Estin V; Davis, Miles G; Sterling, Michele; Moseley, G Lorimer
2017-01-01
Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties was also investigated. Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain.
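The offset manipulation itself is easy to state: the displayed rotation is the actual head rotation scaled by a visual gain (0.5 to 2.0 in this study), and the drift is the signed difference between perceived and actual facing direction. A minimal sketch, with function names that are ours rather than the authors':

```python
def displayed_rotation(actual_deg, gain):
    """Visual feedback offset by a gain factor: gain > 1 suggests more
    movement than actually occurred, gain < 1 suggests less."""
    return actual_deg * gain

def kinaesthetic_drift(perceived_deg, actual_deg=50.0):
    """Signed discrepancy between the pointed (perceived) facing direction
    and the real 50-degree head rotation."""
    return perceived_deg - actual_deg

# With a 150% gain, the scene implies 75 degrees at 50 degrees of real rotation.
print(displayed_rotation(50.0, 1.5))  # -> 75.0
print(kinaesthetic_drift(58.0))       # -> 8.0, drift toward the suggested motion
```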
NASA Technical Reports Server (NTRS)
2000-01-01
Reality Capture Technologies, Inc. is a spinoff company from Ames Research Center. Offering e-business solutions for optimizing management, design and production processes, RCT uses visual collaboration environments (VCEs) such as those used to prepare the Mars Pathfinder mission. The product, 4-D Reality Framework, allows multiple users from different locations to manage and share data. The insurance industry is one targeted commercial application for this technology.
Virtual reality stimuli for force platform posturography.
Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko
2002-01-01
People who rely heavily on vision to control posture are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be found. In this study the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. According to the tests, two of the stimuli have a significant effect on balance.
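The abstract does not name the posturographic measures used; two common force-platform summaries are the sway path length and the RMS radial displacement of the centre of pressure (COP), computed as in this illustrative Python sketch.

```python
import numpy as np

def sway_metrics(cop_xy, fs=100.0):
    """Balance summaries from a COP trace: (N, 2) positions in metres at fs Hz."""
    steps = np.diff(cop_xy, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()   # total sway path (m)
    centred = cop_xy - cop_xy.mean(axis=0)
    rms = np.sqrt((centred ** 2).sum(axis=1).mean())    # RMS radial sway (m)
    mean_velocity = path_length / (len(cop_xy) / fs)    # average COP speed (m/s)
    return path_length, rms, mean_velocity

# Synthetic 30 s recording at 100 Hz, for illustration only.
rng = np.random.default_rng(1)
cop = np.cumsum(rng.normal(0.0, 1e-4, (3000, 2)), axis=0)
print(sway_metrics(cop))
```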
Display technologies for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Lee, Seungjae; Jang, Changwon; Hong, Jong-Young; Li, Gang
2018-02-01
By virtue of rapid progress in optics, sensors, and computer science, we are witnessing commercial products and prototypes for augmented reality (AR) penetrating the consumer markets. AR is in the spotlight because it is expected to provide a much more immersive and realistic experience than ordinary displays. However, several barriers must be overcome for successful commercialization of AR. Here, we explore challenging and important topics for AR such as image combiners, enhancement of display performance, and focus cue reproduction. Image combiners are essential to integrate virtual images with the real world. Display performance (e.g. field of view and resolution) is important for a more immersive experience, and focus cue reproduction may mitigate the visual fatigue caused by the vergence-accommodation conflict. We also demonstrate emerging technologies to overcome these issues: the index-matched anisotropic crystal lens (IMACL), retinal projection displays, and 3D displays with focus cues. For image combiners, a novel optical element called the IMACL provides a relatively wide field of view. Retinal projection displays may enhance the field of view and resolution of AR displays. Focus cues can be reconstructed via multi-layer displays and holographic displays. Experimental results of our prototypes are explained.
Augmented reality in medical education?
Kamphuis, Carolien; Barsom, Esther; Schijven, Marlies; Christoph, Noor
2014-09-01
Learning in the medical domain is to a large extent workplace learning and involves mastery of complex skills that require performance up to professional standards in the work environment. Since training in this real-life context is not always possible for reasons of safety, costs, or didactics, alternative ways are needed to achieve clinical excellence. Educational technology and more specifically augmented reality (AR) has the potential to offer a highly realistic situated learning experience supportive of complex medical learning and transfer. AR is a technology that adds virtual content to the physical real world, thereby augmenting the perception of reality. Three examples of dedicated AR learning environments for the medical domain are described. Five types of research questions are identified that may guide empirical research into the effects of these learning environments. Up to now, empirical research mainly appears to focus on the development, usability and initial implementation of AR for learning. Limited review results reflect the motivational value of AR, its potential for training psychomotor skills and the capacity to visualize the invisible, possibly leading to enhanced conceptual understanding of complex causality.
López-Martín, Olga; Segura Fragoso, Antonio; Rodríguez Hernández, Marta; Dimbwadyo Terrer, Iris; Polonio-López, Begoña
2016-01-01
To evaluate the effectiveness of a programme based on a virtual reality game to improve cognitive domains in patients with schizophrenia. A randomized controlled trial was conducted in 40 patients with schizophrenia, 20 in the experimental group and 20 in the control group. The experimental group received 10 sessions with the Nintendo Wii® over 5 weeks, 50 minutes/session, 2 days/week, in addition to conventional treatment. The control group received conventional treatment only. Statistically significant differences in the T-score were found in 5 of the 6 cognitive domains assessed: processing speed (F=12.04, p=0.001), attention/vigilance (F=12.75, p=0.001), working memory (F=18.86, p<0.01), verbal learning (F=7.6, p=0.009), and reasoning and problem solving (F=11.08, p=0.002), but not visual learning (F=3.6, p=0.064). Participation in virtual reality interventions aimed at cognitive training has great potential to yield significant gains in the different cognitive domains assessed in patients with schizophrenia. Copyright © 2015 SESPAS. Published by Elsevier España. All rights reserved.
Game-Based Evacuation Drill Using Augmented Reality and Head-Mounted Display
ERIC Educational Resources Information Center
Kawai, Junya; Mitsuhara, Hiroyuki; Shishibori, Masami
2016-01-01
Purpose: Evacuation drills should be more realistic and interactive. Focusing on situational and audio-visual realities and scenario-based interactivity, the authors have developed a game-based evacuation drill (GBED) system that presents augmented reality (AR) materials on tablet computers. The paper's current research purpose is to improve…
Hallucinators find meaning in noises: pareidolic illusions in dementia with Lewy bodies.
Yokoi, Kayoko; Nishio, Yoshiyuki; Uchiyama, Makoto; Shimomura, Tatsuo; Iizuka, Osamu; Mori, Etsuro
2014-04-01
By definition, visual illusions and hallucinations differ in whether the perceived objects exist in reality. A recent study challenged this dichotomy, in which pareidolias, a type of complex visual illusion involving ambiguous forms being perceived as meaningful objects, are very common and phenomenologically similar to visual hallucinations in dementia with Lewy bodies (DLB). We hypothesise that a common psychological mechanism exists between pareidolias and visual hallucinations in DLB that confers meaning upon meaningless visual information. Furthermore, we believe that these two types of visual misperceptions have a common underlying neural mechanism, namely, cholinergic insufficiency. The current study investigated pareidolic illusions using meaningless visual noise stimuli (the noise pareidolia test) in 34 patients with DLB, 34 patients with Alzheimer's disease and 28 healthy controls. Fifteen patients with DLB were administered the noise pareidolia test twice, before and after donepezil treatment. Three major findings were discovered: (1) DLB patients saw meaningful illusory images (pareidolias) in meaningless visual stimuli, (2) the number of pareidolic responses correlated with the severity of visual hallucinations, and (3) cholinergic enhancement reduced both the number of pareidolias and the severity of visual hallucinations in patients with DLB. These findings suggest that a common underlying psychological and neural mechanism exists between pareidolias and visual hallucinations in DLB. Copyright © 2014 Elsevier Ltd. All rights reserved.
Augmented reality user interface for mobile ground robots with manipulator arms
NASA Astrophysics Data System (ADS)
Vozar, Steven; Tilbury, Dawn M.
2011-01-01
Augmented Reality (AR) is a technology in which real-world visual data are combined with an overlay of computer graphics, enhancing the original feed. AR is an attractive tool for teleoperated unmanned ground vehicle (UGV) user interfaces (UIs), as it can improve communication between robots and users via an intuitive spatial and visual dialogue, thereby increasing operator situational awareness. The successful operation of UGVs often relies upon both chassis navigation and manipulator arm control, and since the existing literature usually focuses on one task or the other, there is a gap in mobile robot UIs that take advantage of AR for both applications. This work describes the development and analysis of an AR UI system for a UGV with an attached manipulator arm. The system supplements a video feed shown to an operator with information about geometric relationships within the robot task space to improve the operator's situational awareness. Previous studies on AR systems and preliminary analyses indicate that such an implementation of AR for a mobile robot with a manipulator arm can be expected to improve operator performance. A full user study could determine whether this hypothesis is supported by performing an analysis of variance on common test metrics associated with UGV teleoperation.
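At the core of such an overlay is projecting known 3D task-space geometry (for example, the gripper position) into the operator's video frame. A minimal pinhole-camera sketch follows; the intrinsics and pose are placeholder values, not parameters from the study.

```python
import numpy as np

def project_point(p_world, R, t, K):
    """Project a 3D task-space point into pixel coordinates (pinhole model).
    R, t: extrinsics mapping world to camera frame; K: 3x3 intrinsic matrix."""
    p_cam = R @ p_world + t
    if p_cam[2] <= 0:
        return None                    # behind the camera: nothing to draw
    uv = K @ (p_cam / p_cam[2])        # perspective divide, then intrinsics
    return uv[:2]

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
# e.g. the manipulator gripper at 1.2 m depth, drawn over the video feed
print(project_point(np.array([0.1, -0.05, 1.2]), R, t, K))
```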
NASA Astrophysics Data System (ADS)
Eichenlaub, Jesse B.
2005-03-01
The difference in accommodation and convergence distance experienced when viewing stereoscopic displays has long been recognized as a source of visual discomfort. It is especially problematic in head-mounted virtual reality and enhanced reality displays, where images must often be displayed across a large depth range or superimposed on real objects. DTI has demonstrated a novel method of creating stereoscopic images in which the focus and fixation distances are closely matched for all parts of the scene, from close distances to infinity. The method is passive in the sense that it does not rely on eye tracking, moving parts, variable focus optics, vibrating optics, or feedback loops. The method uses a rapidly changing illumination pattern in combination with a high-speed microdisplay to create cones of light that converge at different distances to form the voxels of a high-resolution, space-filling image. A bench model display was built and a series of visual tests was performed in order to demonstrate the concept and investigate both its capabilities and limitations. Results proved conclusively that real optical images were being formed and that observers had to change their focus to read text or see objects at different distances.
Cherniack, E Paul
2011-01-01
To outline the evidence in the published medical literature suggesting potential applications of virtual reality (VR) for the identification and rehabilitation of cognitive disorders of the elderly. Non-systematic literature review. VR, despite its more common usage by younger persons, is a potentially promising source of techniques useful in the identification and rehabilitation of cognitive disorders of the elderly. Systems employing VR can include desktop and head-mounted visual displays, among other devices. Thus far, published studies have described VR-based applications in the identification and treatment of deficits in navigational skills in ambulation and driving. In addition, VR has been utilised to enhance the ability to perform activities of daily living in patients with dementia, stroke, and Parkinson's disease. Such investigations have thus far been small and unblinded. VR-based applications can potentially offer more versatile, comprehensive, and safer assessments of function. However, they also might be more expensive, more complex, and more difficult for elderly patients to use. Side effects of head-mounted visual displays include nausea and disorientation but have not been reported specifically in older subjects.
NASA Astrophysics Data System (ADS)
Castagnetti, C.; Giannini, M.; Rivola, R.
2017-05-01
The research project VisualVersilia 3D aims to offer a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best-preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: (1) a virtual tour of the site in its current configuration, based on spherical images enhanced by texts, graphics and audio guides, to enable both an immersive and a remote tourist experience; (2) a 3D reconstruction of the evidence and buildings in their current condition, for documentation and conservation purposes, based on a complete metric survey carried out through laser scanning; and (3) 3D virtual reconstructions through the main historical periods, based on historical investigation and analysis of the acquired data.
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds
Wright, W. Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes are discussed. PMID:24782724
Visualization of reservoir simulation data with an immersive virtual reality system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, B.K.
1996-10-01
This paper discusses an investigation into the use of an immersive virtual reality (VR) system to visualize reservoir simulation output data. The hardware and software configurations of the immersive VR system under test are described and compared to a nonimmersive VR system and to an existing workstation screen-based visualization system. The structure of 3D reservoir simulation data and the actions to be performed on the data within the VR system are discussed. The subjective results of the investigation are then presented, followed by a discussion of possible future work.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2006-01-01
The visual requirements for augmented reality or virtual environment displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14 deg, 28 deg, and 47 deg) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47 deg are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.
Facilitating role of 3D multimodal visualization and learning rehearsal in memory recall.
Do, Phuong T; Moreland, John R
2014-04-01
The present study investigated the influence of 3D multimodal visualization and learning rehearsal on memory recall. Participants (N = 175 college students ranging from 21 to 25 years) were assigned to different training conditions and rehearsal processes to learn a list of 14 terms associated with construction of a wood-frame house. They then completed a memory test determining their cognitive ability to free recall the definitions of the 14 studied terms immediately after training and rehearsal. The audiovisual modality training condition was associated with the highest accuracy, and the visual- and auditory-modality conditions with lower accuracy rates. The no-training condition indicated little learning acquisition. A statistically significant increase in performance accuracy for the audiovisual condition as a function of rehearsal suggested the relative importance of rehearsal strategies in 3D observational learning. Findings revealed the potential application of integrating virtual reality and cognitive sciences to enhance learning and teaching effectiveness.
Linte, Cristian A; White, James; Eagleson, Roy; Guiraudon, Gérard M; Peters, Terry M
2010-01-01
Virtual and augmented reality environments have been adopted in medicine as a means to enhance the clinician's view of the anatomy and facilitate the performance of minimally invasive procedures. Their value is truly appreciated during interventions where the surgeon cannot directly visualize the targets to be treated, such as during cardiac procedures performed on the beating heart. These environments must accurately represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical tracking, and visualization technology in a common framework centered around the patient. This review begins with an overview of minimally invasive cardiac interventions, describes the architecture of a typical surgical guidance platform including imaging, tracking, registration and visualization, highlights both clinical and engineering accuracy limitations in cardiac image guidance, and discusses the translation of the work from the laboratory into the operating room together with typically encountered challenges.
Mediated-reality magnification for macular degeneration rehabilitation
NASA Astrophysics Data System (ADS)
Martin-Gonzalez, Anabel; Kotliar, Konstantin; Rios-Martinez, Jorge; Lanzl, Ines; Navab, Nassir
2014-10-01
Age-related macular degeneration (AMD) is a gradually progressive eye condition and one of the leading causes of blindness and low vision in the Western world. Prevailing optical visual aids compensate for part of the lost visual function but omit helpful complementary information. This paper proposes an efficient magnification technique, which can be implemented on a head-mounted display, for improving the vision of patients with AMD while preserving global information about the scene. Performance of the magnification approach is evaluated by simulating central vision loss in normally sighted subjects. Visual perception was measured as a function of text reading speed and map route following speed. Statistical analysis of the experimental results suggests that our magnification method improves reading speed 1.2 times and spatial orientation for finding routes on a map 1.5 times compared to a conventional magnification approach, and is thus capable of enhancing the peripheral vision of AMD subjects along with their quality of life.
Goh, Rachel L Z; Kong, Yu Xiang George; McAlinden, Colm; Liu, John; Crowston, Jonathan G; Skalicky, Simon E
2018-01-01
To evaluate the use of smartphone-based virtual reality to objectively assess activity limitation in glaucoma. Cross-sectional study of 93 patients (54 mild, 22 moderate, 17 severe glaucoma). Sociodemographics, visual parameters, Glaucoma Activity Limitation-9 and Visual Function Questionnaire - Utility Index (VFQ-UI) were collected. Mean age was 67.4 ± 13.2 years; 52.7% were male; 65.6% were driving. A smartphone placed inside virtual reality goggles was used to administer the Virtual Reality Glaucoma Visual Function Test (VR-GVFT) to participants, consisting of three parts: stationary, moving ball, driving. Rasch analysis and classical validity tests were conducted to assess performance of VR-GVFT. Twenty-four of 28 stationary test items showed acceptable fit to the Rasch model (person separation 3.02, targeting 0). Eleven of 12 moving ball test items showed acceptable fit (person separation 3.05, targeting 0). No driving test items showed acceptable fit. Stationary test person scores showed good criterion validity, differentiating between glaucoma severity groups (P = 0.014); modest convergence validity, with mild to moderate correlation with VFQ-UI, better eye (BE) mean deviation, BE pattern deviation, BE central scotoma, worse eye (WE) visual acuity, and contrast sensitivity (CS) in both eyes (R = 0.243-0.381); and suboptimal divergent validity. Multivariate analysis showed that lower WE CS (P = 0.044) and greater age (P = 0.009) were associated with worse stationary test person scores. Smartphone-based virtual reality may be a portable objective simulation test of activity limitation related to glaucomatous visual loss. The use of simulated virtual environments could help better understand the activity limitations that affect patients with glaucoma.
Goh, Rachel L. Z.; McAlinden, Colm; Liu, John; Crowston, Jonathan G.; Skalicky, Simon E.
2018-01-01
Purpose To evaluate the use of smartphone-based virtual reality to objectively assess activity limitation in glaucoma. Methods Cross-sectional study of 93 patients (54 mild, 22 moderate, 17 severe glaucoma). Sociodemographics, visual parameters, Glaucoma Activity Limitation-9 and Visual Function Questionnaire – Utility Index (VFQ-UI) were collected. Mean age was 67.4 ± 13.2 years; 52.7% were male; 65.6% were driving. A smartphone placed inside virtual reality goggles was used to administer the Virtual Reality Glaucoma Visual Function Test (VR-GVFT) to participants, consisting of three parts: stationary, moving ball, driving. Rasch analysis and classical validity tests were conducted to assess performance of VR-GVFT. Results Twenty-four of 28 stationary test items showed acceptable fit to the Rasch model (person separation 3.02, targeting 0). Eleven of 12 moving ball test items showed acceptable fit (person separation 3.05, targeting 0). No driving test items showed acceptable fit. Stationary test person scores showed good criterion validity, differentiating between glaucoma severity groups (P = 0.014); modest convergence validity, with mild to moderate correlation with VFQ-UI, better eye (BE) mean deviation, BE pattern deviation, BE central scotoma, worse eye (WE) visual acuity, and contrast sensitivity (CS) in both eyes (R = 0.243–0.381); and suboptimal divergent validity. Multivariate analysis showed that lower WE CS (P = 0.044) and greater age (P = 0.009) were associated with worse stationary test person scores. Conclusions Smartphone-based virtual reality may be a portable objective simulation test of activity limitation related to glaucomatous visual loss. Translational Relevance The use of simulated virtual environments could help better understand the activity limitations that affect patients with glaucoma. PMID:29372112
Effect of Virtual Reality on Cognition in Stroke Patients
Kim, Bo Ryun; Kim, Lee Suk; Park, Ji Young
2011-01-01
Objective To investigate the effect of virtual reality on the recovery of cognitive impairment in stroke patients. Method Twenty-eight patients (11 males and 17 females, mean age 64.2) with cognitive impairment following stroke were recruited for this study. All patients were randomly assigned to one of two groups, the virtual reality (VR) group (n=15) or the control group (n=13). The VR group received both virtual reality training and computer-based cognitive rehabilitation, whereas the control group received only computer-based cognitive rehabilitation. To measure activities of daily living and cognitive and motor functions, the following assessment tools were used: a computerized neuropsychological test and the Tower of London (TOL) test for cognitive function assessment, the Korean-Modified Barthel Index (K-MBI) for functional status evaluation, and the Motricity Index (MI) for motor function assessment. All recruited patients underwent these evaluations before rehabilitation and four weeks after rehabilitation. Results The VR group showed significant improvement in the K-MMSE, visual and auditory continuous performance tests (CPT), forward digit span test (DST), forward and backward visual span tests (VST), visual and verbal learning tests, TOL, K-MBI, and MI scores, while the control group showed significant improvement in the K-MMSE, forward DST, visual and verbal learning tests, trail-making test-type A, TOL, K-MBI, and MI scores after rehabilitation. The changes in the visual CPT and backward VST in the VR group after rehabilitation were significantly higher than those in the control group. Conclusion Our findings suggest that virtual reality training combined with computer-based cognitive rehabilitation may be of additional benefit for treating cognitive impairment in stroke patients. PMID:22506159
Application of virtual reality graphics in assessment of concussion.
Slobounov, Semyon; Slobounov, Elena; Newell, Karl
2006-04-01
Abnormal balance in individuals suffering from traumatic brain injury (TBI) has been documented in numerous recent studies. However, the specific mechanisms causing balance deficits have not been systematically examined. This paper demonstrates the destabilizing effect of visual field motion, induced by virtual reality graphics, in concussed individuals but not in normal controls. Fifty-five student-athletes at risk for concussion participated in this study prior to injury, and 10 of these subjects who suffered MTBI were tested again on day 3, day 10, and day 30 after the incident. Postural responses to visual field motion were recorded using a virtual reality (VR) environment in conjunction with balance (AMTI force plate) and motion tracking (Flock of Birds) technologies. Two experimental conditions were introduced in which subjects passively viewed VR scenes or actively manipulated the visual field motion. Long-lasting destabilizing effects of visual field motion were revealed, although subjects were asymptomatic on standard balance tests. The findings demonstrate that advanced VR technology may detect residual symptoms of concussion at least 30 days post-injury.
Novel 3D/VR interactive environment for MD simulations, visualization and analysis.
Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P
2014-12-18
The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.
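For readers unfamiliar with driving LAMMPS programmatically, the sketch below uses the `lammps` Python module that ships with LAMMPS to run a short trajectory and dump frames that a visualization front end could render. The lattice and Lennard-Jones placeholder potential are illustrative only; they are not the silica setups or accelerated configurations used by the authors.

```python
# Assumes a LAMMPS build with its Python module installed.
from lammps import lammps

lmp = lammps()
for cmd in [
    "units metal",
    "boundary p p p",
    "lattice fcc 4.05",                    # placeholder lattice, not silica
    "region box block 0 5 0 5 0 5",
    "create_box 1 box",
    "create_atoms 1 box",
    "mass 1 26.98",
    "pair_style lj/cut 8.0",               # placeholder pair potential
    "pair_coeff 1 1 0.392 2.62",
    "velocity all create 300.0 12345",
    "fix 1 all nvt temp 300.0 300.0 0.1",
    "dump 1 all atom 100 traj.lammpstrj",  # frames for 3D/VR rendering
]:
    lmp.command(cmd)
lmp.command("run 1000")
```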
Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis
Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.
2014-01-01
The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300
An augmented reality system for upper-limb post-stroke motor rehabilitation: a feasibility study.
Assis, Gilda Aparecida de; Corrêa, Ana Grasielle Dionísio; Martins, Maria Bernardete Rodrigues; Pedrozo, Wendel Goes; Lopes, Roseli de Deus
2016-08-01
To determine the clinical feasibility of a system based on augmented reality for upper-limb (UL) motor rehabilitation of stroke participants. A physiotherapist instructed the participants to accomplish tasks in an augmented reality environment, where they could see themselves and their surroundings, as in a mirror. Two case studies were conducted. Participants were evaluated pre- and post-intervention. The first study evaluated UL motor function using the Fugl-Meyer scale. Data were compared using non-parametric sign tests and effect size. The second study used the gain in range of motion of shoulder flexion and abduction, assessed by computerized biophotogrammetry. At a significance level of 5%, Fugl-Meyer scores suggested a trend toward greater UL motor improvement in the augmented reality group than in the other. Moreover, an effect size of 0.86 suggested high practical significance for UL motor rehabilitation using the augmented reality system. The system provided promising results for UL motor rehabilitation, since improvements were observed in shoulder range of motion and speed. Implications for Rehabilitation: Gains in range of motion of flexion and abduction of the shoulder of post-stroke patients can be achieved through an augmented reality system containing exercises to promote mental practice. The NeuroR system provides a mental practice method combined with visual feedback for motor rehabilitation of chronic stroke patients, giving the illusion of injured upper-limb (UL) movements while the affected UL is resting. Its application is feasible and safe. This system can be used to improve UL rehabilitation, extending treatment beyond the traditional period of stroke patient hospitalization and rehabilitation.
Telescopic multi-resolution augmented reality
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold
2014-05-01
To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us to achieve not only multi-scale, telescopic visualization of real and virtual information but also the conduct of thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Toward this goal, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, to be pedagogically visualized in a zoom-in-and-out, consistent, multi-scale approximation.
Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram
2016-01-15
An augmented reality (AR) smartglass display combines real-world scenes with digital information, a capability that has enabled the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses, allowing the user to see the object, remove partial occlusions of the object, and obtain critical information about the object, such as 3D coordinates, which is not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can benefit many applications, including medical, military, transportation, and manufacturing.
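A minimal sketch of the recognition stage described above: HOG features of a sliding window classified by an SVM, using scikit-image and scikit-learn. The training patches are synthetic and the window size, stride, and HOG parameters are assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_vec(window):
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Hypothetical 64x64 training patches from reconstructed scenes.
rng = np.random.default_rng(0)
patches = rng.random((40, 64, 64))
labels = np.repeat([0, 1], 20)            # 0 = background, 1 = target object
clf = SVC(kernel="linear").fit([hog_vec(p) for p in patches], labels)

def detect(scene, win=64, stride=16):
    """Slide a window over the scene; return top-left corners of positives."""
    hits = []
    for y in range(0, scene.shape[0] - win + 1, stride):
        for x in range(0, scene.shape[1] - win + 1, stride):
            if clf.predict([hog_vec(scene[y:y + win, x:x + win])])[0] == 1:
                hits.append((x, y))       # locations to overlay in the glasses
    return hits

print(detect(rng.random((128, 128))))
```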
3D chromosome rendering from Hi-C data using virtual reality
NASA Astrophysics Data System (ADS)
Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing
2015-01-01
Most genome browsers display DNA linearly, using single-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher, or missed entirely, if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, enhancers folding over genes can only be fully visualized in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
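One common route from Hi-C interaction frequencies to renderable 3D coordinates is to convert contact frequency into an approximate pairwise distance (an inverse power law) and embed the distance matrix with multidimensional scaling. The authors' exact reconstruction method is not stated in the abstract, so the conversion below is an assumption.

```python
import numpy as np
from sklearn.manifold import MDS

def hic_to_coords(freq, alpha=1.0, eps=1e-9):
    """Embed Hi-C bins in 3D, assuming distance ~ 1 / frequency**alpha."""
    dist = 1.0 / np.power(freq + eps, alpha)
    np.fill_diagonal(dist, 0.0)
    mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)         # (n_bins, 3) coordinates to render

# Toy symmetric contact matrix, for illustration only.
rng = np.random.default_rng(0)
f = rng.random((20, 20)); f = (f + f.T) / 2.0
print(hic_to_coords(f).shape)              # (20, 3)
```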
Garrison, Jane R; Bond, Rebecca; Gibbard, Emma; Johnson, Marcia K; Simons, Jon S
2017-02-01
Reality monitoring refers to processes involved in distinguishing internally generated information from information presented in the external world, an activity thought to be based, in part, on assessment of activated features such as the amount and type of cognitive operations and perceptual content. Impairment in reality monitoring has been implicated in symptoms of mental illness and associated more widely with the occurrence of anomalous perceptions as well as false memories and beliefs. In the present experiment, the cognitive mechanisms of reality monitoring were probed in healthy individuals using a task that investigated the effects of stimulus modality (auditory vs visual) and the type of action undertaken during encoding (thought vs speech) on subsequent source memory. There was reduced source accuracy for auditory stimuli compared with visual, and when encoding was accompanied by thought as opposed to speech, and a greater rate of externalization than internalization errors that was stable across factors. Interpreted within the source monitoring framework (Johnson, Hashtroudi, & Lindsay, 1993), the results are consistent with the greater prevalence of clinically observed auditory than visual reality discrimination failures. The significance of these findings is discussed in light of theories of hallucinations, delusions and confabulation. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Yeh, Shih-Ching; Hwang, Wu-Yuin; Wang, Jin-Liang; Zhan, Shi-Yi
2013-01-01
This study intends to investigate how multi-symbolic representations (text, digits, and colors) could effectively enhance the completion of co-located/distant collaborative work in a virtual reality context. Participants' perceptions and behaviors were also studied. A haptics-enhanced virtual reality task was developed to conduct…
Thompson, Trevor; Steffert, Tony; Steed, Anthony; Gruzelier, John
2011-01-01
Case studies suggest hypnosis with a virtual reality (VR) component may be an effective intervention; although few follow-up randomized, controlled trials have been performed comparing such interventions with standard hypnotic treatments. Thirty-five healthy participants were randomized to self-hypnosis with VR imagery, standard self-hypnosis, or relaxation interventions. Changes in sleep, cortisol levels, and mood were examined. Self-hypnosis involved 10- to 20-min. sessions visualizing a healthy immune scenario. Trait absorption was also recorded as a possible moderator. Moderated regression indicated that both hypnosis interventions produced significantly lower tiredness ratings than relaxation when trait absorption was high. When trait absorption was low, VR resulted in significantly higher engagement ratings, although this did not translate to demonstrable improvement in outcome. Results suggest that VR imagery may increase engagement relative to traditional methods, but further investigation into its potential to enhance therapeutic efficacy is required.
Augmented reality in intraventricular neuroendoscopy.
Finger, T; Schaumann, A; Schulz, M; Thomale, Ulrich-W
2017-06-01
Individual planning of the entry point and the use of navigation have become more relevant in intraventricular neuroendoscopy. Navigated neuroendoscopic solutions are continuously improving. We describe experimentally measured accuracy and our first experience with augmented reality-enhanced navigated neuroendoscopy for intraventricular pathologies. Augmented reality-enhanced navigated endoscopy was tested for accuracy in an experimental setting. A 3D-printed head model with a right parietal lesion was scanned with thin-sliced computed tomography. Segmentation of the tumor lesion was performed using Scopis NovaPlan navigation software. An optical reference matrix is used to register the neuroendoscope's geometry and its field of view. The pre-planned region of interest (ROI) and trajectory are superimposed on the endoscopic image. The accuracy of the superimposed contour's fit to the endoscopically visualized lesion was acquired by measuring the deviation of the two midpoints from one another. The technique was subsequently used in 29 cases with CSF circulation pathologies. Navigation planning included defining the entry points, regions of interest and trajectories, superimposed as augmented reality on the endoscopic video screen during the intervention. Patients were evaluated for postoperative imaging, reoperations, and possible complications. The experimental setup revealed a deviation of the ROI's midpoint from the real target of 1.2 ± 0.4 mm. The clinical study included 18 cyst fenestrations, ten biopsies, seven endoscopic third ventriculostomies, six stent placements, and two shunt implantations, eventually combined in some patients. In cases of cyst fenestration, the cyst volume was significantly reduced postoperatively in all patients, by a mean of 47%. In biopsies, the diagnostic yield was 100%. Reoperations during a follow-up period of 11.4 ± 10.2 months were necessary in two cases. Complications included one postoperative hygroma and one insufficient fenestration. Augmented reality-navigated neuroendoscopy is accurate and feasible for clinical application. By integrating relevant planning information directly into the endoscope's field of view, the safety and efficacy of intraventricular neuroendoscopic surgery may be improved.
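The experimental accuracy figure (1.2 ± 0.4 mm) is a mean and standard deviation of midpoint deviations; the sketch below shows the computation over hypothetical trial measurements.

```python
import numpy as np

# Hypothetical per-trial midpoints (mm): the superimposed ROI contour versus
# the lesion as seen in the endoscopic image.
roi_mid    = np.array([[12.1, 8.4], [15.0, 9.9], [11.2, 7.8], [14.4, 10.5]])
lesion_mid = np.array([[12.9, 9.2], [14.1, 9.1], [12.0, 8.9], [13.6,  9.7]])

dev = np.linalg.norm(roi_mid - lesion_mid, axis=1)   # deviation per trial
print(f"accuracy: {dev.mean():.1f} +/- {dev.std(ddof=1):.1f} mm")
```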
Ragan, Eric D.; Bowman, Doug A.; Kopper, Regis; ...
2015-02-13
Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. The paper presents a framework for evaluating the effects of virtual reality fidelity based on an analysis of a simulation's display, interaction, and scenario components. Following this framework, we conducted a controlled experiment to test the effects of fidelity on training effectiveness for a visual scanning task. The experiment varied the levels of field of view and visual realism during a training phase and then evaluated scanning performance with the simulator's highest level of fidelity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual realism significantly affected target detection during training; higher field of view led to better performance and higher visual realism worsened performance. Additionally, the level of visual realism during training significantly affected learning of the prescribed visual scanning strategy, providing evidence that high visual realism was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training; evaluation in a more realistic setting may be necessary.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2006-01-01
The visual requirements for augmented reality or virtual environment displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14 deg, 28 deg, and 47 deg) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47 deg are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.
Research on three-dimensional visualization based on virtual reality and Internet
NASA Astrophysics Data System (ADS)
Wang, Zongmin; Yang, Haibo; Zhao, Hongling; Li, Jiren; Zhu, Qiang; Zhang, Xiaohong; Sun, Kai
2007-06-01
To disclose and display water information, a three-dimensional visualization system based on Virtual Reality (VR) and the Internet was developed, both to demonstrate a "digital water conservancy" application and to support routine reservoir management. To explore and mine in-depth information, after a high-resolution DEM of reliable quality was modeled, topographical analysis, visibility analysis, and reservoir volume computation were studied. In addition, parameters including slope, water level, and NDVI were selected to classify landslide-prone areas within the water-level-fluctuating zone of the reservoir. To establish the virtual reservoir scene, two methods were used to deliver immersion, interaction, and imagination (3I). The first virtual scene uses more detailed textures to increase realism on a graphical workstation running the virtual reality engine OpenSceneGraph (OSG); the second virtual scene, intended for Internet users, uses fewer details to assure fluent frame rates.
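The slope/NDVI-based classification of landslide-prone zones described above lends itself to a compact raster computation. The following is a minimal sketch, not the authors' implementation: it assumes a regular-grid DEM and a co-registered NDVI raster, and the thresholds (slope_min, ndvi_max, band) are illustrative stand-ins for whatever values the study used.

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Approximate per-cell slope (degrees) from a regular-grid DEM
    using central-difference gradients."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def landslide_prone(dem, ndvi, water_level, cell_size,
                    slope_min=25.0, ndvi_max=0.3, band=5.0):
    """Flag cells in the water-level-fluctuating zone that combine steep
    slope with sparse vegetation (all thresholds are illustrative)."""
    slope = slope_degrees(dem, cell_size)
    in_fluctuating_zone = np.abs(dem - water_level) <= band
    return in_fluctuating_zone & (slope >= slope_min) & (ndvi <= ndvi_max)

# Toy 50 x 50 grid with 30 m cells, standing in for the reservoir DEM
rng = np.random.default_rng(0)
dem = np.cumsum(rng.normal(0, 1.5, (50, 50)), axis=0) + 150.0
ndvi = rng.uniform(0, 1, dem.shape)
mask = landslide_prone(dem, ndvi, water_level=150.0, cell_size=30.0)
print(f"{mask.sum()} cells flagged as landslide-prone")
```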
Gomez, Jocelyn; Hoffman, Hunter G; Bistricky, Steven L; Gonzalez, Miriam; Rosenberg, Laura; Sampaio, Mariana; Garcia-Palacios, Azucena; Navarro-Haro, Maria V; Alhalabi, Wadee; Rosenberg, Marta; Meyer, Walter J; Linehan, Marsha M
2017-01-01
Sustaining a burn injury increases an individual's risk of developing psychological problems such as generalized anxiety, negative emotions, depression, acute stress disorder, or post-traumatic stress disorder. Despite the growing use of Dialectical Behavioral Therapy® (DBT®) by clinical psychologists, to date, there are no published studies using standard DBT® or DBT® skills learning for severe burn patients. The current study explored the feasibility and clinical potential of using Immersive Virtual Reality (VR) enhanced DBT® mindfulness skills training to reduce negative emotions and increase positive emotions of a patient with severe burn injuries. The participant was a hospitalized (in-house) 21-year-old Spanish-speaking Latino male patient being treated for a large (>35% TBSA) severe flame burn injury. Methods: The patient looked into a pair of Oculus Rift DK2 virtual reality goggles to perceive the computer-generated virtual reality illusion of floating down a river, with rocks, boulders, trees, mountains, and clouds, while listening to DBT® mindfulness training audios during 4 VR sessions over a 1 month period. Study measures were administered before and after each VR session. Results: As predicted, the patient reported increased positive emotions and decreased negative emotions. The patient also accepted the VR mindfulness treatment technique. He reported the sessions helped him become more comfortable with his emotions and he wanted to keep using mindfulness after returning home. Conclusions: Dialectical Behavioral Therapy is an empirically validated treatment approach that has proved effective with non-burn patient populations for treating many of the psychological problems experienced by severe burn patients. The current case study explored, for the first time, the use of immersive virtual reality enhanced DBT® mindfulness skills training with a burn patient. The patient reported reductions in negative emotions and increases in positive emotions after VR DBT® mindfulness skills training. Immersive Virtual Reality is becoming widely available to mainstream consumers, and thus has the potential to make this treatment available to a much wider number of patient populations, including severe burn patients. Additional development and controlled studies are needed.
Visualization of molecular structures using HoloLens-based augmented reality
Hoffman, MA; Provance, JB
2017-01-01
Biological molecules and biologically active small molecules are complex three-dimensional structures. Current flat-screen monitors are limited in their ability to convey the full three-dimensional characteristics of these molecules. Augmented reality devices, including the Microsoft HoloLens, offer an immersive platform to change how we interact with molecular visualizations. We describe a process to incorporate the three-dimensional structures of small molecules and complex proteins into the Microsoft HoloLens using aspirin and the human leukocyte antigen (HLA) as examples. Small molecular structures can be introduced into the HoloStudio application, which provides native support for rotating, resizing, and performing other interactions with these molecules. Larger molecules can be imported through the Unity game development platform and then Microsoft Visual Developer. The processes described here can be modified to import a wide variety of molecular structures into augmented reality systems and improve our comprehension of complex structural features. PMID:28815109
Visual Environment for Designing Interactive Learning Scenarios with Augmented Reality
ERIC Educational Resources Information Center
Mota, José Miguel; Ruiz-Rube, Iván; Dodero, Juan Manuel; Figueiredo, Mauro
2016-01-01
Augmented Reality (AR) technology allows the inclusion of virtual elements on a view of the actual physical environment to create a mixed reality in real time. This kind of technology can be used in educational settings. However, current AR authoring tools present several drawbacks, such as the lack of a mechanism for tracking the…
Lemole, G Michael; Banerjee, P Pat; Luciano, Cristian; Neckrysh, Sergey; Charbel, Fady T
2007-07-01
Mastery of the neurosurgical skill set involves many hours of supervised intraoperative training. Convergence of political, economic, and social forces has limited neurosurgical resident operative exposure. There is need to develop realistic neurosurgical simulations that reproduce the operative experience, unrestricted by time and patient safety constraints. Computer-based, virtual reality platforms offer just such a possibility. The combination of virtual reality with dynamic, three-dimensional stereoscopic visualization, and haptic feedback technologies makes realistic procedural simulation possible. Most neurosurgical procedures can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure. We use the ImmersiveTouch (ImmersiveTouch, Inc., Chicago, IL) virtual reality platform, developed at the University of Illinois at Chicago, to simulate the task of ventriculostomy catheter placement as a proof-of-concept. Computed tomographic data are used to create a virtual anatomic volume. Haptic feedback offers simulated resistance and relaxation with passage of a virtual three-dimensional ventriculostomy catheter through the brain parenchyma into the ventricle. A dynamic three-dimensional graphical interface renders changing visual perspective as the user's head moves. The simulation platform was found to have realistic visual, tactile, and handling characteristics, as assessed by neurosurgical faculty, residents, and medical students. We have developed a realistic, haptics-based virtual reality simulator for neurosurgical education. Our first module recreates a critical component of the ventriculostomy placement task. This approach to task simulation can be assembled in a modular manner to reproduce entire neurosurgical procedures.
Transfer of perceptual adaptation to space sickness: What enhances an individual's ability to adapt?
NASA Technical Reports Server (NTRS)
1993-01-01
The objectives of this project were to explore systematically the determiners of transfer of perceptual adaptation as these principles might apply to the space adaptation syndrome. The perceptual experience of an astronaut exposed to the altered gravitational forces of spaceflight shares much with that of the laboratory subject exposed to optically induced visual rearrangement, tilt, and dynamic motion illusions such as vection, and with the experiences and symptoms reported by trainees exposed to the compellingly realistic visual imagery of flight simulators and virtual reality systems. In both cases the observer is confronted with a variety of inter- and intrasensory conflicts that initially disrupt perception as well as behavior, and also produce symptoms of motion sickness.
Visual Prostheses: The Enabling Technology to Give Sight to the Blind
Maghami, Mohammad Hossein; Sodagar, Amir Masoud; Lashay, Alireza; Riazi-Esfahani, Hamid; Riazi-Esfahani, Mohammad
2014-01-01
Millions of patients are either slowly losing their vision or are already blind due to retinal degenerative diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD) or because of accidents or injuries. Employment of artificial means to treat extreme vision impairment has come closer to reality during the past few decades. Currently, many research groups work towards effective solutions to restore a rudimentary sense of vision to the blind. Aside from the efforts being put on replacing damaged parts of the retina by engineered living tissues or microfabricated photoreceptor arrays, implantable electronic microsystems, referred to as visual prostheses, are also sought as promising solutions to restore vision. From a functional point of view, visual prostheses receive image information from the outside world and deliver it to the natural visual system, enabling the subject to receive a meaningful perception of the image. This paper provides an overview of technical design aspects and clinical test results of visual prostheses, highlights past and recent progress in realizing chronic high-resolution visual implants as well as some technical challenges confronted when trying to enhance the functional quality of such devices. PMID:25709777
Harjunen, Ville J.; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M.
2017-01-01
Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver’s body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in the irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands was found to amplify attention-related negativity on the somatosensory N140 and cross-modal interaction in somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early- and late-sensory processing, as well as response inhibition, but does not affect later attention- and memory-related P3 activity. Finally, the experiments provide commensurable scenarios for estimating the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements. PMID:28275346
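Component effects such as the N140, P200, and N200 reported above are conventionally quantified as mean amplitudes within fixed latency windows. A minimal sketch follows; the latency windows and the synthetic waveform are illustrative assumptions, not the study's analysis pipeline.

```python
import numpy as np

def component_amplitude(erp, times, window):
    """Mean amplitude (µV) of an averaged ERP inside a latency window (s)."""
    lo, hi = window
    sel = (times >= lo) & (times <= hi)
    return erp[sel].mean()

# Toy averaged ERP: 1 kHz sampling, -100 ms to 600 ms around stimulus onset
fs = 1000.0
times = np.arange(-0.1, 0.6, 1.0 / fs)
rng = np.random.default_rng(1)
erp = rng.normal(0.0, 0.5, times.size)               # stand-in for a real average
erp -= 3.0 * np.exp(-((times - 0.14) / 0.02) ** 2)   # synthetic N140 deflection

# Illustrative latency windows; real studies define them a priori
windows = {"N140": (0.12, 0.17), "P200": (0.18, 0.25), "N200": (0.20, 0.35)}
for name, win in windows.items():
    print(name, round(component_amplitude(erp, times, win), 2), "µV")
```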
Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality
NASA Astrophysics Data System (ADS)
Hua, Hong
2017-05-01
Developing head-mounted displays (HMDs) that offer uncompromised optical pathways to both the digital and physical worlds, without encumbrance or discomfort, poses many grand challenges, both technological and human-factors-related. Among them, minimizing visual discomfort is one of the key obstacles. A key contributing factor to visual discomfort is the inability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I provide a summary of the various optical approaches to enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).
Djukic, Tijana; Mandic, Vesna; Filipovic, Nenad
2013-12-01
Medical education, training and preoperative diagnostics can be drastically improved with advanced technologies, such as virtual reality. The method proposed in this paper enables medical doctors and students to visualize and manipulate three-dimensional models created from CT or MRI scans, and also to analyze the results of fluid flow simulations. Simulation of fluid flow using the finite element method is performed, in order to compute the shear stress on the artery walls. The simulation of motion through the artery is also enabled. The virtual reality system proposed here could shorten the length of training programs and make the education process more effective.
Mobile Virtual Reality: A Solution for Big Data Visualization
NASA Astrophysics Data System (ADS)
Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.
2015-12-01
Pursuits in the geological sciences and other branches of quantitative science often require data visualization frameworks that are in continual need of improvement and new ideas. Virtual reality is a visualization medium with a large audience, originally designed for gaming. Virtual reality can be delivered in CAVE-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused on the mass market; the Oculus Rift is the first of this kind of mobile device. The Unity game engine makes it possible to convert data files into a mesh of isosurfaces rendered in 3D. A user immersed in the virtual reality can move within and around the data using arrow keys and other steering devices similar to those employed with an Xbox. With the introduction of products like the Oculus Rift and HoloLens, combined with ever-increasing mobile computing power, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As new products like the Surface Pro 4 and other high-powered yet very mobile computers reach the market, the RAM and graphics card capacity necessary to run these models is increasingly available, opening doors to this new reality. The computing requirements are a mere 8 GB of RAM and a 2 GHz CPU, which many mobile computers now exceed. Using the Unity 3D software to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed while wearing the Oculus Rift device. This new method of analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones and jewelry. Using hologram technology to capture in high resolution the 3D shape, color, and imperfections of minerals and stones, detailed review and analysis can be done remotely without ever seeing the real thing. This strategy could be a game-changer for shoppers, who would no longer have to go to the store.
Advanced Helmet Mounted Display (AHMD) for simulator applications
NASA Astrophysics Data System (ADS)
Sisodia, Ashok; Riser, Andrew; Bayer, Michael; McGuire, James P.
2006-05-01
The Advanced Helmet Mounted Display (AHMD), an augmented reality visual system first presented at last year's Cockpit and Future Displays for Defense and Security conference, has now been evaluated in a number of military simulator applications and by L-3 Link Simulation and Training. This paper presents the preliminary results of these evaluations and describes current and future simulator and training applications for HMD technology. The AHMD blends computer-generated data (symbology, synthetic imagery, enhanced imagery) with the actual and simulated visible environment. The AHMD is designed specifically for highly mobile, deployable, minimum-resource-demanding, reconfigurable virtual training systems to satisfy the military's in-theater warrior readiness objective. A description of the innovative AHMD system and future enhancements is discussed.
The Virtual Pelvic Floor, a tele-immersive educational environment.
Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.
1999-01-01
This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting table format virtual reality displays, are networked together providing an environment where teacher and students share a high quality three-dimensional anatomical model, and are able to converse, see each other, and to point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378
Rehabilitation of Visual and Perceptual Dysfunction After Severe Traumatic Brain Injury
2012-03-26
This report (Award Number W81XWH-11-2-0082) covers rehabilitation of visual and perceptual dysfunction after severe traumatic brain injury. The recoverable fragments describe collision judgments in a virtual mall walking simulator: the virtual mall is a virtual reality model of a real shopping mall used for a collision judgment task, with prisms providing expanded vision (Figures 4 and 5b illustrate the virtual mall set-up and the collision judgment task).
Alternate Reality Gaming: Teaching Visual Art Skills to Multiple Subject Credential Candidates
ERIC Educational Resources Information Center
Engdahl, Eric
2014-01-01
Through Alternate Reality Games, students engage in narrative, play, cooperation, creativity, and joy by creating storylines focusing on one element of art. In 2008, the Luce Center at the Smithsonian American Art Museum presented Ghosts of a Chance. The first Alternate Reality Game (ARG) to be hosted by a museum, it was designed to deepen and…
Teaching and assessing competence in cataract surgery.
Henderson, Bonnie An; Ali, Rasha
2007-02-01
To review recent literature regarding innovative techniques and methods of teaching and assessing competence and skill in cataract surgery. The need for assessment of surgical competency and the requirement of wet lab facilities in ophthalmic training programs are being increasingly emphasized. Authors have proposed the use of standardized forms to collect objective and subjective data regarding residents' surgical performance. Investigators have reported methods to improve visualization of cadaver and animal eyes for the wet lab, including the use of capsular dyes. The discussion of virtual reality as a teaching tool for surgical programs continues. Studies have shown that residents trained on a laparoscopic simulator outperformed nontrained residents during actual surgery in both surgical times and numbers of errors. Besides virtual reality systems, a program is being developed to separate the cognitive portion from the physical aspects of surgery. Another program couples surgical videos with three-dimensional animation to enhance the trainees' topographical understanding. Proper assessment of surgical competency is becoming an important focus of training programs. The use of surgical data forms may assist in standardizing objective assessments. Virtual reality, cognitive curriculum and animation video programs can be helpful in improving residents' surgical performance.
A systematic review of phacoemulsification cataract surgery in virtual reality simulators.
Lam, Chee Kiang; Sundaraj, Kenneth; Sulaiman, Mohd Nazri
2013-01-01
The aim of this study was to review the capability of virtual reality simulators in the application of phacoemulsification cataract surgery training. Our review included the scientific publications on cataract surgery simulators that had been developed by different groups of researchers along with commercialized surgical training products, such as EYESI® and PhacoVision®. The review covers the simulation of the main cataract surgery procedures, i.e., corneal incision, capsulorrhexis, phacosculpting, and intraocular lens implantation in various virtual reality surgery simulators. Haptics realism and visual realism of the procedures are the main elements in imitating the actual surgical environment. The involvement of ophthalmology in research on virtual reality since the early 1990s has made a great impact on the development of surgical simulators. Most of the latest cataract surgery training systems are able to offer high fidelity in visual feedback and haptics feedback, but visual realism, such as the rotational movements of an eyeball with response to the force applied by surgical instruments, is still lacking in some of them. The assessment of the surgical tasks carried out on the simulators showed a significant difference in the performance before and after the training.
NASA Astrophysics Data System (ADS)
Schonlau, William J.
2006-05-01
An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head-mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera-equipped teleoperated vehicle. The conventional approach, where imagery from a narrow-field camera onboard the vehicle is presented to the user on a small rectangular screen, is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation, and presented via a wide-field eyewear display approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion-free viewing of the region appropriate to the user's current head pose is presented, and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo-from-motion algorithms.
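The resampling step described above, mapping sensed head orientation onto a distortion-free window of the panoramic image, can be sketched compactly. The following assumes an equirectangular panorama, a simple pinhole projection, and nearest-neighbour lookup; it is an illustration of the general idea, not the paper's actual model.

```python
import numpy as np

def view_from_panorama(pano, yaw_deg, pitch_deg, fov_deg=90.0, out_size=(240, 320)):
    """Resample a rectilinear view from an equirectangular panorama for a
    given head yaw/pitch (simplified pinhole model, nearest-neighbour lookup)."""
    h, w = pano.shape[:2]
    rows, cols = out_size
    f = (cols / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # focal length, pixels
    u, v = np.meshgrid(np.arange(cols) - cols / 2.0, np.arange(rows) - rows / 2.0)
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    x, y, z = u, v, np.full_like(u, f)                    # per-pixel ray directions
    # Rotate rays by head pose: pitch about the x-axis, then yaw about the y-axis
    y2 = y * np.cos(pitch) - z * np.sin(pitch)
    z2 = y * np.sin(pitch) + z * np.cos(pitch)
    x3 = x * np.cos(yaw) + z2 * np.sin(yaw)
    z3 = -x * np.sin(yaw) + z2 * np.cos(yaw)
    lon = np.arctan2(x3, z3)                              # longitude, -pi..pi
    lat = np.arctan2(y2, np.hypot(x3, z3))                # latitude, -pi/2..pi/2
    px = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    py = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return pano[py, px]

# Toy panorama standing in for a cylindrical/spherical camera image
pano = np.random.default_rng(2).integers(0, 255, (512, 1024), dtype=np.uint8)
print(view_from_panorama(pano, yaw_deg=30.0, pitch_deg=-10.0).shape)  # (240, 320)
```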
Silverstein, Jonathan C; Dech, Fred; Kouchoukos, Philip L
2004-01-01
Radiological volumes are typically reviewed by surgeons using cross-sections and iso-surface reconstructions. Applications that combine collaborative stereo volume visualization with symbolic anatomic information and data fusion would expand surgeons' capabilities in interpretation of data and in planning treatment. Such an application has not been seen clinically. We are developing methods to systematically combine symbolic anatomy (term hierarchies and iso-surface atlases) with patient data using data fusion. We describe our progress toward integrating these methods into our collaborative virtual reality application. The fully combined application will be a feature-rich stereo collaborative volume visualization environment for use by surgeons in which DICOM datasets will self-report underlying anatomy with visual feedback. Using hierarchical navigation of SNOMED-CT anatomic terms integrated with our existing Tele-immersive DICOM-based volumetric rendering application, we will display polygonal representations of anatomic systems on the fly from menus that query a database. The methods and tools involved in this application development are SNOMED-CT, DICOM, VISIBLE HUMAN, volumetric fusion and C++ on a Tele-immersive platform. This application will allow us to identify structures and display polygonal representations from atlas data overlaid with the volume rendering. First, atlas data is automatically translated, rotated, and scaled to the patient data during loading using a public domain volumetric fusion algorithm. This generates a modified symbolic representation of the underlying canonical anatomy. Then, through the use of collision detection or intersection testing of various transparent polygonal representations, the polygonal structures are highlighted into the volumetric representation while the SNOMED names are displayed. Thus, structural names and polygonal models are associated with the visualized DICOM data. This novel juxtaposition of information promises to expand surgeons' abilities to interpret images and plan treatment.
Subjective visual vertical assessment with mobile virtual reality system.
Ulozienė, Ingrida; Totilienė, Milda; Paulauskas, Andrius; Blažauskas, Tomas; Marozas, Vaidotas; Kaski, Diego; Ulozas, Virgilijus
2017-01-01
The subjective visual vertical (SVV) is a measure of a subject's perceived verticality and a sensitive test of vestibular dysfunction. Despite this, and consequent upon technical and logistical limitations, SVV testing has not entered mainstream clinical practice. The aim of the study was to develop a mobile virtual reality based system for the SVV test, evaluate the suitability of different controllers, and assess the system's usability in practical settings. In this study, we describe a novel virtual reality based system that has been developed to test SVV using integrated software and hardware, and report normative values across a healthy population. Participants wore a mobile virtual reality headset in order to observe a 3D stimulus presented across separate conditions: static, dynamic, and an immersive real-world ("boat in the sea") SVV test. The virtual reality environment was controlled by the tester using Bluetooth-connected controllers. Participants controlled the movement of a vertical arrow, using either a gesture-control armband or a general-purpose gamepad, to indicate perceived verticality. We aimed to compare the two object-control methods, determine normal values and compare them with literature data, evaluate the developed system with the System Usability Scale (SUS) questionnaire, and assess possible virtually induced dizziness on a subjective visual analog scale. There were no statistically significant differences in SVV values during static, dynamic, and virtual reality stimulus conditions obtained using the two different controllers, and the results are comparable to those previously reported in the literature using alternative methodologies. The SUS scores for the system were high, with a median of 82.5 for the Myo controller and of 95.0 for the gamepad controller, representing a statistically significant difference between the two controllers (P<0.01). The median virtual reality-induced dizziness for both devices was 0.7. The mobile virtual reality based system for the subjective visual vertical test is accurate and applicable in the clinical environment. The gamepad-based virtual object control method was preferred by users. The tests were well tolerated, with low dizziness scores in the majority of patients.
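Whatever controller is used, the SVV outcome reduces to the angular error of the participant's final arrow settings relative to true vertical. A minimal summary sketch follows, with hypothetical trial data:

```python
import numpy as np

def svv_summary(set_angles_deg):
    """Summarize subjective visual vertical settings: mean signed error
    from true vertical (0 deg) and its trial-to-trial variability."""
    a = np.asarray(set_angles_deg, dtype=float)
    return {"mean_error_deg": a.mean(), "sd_deg": a.std(ddof=1)}

# Final arrow orientations from ten hypothetical trials (deg; + = clockwise)
trials = [1.8, -0.4, 2.1, 0.9, -1.2, 1.5, 0.3, 2.4, -0.6, 1.1]
print(svv_summary(trials))
```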
Affective three-dimensional brain-computer interface created using a prism array-based display
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Park, Min-Chul
2014-12-01
To avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we applied a prism array-based display when presenting three-dimensional (3-D) objects. Emotional pictures were used as visual stimuli to increase the signal-to-noise ratios of steady-state visually evoked potentials (SSVEPs), because selective attention involuntarily motivated by affective mechanisms can enhance SSVEP amplitudes, thus producing increased interaction efficiency. Ten male and nine female participants voluntarily participated in our experiments. Participants were asked to control objects under three viewing conditions: two-dimensional (2-D), stereoscopic 3-D, and prism. The participants performed each condition in a counterbalanced order. One-way repeated measures analysis of variance showed significant increases in the positive predictive values in the prism condition compared to the 2-D and 3-D conditions. Participants' subjective ratings of realness and engagement were also significantly greater in the prism condition than in the 2-D and 3-D conditions, while the ratings for visual fatigue were significantly reduced in the prism condition compared to the 3-D condition. The proposed methods are expected to enhance the sense of reality in 3-D space without causing critical visual fatigue. In addition, people who are especially susceptible to stereoscopic 3-D may be able to use the affective brain-computer interface.
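SSVEP gains of the kind exploited here are typically quantified as a spectral signal-to-noise ratio at the flicker frequency. The sketch below illustrates one common formulation (peak bin over neighbouring bins); the sampling rate, stimulus frequency, and synthetic signal are assumptions for illustration.

```python
import numpy as np

def ssvep_snr(eeg, fs, stim_hz, n_neighbors=10):
    """SNR of the SSVEP: spectral amplitude at the stimulus frequency
    divided by the mean amplitude of neighbouring bins."""
    spec = np.abs(np.fft.rfft(eeg * np.hanning(eeg.size)))
    freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)
    k = np.argmin(np.abs(freqs - stim_hz))
    neighbours = np.r_[spec[max(k - n_neighbors, 1):k],
                       spec[k + 1:k + 1 + n_neighbors]]
    return spec[k] / neighbours.mean()

# 4 s of synthetic EEG at 250 Hz with a 12 Hz SSVEP buried in noise
fs, stim_hz = 250.0, 12.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(3)
eeg = 0.8 * np.sin(2 * np.pi * stim_hz * t) + rng.normal(0, 1.0, t.size)
print(round(ssvep_snr(eeg, fs, stim_hz), 1))
```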
Modulation of visually evoked movement responses in moving virtual environments.
Reed-Jones, Rebecca J; Vallis, Lori Ann
2009-01-01
Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.
Matching optical flow to motor speed in virtual reality while running on a treadmill.
Caramenti, Martina; Lafortuna, Claudio L; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine
2018-01-01
We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement into physical activity for healthier lifestyles and disease prevention and care.
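The staircase logic described above is straightforward to prototype. Below is a minimal 1-up/1-down sketch converging on the PSE; the simulated observer, step size, and reversal count are illustrative assumptions rather than the authors' parameters (the observer's ~20% bias at 10 km/h is loosely modeled on the reported 15-31% range).

```python
import numpy as np

def staircase_pse(respond, start, step=0.5, n_reversals=8):
    """1-up/1-down staircase estimating the Point of Subjective Equality.
    `respond(v)` returns True if the scene at visual speed v is judged
    FASTER than the running speed; PSE = mean of the later reversals."""
    v, direction, reversals = start, -1, []
    while len(reversals) < n_reversals:
        faster = respond(v)
        new_dir = -1 if faster else +1       # judged too fast -> slow down
        if new_dir != direction:
            reversals.append(v)              # direction change = reversal
        direction = new_dir
        v += direction * step
    return float(np.mean(reversals[2:]))     # discard early reversals

# Simulated observer who underestimates visual speed: running at 10 km/h,
# the scene feels equivalent only around 12 km/h (illustrative values)
rng = np.random.default_rng(4)
observer = lambda v: v + rng.normal(0, 0.3) > 12.0
print(round(staircase_pse(observer, start=10.0), 2), "km/h")
```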
Comparative evaluation of monocular augmented-reality display for surgical microscopes.
Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N
2012-01-01
Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.
Multisensory Integration in the Virtual Hand Illusion with Active Movement
Satoh, Satoru; Hachimura, Kozaburo
2016-01-01
Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality. PMID:27847822
Enhancing Education through Mobile Augmented Reality
ERIC Educational Resources Information Center
Joan, D. R. Robert
2015-01-01
In this article, the author discusses mobile augmented reality and how it can enhance education. The aim of the present study was to give some general information about mobile augmented reality, which helps to boost education. The study also examines the mobile networks used on the institution's campus as well…
Analysis of brain activity and response during monoscopic and stereoscopic visualization
NASA Astrophysics Data System (ADS)
Calore, Enrico; Folgieri, Raffaella; Gadia, Davide; Marini, Daniele
2012-03-01
Stereoscopic visualization in cinematography and Virtual Reality (VR) creates an illusion of depth by means of two bidimensional images corresponding to different views of a scene. This perceptual trick is used to enhance the emotional response and the sense of presence and immersivity of the observers. An interesting question is if and how it is possible to measure and analyze the level of emotional involvement and attention of the observers during a stereoscopic visualization of a movie or of a virtual environment. The research aims represent a challenge, due to the large number of sensorial, physiological and cognitive stimuli involved. In this paper we begin this research by analyzing possible differences in the brain activity of subjects during the viewing of monoscopic or stereoscopic contents. To this aim, we have performed some preliminary experiments collecting electroencephalographic (EEG) data of a group of users using a Brain-Computer Interface (BCI) during the viewing of stereoscopic and monoscopic short movies in a VR immersive installation.
Monte Carlo simulations of medical imaging modalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estes, G.P.
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
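As a toy instance of such first-principles imaging simulation, the sketch below forms a radiograph by Monte Carlo sampling of photon survival through a 2D attenuation map. It is absorption-only and mono-energetic, far simpler than MCNP's continuous-energy transport, and the phantom values are illustrative.

```python
import numpy as np

def mc_radiograph(mu, dx, photons_per_ray=10_000, seed=0):
    """Monte Carlo radiograph of a 2D attenuation map `mu` (1/cm): for each
    detector column, count photons surviving straight-line transport
    (absorption only, no scatter, mono-energetic beam)."""
    rng = np.random.default_rng(seed)
    optical_depth = mu.sum(axis=0) * dx        # line integral per column
    p_survive = np.exp(-optical_depth)         # Beer-Lambert survival
    counts = rng.binomial(photons_per_ray, p_survive)
    return counts / photons_per_ray

# Toy phantom: water-like slab (0.2 /cm) with a dense inclusion (1.0 /cm)
mu = np.full((100, 128), 0.2)
mu[40:60, 50:70] = 1.0
image = mc_radiograph(mu, dx=0.1)
print(image.min(), image.max())                # inclusion columns are darker
```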
Virtual reality as a tool for cross-cultural communication: an example from military team training
NASA Astrophysics Data System (ADS)
Downes-Martin, Stephen; Long, Mark; Alexander, Joanna R.
1992-06-01
A major problem with communication across cultures, whether professional or national, is that simple language translation is often insufficient to communicate the concepts. This is especially true when the communicators come from highly specialized fields of knowledge or from national cultures with long histories of divergence. This problem becomes critical when the goal of the communication is international negotiation dealing with such high-risk items as arms negotiation or trade wars. Virtual Reality technology has considerable potential for facilitating communication across cultures, by immersing the communicators within multiple visual representations of the concepts, and providing control over those representations. Military distributed team training provides a model for virtual reality suitable for cross-cultural communication such as negotiation. In both team training and negotiation, the participants must cooperate, agree on a set of goals, and achieve mastery over the concepts being negotiated. Team training technologies suitable for supporting cross-cultural negotiation exist (branch wargaming, computer image generation and visualization, distributed simulation), and have developed along different lines than traditional virtual reality technology. Team training de-emphasizes the realism of physiological interfaces between the human and the virtual reality, and emphasizes the interaction of humans with each other and with intelligent simulated agents within the virtual reality. This approach to virtual reality is suggested as being more fruitful for future work.
Visualization of spatial-temporal data based on 3D virtual scene
NASA Astrophysics Data System (ADS)
Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang
2009-10-01
The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a three-dimensional virtual scene, using three-dimensional visualization technology combined with GIS, so that people's abilities to cognize time and space are enhanced and improved through dynamic symbol design and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate the situations produced by changes in the spatial location and property information of geographical entities over time, explore and analyze their movement and transformation rules through interaction, and replay history or forecast the future. In this paper, the main research objects are vehicle tracks and typhoon paths as spatial-temporal data; through three-dimensional dynamic simulation of these tracks, we realize timely monitoring of their trends and replay of their historical tracks. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information: they not only clearly show changes and developments in a situation, but can also be used for the prediction and deduction of future developments and changes.
Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.
2016-01-01
Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift in neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220
Virtual reality: a reality for future military pilotage?
NASA Astrophysics Data System (ADS)
McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.
2009-05-01
Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
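The display-budget arithmetic above is easy to reproduce under stated assumptions. Taking 20/20 acuity as 1 arcminute per pixel (60 px/deg), an illustrative 150 x 80 degree field roughly reproduces the 43 MP HMD figure, and a 360 x 120 degree wrap-around surface lands near the 150 MP CAVE figure; the exact fields of view assumed in the paper are not given, so these are reconstructions.

```python
def pixels_needed(h_fov_deg, v_fov_deg, acuity_arcmin=1.0):
    """Pixels needed to render a field of view at a given visual acuity
    (20/20 vision resolves about 1 arcminute per pixel, i.e., 60 px/deg)."""
    px_per_deg = 60.0 / acuity_arcmin
    return (h_fov_deg * px_per_deg) * (v_fov_deg * px_per_deg)

# Illustrative fields of view (assumptions, not the paper's stated values)
hmd = pixels_needed(150, 80)      # near-full human visual field on an HMD
cave = pixels_needed(360, 120)    # wrap-around immersive CAVE surfaces
print(f"HMD: {hmd / 1e6:.0f} MP, CAVE: {cave / 1e6:.0f} MP")  # ~43 MP, ~156 MP
```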
Besharati Tabrizi, Leila; Mahvash, Mehran
2015-07-01
An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. Images of segmented brain tumors in different localizations and sizes were created in 10 cases and were projected to a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom. This difference was considered the projection error. Moreover, the image projection technique was intraoperatively applied in 5 patients and was compared with a standard navigation system. Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2-7 minutes). The mean projection error was 0.8 ± 0.25 mm. There were no significant differences in accuracy according to the localization and size of the tumor. Clinical feasibility and reliability of the augmented reality system were demonstrated intraoperatively in 5 patients (projection error 1.2 ± 0.54 mm). The augmented reality system is accurate and reliable for the intraoperative projection of images to the head, skull, and brain surface. The ergonomic advantage of this technique improves the planning of neurosurgical procedures and enables the surgeon to use direct visualization for image-guided neurosurgery.
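The projection-error metric used in the phantom experiment, the distance between corresponding marker positions on the virtual image and on the phantom, reduces to a per-marker Euclidean distance. A minimal sketch with hypothetical coordinates:

```python
import numpy as np

def projection_error(projected_pts, phantom_pts):
    """Per-marker Euclidean distance (mm) between positions measured on the
    projected (virtual) image and on the physical phantom, plus mean and SD."""
    d = np.linalg.norm(np.asarray(projected_pts) - np.asarray(phantom_pts), axis=1)
    return d, d.mean(), d.std(ddof=1)

# Five fiducial markers, hypothetical 2D coordinates in mm
virtual = [(10.0, 5.2), (42.1, 7.9), (25.3, 30.1), (8.8, 28.0), (40.2, 31.5)]
phantom = [(10.6, 5.0), (41.5, 8.3), (25.9, 29.4), (9.5, 28.4), (39.8, 32.2)]
d, mean, sd = projection_error(virtual, phantom)
print(f"projection error: {mean:.2f} ± {sd:.2f} mm")
```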
Kanumuri, Prathima; Ganai, Sabha; Wohaibi, Eyad M.; Bush, Ronald W.; Grow, Daniel R.
2008-01-01
Background: The study aim was to compare the effectiveness of virtual reality and computer-enhanced video-scopic training devices for training novice surgeons in complex laparoscopic skills. Methods: Third-year medical students received instruction on laparoscopic intracorporeal suturing and knot tying and then underwent a pretraining assessment of the task using a live porcine model. Students were then randomized to objectives-based training on either the virtual reality (n=8) or computer-enhanced (n=8) training devices for 4 weeks, after which the assessment was repeated. Results: Posttraining performance had improved compared with pretraining performance in both task completion rate (94% versus 18%; P<0.001*) and time [181±58 (SD) versus 292±24*]. Performance of the 2 groups was comparable before and after training. Of the subjects, 88% thought that haptic cues were important in simulators. Both groups agreed that their respective training systems were effective teaching tools, but computer-enhanced device trainees were more likely to rate their training as representative of reality (P<0.01). Conclusions: Training on virtual reality and computer-enhanced devices had equivalent effects on skills improvement in novices. Despite the perception that haptic feedback is important in laparoscopic simulation training, its absence in the virtual reality device did not impede acquisition of skill. PMID:18765042
Head-mounted spatial instruments: Synthetic reality or impossible dream
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Grunwald, Arthur; Velger, Mordekhai
1988-01-01
A spatial instrument is defined as a display device which has been either geometrically or symbolically enhanced to better enable a user to accomplish a particular task. Research conducted over the past several years on 3-D spatial instruments has shown that perspective displays, even when viewed from the correct viewpoint, are subject to systematic viewer biases. These biases interfere with correct spatial judgements of the presented pictorial information. It has also been found that deliberate, appropriate geometric distortion of the perspective projection of an image can improve user performance. These two findings raise intriguing questions concerning the design of head-mounted spatial instruments. The design of such instruments may not only require the introduction of compensatory distortions to remove the naturally occurring biases but also may significantly benefit from the introduction of artificial distortions which enhance performance. These image manipulations, however, can cause a loss of visual-vestibular coordination and induce motion sickness. Additionally, adaptation to these manipulations is apt to be impaired by computational delays in the image display. Consequently, the design of head-mounted spatial instruments will require an understanding of the tolerable limits of visual-vestibular discord.
Visualization Improves Supraclavicular Access to the Subclavian Vein in a Mixed Reality Simulator.
Sappenfield, Joshua Warren; Smith, William Brit; Cooper, Lou Ann; Lizdas, David; Gonsalves, Drew B; Gravenstein, Nikolaus; Lampotang, Samsun; Robinson, Albert R
2018-07-01
We investigated whether visual augmentation (3D, real-time, color visualization) of a procedural simulator improved performance during training in the supraclavicular approach to the subclavian vein, which is not as widely known or used as its infraclavicular counterpart. To train anesthesiology residents to access a central vein, a mixed reality simulator with emulated ultrasound imaging was created using an anatomically authentic, 3D-printed, physical mannequin based on a computed tomographic scan of an actual human. The simulator has a corresponding 3D virtual model of the neck and upper chest anatomy. Hand-held instruments such as a needle, an ultrasound probe, and a virtual camera controller are directly manipulated by the trainee and tracked and recorded with submillimeter resolution via miniature, 6-degrees-of-freedom magnetic sensors. After Institutional Review Board approval, 69 anesthesiology residents and faculty were enrolled and received scripted instructions on how to perform subclavian venous access using the supraclavicular approach based on anatomic landmarks. The volunteers were randomized into 2 cohorts. The first used real-time 3D visualization concurrently with trial 1, but not during trial 2. The second did not use real-time 3D visualization concurrently with trial 1 or 2; however, after trial 2, they observed a 3D visualization playback of trial 2 before performing trial 3 without visualization. An automated scoring system based on time, success, and errors/complications generated objective performance scores. Nonparametric statistical methods were used to compare the scores between subsequent trials, differences between groups (real-time visualization versus no visualization versus delayed visualization), and improvement in scores between trials within groups. Although the real-time visualization group demonstrated significantly better performance than the delayed visualization group on trial 1 (P = .01), there was no group-dependent difference in gain scores between performance on the first trial and performance on the final trial (P = .13). In the delayed visualization group, the difference in performance between trial 1 and trial 2 was not significant (P = .09); reviewing performance on trial 2 before trial 3 resulted in improved performance when compared to trial 1 (P < .0001). There was no significant difference in median scores (P = .13) between the real-time visualization and delayed visualization groups for the last trial, after both groups had received visualization. Participants reported a significant improvement in confidence in performing supraclavicular access to the subclavian vein. Standard deviations of scores, a measure of performance variability, decreased in the delayed visualization group after viewing the visualization. Real-time visual augmentation (3D visualization) in the mixed reality simulator improved performance during supraclavicular access to the subclavian vein. No difference was seen in the final trial between the group that received real-time visualization and the group that had delayed visualization playback of their prior attempt. Training with the mixed reality simulator improved participant confidence in performing an unfamiliar technique.
Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.
Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A
2007-06-01
This study evaluates the influence of visual-spatial perception on the laparoscopic performance of novices with a virtual reality simulator (LapSim(R)). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test(R) and Stumpf-Fay Cube Perspectives Test(R)), and laparoscopic skills were assessed objectively while performing 1-h practice sessions on the LapSim(R), comprising coordination, cutting, and clip application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores, respectively. The degree of visual-spatial perception correlated significantly with laparoscopic performance scores on the LapSim(R). Participants with a high degree of spatial perception (Group A) performed the tasks faster than those (Group B) who had a low degree of spatial perception (p = 0.001). Individuals with a high degree of spatial perception also scored better for economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may be important for educators to develop adequate training programs that can be individually adapted.
Visually representing reality: aesthetics and accessibility aspects
NASA Astrophysics Data System (ADS)
van Nes, Floris L.
2009-02-01
This paper gives an overview of the visual representation of reality with three imaging technologies: painting, photography and electronic imaging. The contribution of the important image aspects, called dimensions hereafter, such as color, fine detail and total image size, to the degree of reality and aesthetic value of the rendered image are described for each of these technologies. Whereas quite a few of these dimensions - or approximations, or even only suggestions thereof - were already present in prehistoric paintings, apparent motion and true stereoscopic vision only recently were added - unfortunately also introducing accessibility and image safety issues. Efforts are made to reduce the incidence of undesirable biomedical effects such as photosensitive seizures (PSS), visually induced motion sickness (VIMS), and visual fatigue from stereoscopic images (VFSI) by international standardization of the image parameters to be avoided by image providers and display manufacturers. The history of this type of standardization, from an International Workshop Agreement to a strategy for accomplishing effective international standardization by ISO, is treated at some length. One of the difficulties to be mastered in this process is the reconciliation of the, sometimes opposing, interests of vulnerable persons, thrill-seeking viewers, creative video designers and the game industry.
Change Blindness Phenomena for Virtual Reality Display Systems.
Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete
2011-09-01
In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate the techniques for semi-immersive VR systems, i.e., a passive and an active stereoscopic projection system, as well as for an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.
Soldier-worn augmented reality system for tactical icon visualization
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared
2012-06-01
This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
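The fused pose estimate described above can be illustrated with a simple complementary filter for heading: integrate the gyro for short-term stability and pull toward an absolute (e.g., magnetometer-derived) heading for long-term correction. This is a generic sketch, not ARA's actual ULTRA-Vis fusion algorithm; the function name and the blending constant alpha are illustrative assumptions.

```python
import math

def fuse_heading(gyro_yaw_rate, mag_heading, prev_heading, dt, alpha=0.98):
    """Complementary filter: gyro integration for short-term stability,
    corrected toward the magnetometer heading for long-term accuracy."""
    # Short-term estimate from integrating angular rate (radians).
    gyro_est = prev_heading + gyro_yaw_rate * dt
    # Wrap the correction to the nearest equivalent angle in (-pi, pi].
    err = math.atan2(math.sin(mag_heading - gyro_est),
                     math.cos(mag_heading - gyro_est))
    return gyro_est + (1.0 - alpha) * err
```

In a full system like the one described, GPS and vision-derived corrections would enter the filter the same way, each weighted by its expected reliability.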
A Planetarium Inside Your Office: Virtual Reality in the Dome Production Pipeline
NASA Astrophysics Data System (ADS)
Summers, Frank
2018-01-01
Producing astronomy visualization sequences for a planetarium without ready access to a dome is a distorted geometric challenge. Fortunately, one can now use virtual reality (VR) to simulate a dome environment without ever leaving one's office chair. The VR dome experience has proven to be a more than suitable pre-visualization method that requires only modest amounts of processing beyond the standard production pipeline. It also provides a crucial testbed for identifying, testing, and fixing the visual constraints and artifacts that arise in a spherical presentation environment. Topics addressed here will include rendering, geometric projection, movie encoding, software playback, and hardware setup for a virtual dome using VR headsets.
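For the geometric projection step, the usual target format for full-dome content is the angular fisheye ("domemaster") mapping. The sketch below illustrates that mapping only; it is not the author's production pipeline, and it assumes a 180-degree dome with the zenith along +z.

```python
import numpy as np

def domemaster_uv(direction):
    """Map a view direction (x, y, z; z toward the dome zenith) to angular
    fisheye ('domemaster') coordinates in [-1, 1] for a 180-degree dome."""
    x, y, z = direction / np.linalg.norm(direction)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from the zenith
    r = theta / (np.pi / 2)                   # radius is linear in angle
    phi = np.arctan2(y, x)                    # azimuth around the dome
    return r * np.cos(phi), r * np.sin(phi)
```

Rendering a VR preview then amounts to sampling the domemaster frame along the headset's view directions, which is why only modest extra processing is needed beyond the standard pipeline.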
Virtual Reality as Innovative Approach to the Interior Designing
NASA Astrophysics Data System (ADS)
Kaleja, Pavol; Kozlovská, Mária
2017-06-01
We can observe significant potential of information and communication technologies (ICT) in the interior design field, through the development of software and hardware virtual reality tools. Using ICT tools offers a realistic perception of the proposal at its initial stage (the study). Real-time visualization, supported by hardware tools like the Oculus Rift and HTC Vive, provides free walkthrough and movement in a virtual interior with the possibility of virtual designing. By improving ICT software tools for designing in virtual reality, we can achieve an even more realistic virtual environment. This contribution presents a proposal for an innovative approach to interior design in virtual reality, using the latest software and hardware ICT virtual reality technologies.
Critical Reading: Visual Skills.
ERIC Educational Resources Information Center
Adams, Dennis M.
Computer-controlled visual media, particularly television, are becoming increasingly powerful instruments for the manipulation of thought. Powerful visual images increasingly reflect and shape personal and external reality--politics being one such example--and it is crucial that the viewing public understand the nature of these media…
NASA Astrophysics Data System (ADS)
Thurmond, John B.; Drzewiecki, Peter A.; Xu, Xueming
2005-08-01
Geological data collected from outcrop are inherently three-dimensional (3D) and span a variety of scales, from the megascopic to the microscopic. This presents challenges in both interpreting and communicating observations. The Virtual Reality Modeling Language provides an easy way for geoscientists to construct complex visualizations that can be viewed with free software. Field data in tabular form can be used to generate hierarchical multi-scale visualizations of outcrops, which can convey the complex relationships between a variety of data types simultaneously. An example from carbonate mud-mounds in southeastern New Mexico illustrates the embedding of three orders of magnitude of observation into a single visualization, for the purpose of interpreting depositional facies relationships in three dimensions. This type of raw data visualization can be built without software tools, yet is incredibly useful for interpreting and communicating data. Even simple visualizations can aid in the interpretation of complex 3D relationships that are frequently encountered in the geosciences.
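As a minimal illustration of turning tabular field data into a VRML visualization, the sketch below emits one VRML97 sphere per observation, colored by facies. The column names and the facies palette are hypothetical, not taken from the paper.

```python
import csv

def table_to_vrml(csv_path, out_path):
    """Emit a minimal VRML97 scene: one colored sphere per field observation.
    Assumes hypothetical columns x, y, z (meters) and facies (category)."""
    palette = {"mud-mound": "0.8 0.7 0.5", "flank": "0.4 0.6 0.3"}
    with open(csv_path) as f, open(out_path, "w") as out:
        out.write("#VRML V2.0 utf8\n")
        for row in csv.DictReader(f):
            color = palette.get(row["facies"], "0.5 0.5 0.5")
            out.write(
                "Transform { translation %s %s %s children [\n"
                "  Shape { appearance Appearance { material Material"
                " { diffuseColor %s } }\n"
                "    geometry Sphere { radius 0.5 } } ] }\n"
                % (row["x"], row["y"], row["z"], color))
```

The resulting file opens in any free VRML viewer, which is the low barrier to entry the abstract emphasizes.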
Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.
Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A
2013-01-01
Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
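The first of the four steps, color-based segmentation of the hand, might look like the following OpenCV sketch. The HSV skin-color thresholds are illustrative assumptions rather than the authors' calibrated values.

```python
import cv2
import numpy as np

def segment_hand(frame_bgr, lo=(0, 48, 80), hi=(20, 255, 255)):
    """Color-based (HSV) skin segmentation: returns a binary mask of
    candidate hand pixels for later misalignment-free compositing."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    # Morphological open/close to suppress speckle before compositing.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```

The remaining steps (device segmentation from tracking data, image-based background repainting, and final compositing) consume masks of exactly this form.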
Virtual Reality: A New Learning Environment.
ERIC Educational Resources Information Center
Ferrington, Gary; Loge, Kenneth
1992-01-01
Discusses virtual reality (VR) technology and its possible uses in military training, medical education, industrial design and development, the media industry, and education. Three primary applications of VR in the learning process--visualization, simulation, and construction of virtual worlds--are described, and pedagogical and moral issues are…
Aural-Visual-Kinesthetic Imagery in Motion Media.
ERIC Educational Resources Information Center
Allan, David W.
Motion media refers to film, television, and other forms of kinesthetic media including computerized multimedia technologies and virtual reality. Imagery reproduced by motion media carries a multisensory amalgamation of mental experiences. The blending of these experiences phenomenologically intersects with the reality and perception of words,…
Virtual reality simulation in neurosurgery: technologies and evolution.
Chan, Sonny; Conti, François; Salisbury, Kenneth; Blevins, Nikolas H
2013-01-01
Neurosurgeons are faced with the challenge of learning, planning, and performing increasingly complex surgical procedures in which there is little room for error. With improvements in computational power and advances in visual and haptic display technologies, virtual surgical environments can now offer potential benefits for surgical training, planning, and rehearsal in a safe, simulated setting. This article introduces the various classes of surgical simulators and their respective purposes through a brief survey of representative simulation systems in the context of neurosurgery. Many technical challenges currently limit the application of virtual surgical environments. Although we cannot yet expect a digital patient to be indistinguishable from reality, new developments in computational methods and related technology bring us closer every day. We recognize that the design and implementation of an immersive virtual reality surgical simulator require expert knowledge from many disciplines. This article highlights a selection of recent developments in research areas related to virtual reality simulation, including anatomic modeling, computer graphics and visualization, haptics, and physics simulation, and discusses their implication for the simulation of neurosurgery.
Utilization of virtual reality for endotracheal intubation training.
Mayrose, James; Kesavadas, T; Chugh, Kevin; Joshi, Dhananjay; Ellis, David G
2003-10-01
Tracheal intubation is performed for urgent airway control in injured patients. Current methods of training include working on cadavers and manikins, which lack the realism of a living human being. Work in this field has been limited due to the complex nature of simulating, in real time, the interactive forces and deformations which occur during an actual patient intubation. This study addressed the issue of intubation training in an attempt to bridge the gap between actual and virtual patient scenarios. The haptic device, along with the real-time performance of the simulator, gives it both visual and physical realism. The three-dimensional viewing and interaction available through virtual reality make it possible for physicians, pre-hospital personnel, and students to practice many endotracheal intubations without ever touching a patient. The ability for a medical professional to practice a procedure multiple times prior to performing it on a patient will both enhance the skill of the individual and reduce the risk to the patient.
Virtual reality adaptive stimulation of limbic networks in the mental readiness training.
Cosić, Kresimir; Popović, Sinisa; Kostović, Ivica; Judas, Milos
2010-01-01
A significant proportion of severe psychological problems in recent large-scale peacekeeping operations underscores the importance of effective methods for strengthening the stress resilience. Virtual reality (VR) adaptive stimulation, based on the estimation of the participant's emotional state from physiological signals, may enhance the mental readiness training (MRT). Understanding neurobiological mechanisms by which the MRT based on VR adaptive stimulation can affect the resilience to stress is important for practical application in the stress resilience management. After the delivery of a traumatic audio-visual stimulus in the VR, the cascade of events occurs in the brain, which evokes various physiological manifestations. In addition to the "limbic" emotional and visceral brain circuitry, other large-scale sensory, cognitive, and memory brain networks participate with less known impact in this physiological response. The MRT based on VR adaptive stimulation may strengthen the stress resilience through targeted brain-body interactions. Integrated interdisciplinary efforts, which would integrate the brain imaging and the proposed approach, may contribute to clarifying the neurobiological foundation of the resilience to stress.
Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma.
Kasneci, Enkelejda; Black, Alex A; Wood, Joanne M
2017-01-01
To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior. PMID:28293433
ERIC Educational Resources Information Center
Cooper, Rory A.; Ding, Dan; Simpson, Richard; Fitzgerald, Shirley G.; Spaeth, Donald M.; Guo, Songfeng; Koontz, Alicia M.; Cooper, Rosemarie; Kim, Jongbae; Boninger, Michael L.
2005-01-01
Some aspects of assistive technology can be enhanced by the application of virtual reality. Although virtual simulation offers a range of new possibilities, learning to navigate in a virtual environment is not equivalent to learning to navigate in the real world. Therefore, virtual reality simulation is advocated as a useful preparation for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eric A. Wernert; William R. Sherman; Patrick O'Leary
Immersive visualization makes use of the medium of virtual reality (VR) - it is a subset of virtual reality focused on the application of VR technologies to scientific and information visualization. As the name implies, there is a particular focus on the physically immersive aspect of VR that more fully engages the perceptual and kinesthetic capabilities of the scientist, with the goal of producing greater insight. The immersive visualization community is uniquely positioned to address the analysis needs of the wide spectrum of domain scientists who are becoming increasingly overwhelmed by data. The outputs of computational science simulations and high-resolution sensors are creating a data deluge. Data is coming in faster than it can be analyzed, and there are countless opportunities for discovery that are missed as the data speeds by. By more fully utilizing the scientist's visual and other sensory systems, and by offering a more natural user interface with which to interact with computer-generated representations, immersive visualization offers great promise in taming this data torrent. However, increasing the adoption of immersive visualization in scientific research communities can only happen by simultaneously lowering the engagement threshold while raising the measurable benefits of adoption. Scientists' time spent immersed with their data will thus be rewarded with higher productivity, deeper insight, and improved creativity. Immersive visualization ties together technologies and methodologies from a variety of related but frequently disjoint areas, including hardware, software, and human-computer interaction (HCI) disciplines. In many ways, hardware is a solved problem. There are well-established technologies, including large walk-in systems such as the CAVE(TM) and head-based systems such as the Wide-5(TM). The advent of new consumer-level technologies now enables an entirely new generation of immersive displays, with smaller footprints and costs, widening the potential consumer base. While one would be hard-pressed to call software a solved problem, we now understand considerably more about best practices for designing and developing sustainable, scalable software systems, and we have useful software examples that illuminate the way to even better implementations. As with any research endeavour, HCI will always be exploring new topics in interface design, but we now have a sizable knowledge base of the strengths and weaknesses of the human perceptual systems, and we know how to design effective interfaces for immersive systems. So, in a research landscape with a clear need for better visualization and analysis tools, a methodology in immersive visualization that has been shown to effectively address some of those needs, and vastly improved supporting technologies and knowledge of hardware, software, and HCI, why hasn't immersive visualization 'caught on' more with scientists? What can we do as a community of immersive visualization researchers and practitioners to facilitate greater adoption by scientific communities, so as to make the transition from 'the promise of virtual reality' to 'the reality of virtual reality'?
2009-09-01
[Fragmentary excerpts from a 2009 report on virtual reality-based training; recoverable acronym expansions: USN (United States Navy), VAE (Virtual Air Environment), VACP (Visual, Auditory, Cognitive, Psychomotor demand), VR (Virtual Reality), SAF (semi-automated forces); glossary source cited in the fragment: http://www.sedris.org/glossary.htm#C_grp]
Fully Three-Dimensional Virtual-Reality System
NASA Technical Reports Server (NTRS)
Beckman, Brian C.
1994-01-01
Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.
Transforming Clinical Imaging Data for Virtual Reality Learning Objects
ERIC Educational Resources Information Center
Trelease, Robert B.; Rosset, Antoine
2008-01-01
Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…
PC-Based Virtual Reality for CAD Model Viewing
ERIC Educational Resources Information Center
Seth, Abhishek; Smith, Shana S.-F.
2004-01-01
Virtual reality (VR), as an emerging visualization technology, has introduced an unprecedented communication method for collaborative design. VR refers to an immersive, interactive, multisensory, viewer-centered, 3D computer-generated environment and the combination of technologies required to build such an environment. This article introduces the…
3D interactive augmented reality-enhanced digital learning systems for mobile devices
NASA Astrophysics Data System (ADS)
Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie
2013-03-01
With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology. Enhancement of UX can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3-D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components: markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX in digital learning can be greatly improved with the adoption of the proposed IARL systems.
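The velocity-based object tracking (VOT) idea, predicting where a recognized pattern will be between recognition updates, can be sketched with a constant-velocity model. This minimal class is an illustrative stand-in under that assumption, not the authors' algorithm.

```python
class VelocityTracker:
    """Constant-velocity prediction between detections: a minimal stand-in
    for velocity-based object tracking (VOT) on a mobile AR display."""

    def __init__(self, x, y):
        self.x, self.y, self.vx, self.vy = x, y, 0.0, 0.0

    def predict(self, dt):
        # Extrapolate the tracked pattern's position when recognition
        # lags behind the camera frame rate.
        return self.x + self.vx * dt, self.y + self.vy * dt

    def update(self, x, y, dt):
        # Refresh the velocity estimate from the latest detection.
        if dt > 0:
            self.vx, self.vy = (x - self.x) / dt, (y - self.y) / dt
        self.x, self.y = x, y
```

The prediction step keeps the 3-D overlay anchored smoothly even when the markerless recognizer only delivers results every few frames.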
iview: an interactive WebGL visualizer for protein-ligand complex.
Li, Hongjian; Leung, Kwong-Sak; Nakane, Takanori; Wong, Man-Hon
2014-02-25
Visualization of protein-ligand complexes plays an important role in elaborating protein-ligand interactions and aiding novel drug design. Most existing web visualizers either rely on slow software rendering or lack virtual reality support. The vital feature of macromolecular surface construction is also unavailable. We have developed iview, an easy-to-use interactive WebGL visualizer of protein-ligand complexes. It exploits hardware acceleration rather than software rendering. It features three special effects in virtual reality settings, namely anaglyph, parallax barrier, and Oculus Rift, resulting in visually appealing identification of intermolecular interactions. It supports four surface representations: Van der Waals surface, solvent excluded surface, solvent accessible surface, and molecular surface. Moreover, based on the feature-rich version of iview, we have also developed a neat and tailor-made version specifically for our istar web platform for protein-ligand docking purposes. This demonstrates the excellent portability of iview. Using innovative 3D techniques, we provide a user-friendly visualizer that is not intended to compete with professional visualizers, but to enable easy accessibility and platform independence.
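Of the three stereo effects mentioned, anaglyph is the simplest to illustrate: a red/cyan composite of the left-eye and right-eye renders. The numpy sketch below shows only the channel mixing; it is not iview's WebGL implementation.

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph: red channel from the left-eye image,
    green and blue channels from the right-eye image."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]      # R from the left view
    out[..., 1:] = right_rgb[..., 1:]   # G, B from the right view
    return out
```

Viewed through red/cyan glasses, each eye receives its intended render, producing the depth cue that makes intermolecular contacts easier to judge.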
Sigrist, Roland; Rauter, Georg; Marchal-Crespo, Laura; Riener, Robert; Wolf, Peter
2015-03-01
Concurrent augmented feedback has been shown to be less effective for learning simple motor tasks than for complex tasks. However, as mostly artificial tasks have been investigated, transfer of results to tasks in sports and rehabilitation remains unknown. Therefore, in this study, the effect of different concurrent feedback was evaluated in trunk-arm rowing. It was then investigated whether multimodal audiovisual and visuohaptic feedback are more effective for learning than visual feedback only. Naïve subjects (N = 24) trained in three groups on a highly realistic virtual reality-based rowing simulator. In the visual feedback group, the subject's oar was superimposed to the target oar, which continuously became more transparent when the deviation between the oars decreased. Moreover, a trace of the subject's trajectory emerged if deviations exceeded a threshold. The audiovisual feedback group trained with oar movement sonification in addition to visual feedback to facilitate learning of the velocity profile. In the visuohaptic group, the oar movement was inhibited by path deviation-dependent braking forces to enhance learning of spatial aspects. All groups significantly decreased the spatial error (tendency in visual group) and velocity error from baseline to the retention tests. Audiovisual feedback fostered learning of the velocity profile significantly more than visuohaptic feedback. The study revealed that well-designed concurrent feedback fosters complex task learning, especially if the advantages of different modalities are exploited. Further studies should analyze the impact of within-feedback design parameters and the transferability of the results to other tasks in sports and rehabilitation.
Perform light and optic experiments in Augmented Reality
NASA Astrophysics Data System (ADS)
Wozniak, Peter; Vauderwange, Oliver; Curticapean, Dan; Javahiraly, Nicolas; Israel, Kai
2015-10-01
In many scientific studies, lens experiments are part of the curriculum. The conducted experiments are meant to give the students a basic understanding of the laws of optics and their applications. Most of the experiments need special hardware such as an optical bench, light sources, apertures, and different lens types. Therefore it is not possible for the students to conduct any of the experiments outside of the university's laboratory. Simple optical software simulators enabling the students to virtually perform lens experiments already exist, but are mostly desktop or web browser based. Augmented Reality (AR) is a special case of mediated and mixed reality concepts, where computers are used to add, subtract, or modify one's perception of reality. As a result of the success and widespread availability of handheld mobile devices such as tablet computers and smartphones, mobile augmented reality applications are easy to use. Augmented reality can easily be used to visualize a simulated optical bench. The students can interactively modify properties such as lens type, lens curvature, lens diameter, lens refractive index, and the positions of the instruments in space. Light rays can be visualized and promote an additional understanding of the laws of optics. An AR application like this is ideally suited to prepare the actual laboratory sessions and/or recap the teaching content. The authors will present their experience with handheld augmented reality applications and their possibilities for light and optics experiments without the need for specialized optical hardware.
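The core relation such a virtual optical bench would evaluate for each lens is the thin-lens equation, 1/f = 1/d_o + 1/d_i. A minimal sketch (the function name and the handling of the focal-point case are assumptions of this illustration):

```python
def thin_lens_image_distance(f_mm, object_mm):
    """Image distance d_i from the thin-lens equation 1/f = 1/d_o + 1/d_i.
    A negative result indicates a virtual image on the object side."""
    if object_mm == f_mm:
        return float("inf")  # object at the focal point: rays exit parallel
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

# Example: a 100 mm lens imaging an object at 300 mm forms a real image
# at 150 mm behind the lens.
assert abs(thin_lens_image_distance(100, 300) - 150) < 1e-9
```

In an AR app, this value would drive where the rendered ray bundle reconverges as the student drags the lens along the virtual bench.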
Augmented REality Sandtables (ARESs) Impact on Learning
2016-07-01
The use of augmented reality (AR) to supplement training tools, specifically sand tables, can produce highly effective systems at relatively low… engagement and enhanced-scenario customization. The Augmented REality Sandtable (ARES) is projected to enhance training and retention of spatial…
NASA Astrophysics Data System (ADS)
Baukal, Charles Edward, Jr.
A literature search revealed very little information on how to teach working engineers, which became the motivation for this research. Effective training is important for many reasons such as preventing accidents, maximizing fuel efficiency, minimizing pollution emissions, and reducing equipment downtime. The conceptual framework for this study included the development of a new instructional design framework called the Multimedia Cone of Abstraction (MCoA). This was developed by combining Dale's Cone of Experience and Mayer's Cognitive Theory of Multimedia Learning. An anonymous survey of 118 engineers from a single Midwestern manufacturer was conducted to determine their demographics, learning strategy preferences, verbal-visual cognitive styles, and multimedia preferences. The learning strategy preference profile and verbal-visual cognitive styles of the sample were statistically significantly different from those of the general population. The working engineers included more Problem Solvers and were much more visually oriented than the general population. To study multimedia preferences, five of the seven levels in the MCoA were used. Eight types of multimedia were compared in four categories (types in parentheses): text (text and narration), static graphics (drawing and photograph), non-interactive dynamic graphics (animation and video), and interactive dynamic graphics (simulated virtual reality and real virtual reality). The first phase of the study examined multimedia preferences within a category. Participants compared multimedia types in pairs on dual screens using relative preference, rating, and ranking. Surprisingly, the more abstract multimedia (text, drawing, animation, and simulated virtual reality) were preferred in every category to the more concrete multimedia (narration, photograph, video, and real virtual reality), despite the fact that most participants had relatively little prior subject knowledge. However, the more abstract graphics were only slightly preferred to the more concrete graphics. In the second phase, the more preferred multimedia types in each category from the first phase were compared against each other using relative preference, rating, and ranking and overall rating and ranking. Drawing was the most preferred multimedia type overall, although only slightly more than animation and simulated virtual reality. Text was a distant fourth. These results suggest that instructional content for continuing engineering education should include problem solving and should be highly visual.
1.5 T augmented reality navigated interventional MRI: paravertebral sympathetic plexus injections
Marker, David R.; U-Thainual, Paweena; Ungi, Tamas; Flammang, Aaron J.; Fichtinger, Gabor; Iordachita, Iulian I.; Carrino, John A.; Fritz, Jan
2017-01-01
PURPOSE The high contrast resolution and absent ionizing radiation of interventional magnetic resonance imaging (MRI) can be advantageous for paravertebral sympathetic nerve plexus injections. We assessed the feasibility and technical performance of MRI-guided paravertebral sympathetic injections utilizing augmented reality navigation and 1.5 T MRI scanner. METHODS A total of 23 bilateral injections of the thoracic (8/23, 35%), lumbar (8/23, 35%), and hypogastric (7/23, 30%) paravertebral sympathetic plexus were prospectively planned in twelve human cadavers using a 1.5 Tesla (T) MRI scanner and augmented reality navigation system. MRI-conditional needles were used. Gadolinium-DTPA-enhanced saline was injected. Outcome variables included the number of control magnetic resonance images, target error of the needle tip, punctures of critical nontarget structures, distribution of the injected fluid, and procedure length. RESULTS Augmented-reality navigated MRI guidance at 1.5 T provided detailed anatomical visualization for successful targeting of the paravertebral space, needle placement, and perineural paravertebral injections in 46 of 46 targets (100%). A mean of 2 images (range, 1–5 images) were required to control needle placement. Changes of the needle trajectory occurred in 9 of 46 targets (20%) and changes of needle advancement occurred in 6 of 46 targets (13%), which were statistically not related to spinal regions (P = 0.728 and P = 0.86, respectively) and cadaver sizes (P = 0.893 and P = 0.859, respectively). The mean error of the needle tip was 3.9±1.7 mm. There were no punctures of critical nontarget structures. The mean procedure length was 33±12 min. CONCLUSION 1.5 T augmented reality-navigated interventional MRI can provide accurate imaging guidance for perineural injections of the thoracic, lumbar, and hypogastric sympathetic plexus. PMID:28420598
The Use of Virtual Reality in Psychology: A Case Study in Visual Perception
Wilson, Christopher J.; Soranzo, Alessandro
2015-01-01
Recent proliferation of available virtual reality (VR) tools has seen increased use in psychological research. This is due to a number of advantages afforded over traditional experimental apparatus such as tighter control of the environment and the possibility of creating more ecologically valid stimulus presentation and response protocols. At the same time, higher levels of immersion and visual fidelity afforded by VR do not necessarily evoke presence or elicit a “realistic” psychological response. The current paper reviews some current uses for VR environments in psychological research and discusses some ongoing questions for researchers. Finally, we focus on the area of visual perception, where both the advantages and challenges of VR are particularly salient. PMID:26339281
Virtual reality and 3D visualizations in heart surgery education.
Friedl, Reinhard; Preisack, Melitta B; Klas, Wolfgang; Rose, Thomas; Stracke, Sylvia; Quast, Klaus J; Hannekum, Andreas; Gödje, Oliver
2002-01-01
Computer assisted teaching plays an increasing role in surgical education. The presented paper describes the development of virtual reality (VR) and 3D visualizations for educational purposes concerning aortocoronary bypass grafting and their prototypical implementation into a database-driven and internet-based educational system in heart surgery. A multimedia storyboard has been written and digital video has been encoded. Understanding of these videos was not always satisfying; therefore, additional 3D and VR visualizations have been modelled as VRML, QuickTime, QuickTime Virtual Reality and MPEG-1 applications. An authoring process in terms of integration and orchestration of different multimedia components to educational units has been started. A virtual model of the heart has been designed. It is highly interactive and the user is able to rotate it, move it, zoom in for details or even fly through. It can be explored during the cardiac cycle and a transparency mode demonstrates coronary arteries, movement of the heart valves, and simultaneous blood-flow. Myocardial ischemia and the effect of an IMA-Graft on myocardial perfusion is simulated. Coronary artery stenoses and bypass-grafts can be interactively added. 3D models of anastomotic techniques and closed thrombendarterectomy have been developed. Different visualizations have been prototypically implemented into a teaching application about operative techniques. Interactive virtual reality and 3D teaching applications can be used and distributed via the World Wide Web and have the power to describe surgical anatomy and principles of surgical techniques, where temporal and spatial events play an important role, in a way superior to traditional teaching methods.
Human factors of intelligent computer aided display design
NASA Technical Reports Server (NTRS)
Hunt, R. M.
1985-01-01
Design concepts for a decision support system being studied at NASA Langley as an aid to visual display unit (VDU) designers are described. Ideally, human factors should be taken into account by VDU designers. In reality, although the human factors database on VDUs is small, such systems must be constantly developed. Human factors are therefore a secondary consideration. An expert system will thus serve mainly in an advisory capacity. Functions can include facilitating the design process by shortening the time to generate and alter drawings, enhancing the capability of breaking design requirements down into simpler functions, and providing visual displays equivalent to the final product. The VDU system could also discriminate, and display the difference, between designer decisions and machine inferences. The system could also aid in analyzing the effects of designer choices on future options and in enunciating when there are data available on design selections.
NASA Astrophysics Data System (ADS)
Brinkmann, Benjamin H.; O'Brien, Terence J.; Robb, Richard A.; Sharbrough, Frank W.
1997-05-01
Advances in neuroimaging have enhanced the clinician's ability to localize the epileptogenic zone in focal epilepsy, but 20-50 percent of these cases still remain unlocalized. Many sophisticated modalities have been used to study epilepsy, but scalp electrode recorded electroencephalography (EEG) is particularly useful due to its noninvasive nature and excellent temporal resolution. This study is aimed at specifying the locations of scalp EEG electrodes for correlation with anatomical structures in the brain. 3D position localizing devices commonly used in virtual reality systems are used to digitize the coordinates of scalp electrodes in a standard clinical configuration. The electrode coordinates are registered with a high-resolution MRI dataset using a robust surface matching algorithm. Volume rendering can then be used to visualize the electrodes and electrode potentials interpolated over the scalp. The accuracy of the coordinate registration is assessed quantitatively with a realistic head phantom.
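The rigid-alignment core of such a registration can be sketched with the Kabsch/SVD least-squares solution for corresponding point sets. The paper's robust surface matching algorithm is more involved (it must also establish correspondences between digitized points and the MRI scalp surface), so treat this as the inner alignment step only.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q:
    the inner step of an iterative surface-matching registration."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP
```

Iterating correspondence search and this alignment (as in ICP-style schemes) pulls the digitized electrode cloud onto the MRI-derived scalp surface.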
Subbotsky, Eugene; Slater, Elizabeth
2011-04-01
Six- and nine-yr.-old children (n=28 of each) were divided into equal experimental and control groups. The experimental groups were shown a film with a magical theme, and the control groups were shown a film with a nonmagical theme. All groups then were presented with a choice task requiring them to discriminate between ordinary and fantastic visual displays on a computer screen. Statistical analyses indicated that mean scores for correctly identifying the ordinary and fantastic displays were significantly different between experimental and control groups. The children in the experimental groups who watched the magical film had significantly higher scores on correct identifications than children in the control groups who watched the nonmagical film for both age groups. The results suggest that watching films with a magical theme might enhance children's sensitivity toward the fantasy/reality distinction.
Augmented Reality Based Doppler Lidar Data Visualization: Promises and Challenges
NASA Astrophysics Data System (ADS)
Cherukuru, N. W.; Calhoun, R.
2016-06-01
Augmented reality (AR) is a technology that enables the user to view virtual content as if it existed in the real world. We are exploring the possibility of using this technology to view radial velocities or processed wind vectors from a Doppler wind lidar, thus giving the user an ability to see the wind in a literal sense. This approach could find possible applications in aviation safety, atmospheric data visualization, as well as in weather education and public outreach. As a proof of concept, we used the lidar data from a recent field campaign and developed a smartphone application to view the lidar scan in augmented reality. In this paper, we give a brief methodology of this feasibility study and present the challenges and promises of using AR technology in conjunction with Doppler wind lidars.
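To draw radial-velocity gates over the live camera view, each lidar gate position (already transformed into the phone's camera frame) must be projected to screen pixels. A minimal pinhole-projection sketch, with the camera intrinsics as assumed inputs rather than values from the paper:

```python
import numpy as np

def project_points(points_xyz, fx, fy, cx, cy):
    """Pinhole projection of lidar gate positions (camera frame, z forward)
    to pixel coordinates for an AR overlay; returns pixels and a mask."""
    z = points_xyz[:, 2]
    valid = z > 0.1                     # keep only gates in front of the camera
    u = fx * points_xyz[valid, 0] / z[valid] + cx
    v = fy * points_xyz[valid, 1] / z[valid] + cy
    return np.stack([u, v], axis=1), valid
```

Each projected gate can then be drawn as a colored marker, with color mapped to the measured radial velocity.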
Preliminary development of augmented reality systems for spinal surgery
NASA Astrophysics Data System (ADS)
Nguyen, Nhu Q.; Ramjist, Joel M.; Jivraj, Jamil; Jakubovic, Raphael; Deorajh, Ryan; Yang, Victor X. D.
2017-02-01
Surgical navigation has been more actively deployed in open spinal surgeries due to the need for improved precision during procedures. This is increasingly difficult in minimally invasive surgeries due to the lack of visual cues caused by smaller exposure sites, and increases a surgeon's dependence on their knowledge of anatomical landmarks as well as the CT or MRI images. The use of augmented reality (AR) systems and registration technologies in spinal surgeries could allow for improvements to techniques by overlaying a 3D reconstruction of patient anatomy in the surgeon's field of view, creating a mixed reality visualization. The AR system will be capable of projecting the 3D reconstruction onto a field and of performing preliminary object tracking on a phantom. Dimensional accuracy of the mixed media will also be quantified to account for distortions in tracking.
NASA Astrophysics Data System (ADS)
Abercrombie, S. P.; Menzies, A.; Goddard, C.
2017-12-01
Virtual and augmented reality enable scientists to visualize environments that are very difficult, or even impossible to visit, such as the surface of Mars. A useful immersive visualization begins with a high quality reconstruction of the environment under study. This presentation will discuss a photogrammetry pipeline developed at the Jet Propulsion Laboratory to reconstruct 3D models of the surface of Mars using stereo images sent back to Earth by the Curiosity Mars rover. The resulting models are used to support a virtual reality tool (OnSight) that allows scientists and engineers to visualize the surface of Mars as if they were standing on the red planet. Images of Mars present challenges to existing scene reconstruction solutions. Surface images of Mars are sparse with minimal overlap, and are often taken from extremely different viewpoints. In addition, the specialized cameras used by Mars rovers are significantly different than consumer cameras, and GPS localization data is not available on Mars. This presentation will discuss scene reconstruction with an emphasis on coping with limited input data, and on creating models suitable for rendering in virtual reality at high frame rate.
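Stereo pairs with a known camera baseline allow the pipeline's first step, converting disparity to depth, to be written compactly. The sketch below assumes rectified images and the standard relation Z = f·B/d; it is an illustration of that relation, not JPL's photogrammetry code.

```python
import numpy as np

def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth map from a rectified stereo pair: Z = f * B / d.
    Pixels with zero or negative disparity are left at infinity."""
    d = np.asarray(disparity_px, dtype=float)
    z = np.full(d.shape, np.inf)
    np.divide(focal_px * baseline_m, d, out=z, where=d > 0)
    return z
```

Back-projecting each pixel through the camera model then yields the 3D points that are meshed and textured into the terrain models rendered in OnSight.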
Virtual reality, augmented reality…I call it i-Reality.
Grossmann, Rafael J
2015-01-01
The new term improved reality (i-Reality) is suggested to include virtual reality (VR) and augmented reality (AR). It refers to a real world that includes improved, enhanced, and digitally created features that would offer an advantage on a particular occasion (e.g., a medical act). I-Reality may help us bridge the gap between the high demand for medical providers and the low supply of them by improving the interaction between providers and patients.
Piloting Augmented Reality Technology to Enhance Realism in Clinical Simulation.
Vaughn, Jacqueline; Lister, Michael; Shaw, Ryan J
2016-09-01
We describe a pilot study that incorporated an innovative hybrid simulation designed to increase the perception of realism in a high-fidelity simulation. Prelicensure students (N = 12) wearing Google Glass, a head-worn device that projected video into the students' field of vision, cared for a manikin in a simulation lab scenario. Students reported that the simulation gave them confidence that they were developing skills and knowledge to perform necessary tasks in a clinical setting and that they met the learning objectives of the simulation. The video combined visual images and cues seen in a real patient and created a sense of realism the manikin alone could not provide.
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus was observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.
Subsurface data visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Krijnen, Robbert; Smelik, Ruben; Appleton, Rick; van Maanen, Peter-Paul
2017-04-01
Due to their increasing complexity and size, visualization of geological data is becoming more and more important. It enables detailed examination and review of large volumes of geological data, and it is often used as a communication tool for reporting and education, to demonstrate the importance of the geology to policy makers. In the Netherlands, two types of nation-wide geological models are available: 1) layer-based models, in which the subsurface is represented by a series of tops and bases of geological or hydrogeological units, and 2) voxel models, in which the subsurface is subdivided into a regular grid of voxels that can contain different properties per voxel. The Geological Survey of the Netherlands (GSN) provides an interactive web portal that delivers maps and vertical cross-sections of such layer-based and voxel models. From this portal you can download a 3D subsurface viewer that can visualize the voxel model data of an area of 20 × 25 km with 100 × 100 × 5 meter voxel resolution on a desktop computer. Virtual reality (VR) technology enables us to enhance the visualization of this volumetric data in a more natural way than a standard desktop, keyboard, and mouse setup allows. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a new, intuitive, and low-cost visualization tool. A VR viewer has been implemented using the HTC Vive headset and allows visualization and analysis of the GSN voxel model data with geological or hydrogeological units. The user can navigate freely around the voxel data (20 × 25 km), which is presented in a virtual room at a scale of 2 × 2 or 3 × 3 meters. To enable analysis, e.g., of hydraulic conductivity, the user can select filters to remove specific hydrogeological units. The user can also use slicing to cut off specific sections of the voxel data to get a closer look. This slicing can be done in any direction using a 'virtual knife'. Future plans are to further improve performance from 30 up to a 90 Hz update rate to reduce possible motion sickness, and to add more advanced filtering capabilities, a multi-user setup, annotation capabilities, and visualization of historical data.
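The unit filtering and 'virtual knife' slicing described above reduce to boolean masks over the regular voxel grid. A numpy sketch under the assumption that units are stored as one integer ID per voxel (the function name and plane parameterization are illustrative):

```python
import numpy as np

def slice_and_filter(unit_ids, hidden_units, plane_n, plane_d):
    """Visibility mask for a regular voxel grid: hide selected
    (hydro)geological units, then cut with the 'virtual knife' plane,
    keeping voxels whose index coordinates satisfy n.x + d >= 0."""
    visible = ~np.isin(unit_ids, list(hidden_units))
    # Voxel index coordinates, one (i, j, k) row per voxel.
    idx = np.indices(unit_ids.shape).reshape(3, -1).T
    side = idx @ np.asarray(plane_n, float) + plane_d >= 0
    return visible & side.reshape(unit_ids.shape)
```

Because both operations are per-voxel masks, they compose freely and can be re-evaluated interactively as the user moves the knife or toggles units.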
Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization
2017-08-01
Keywords: 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing. Beyond a user-friendly software and hardware setup, scientists also need to be able to perform their usual workflows; the VR and scientific visualization communities mostly have different research priorities, with the VR community prioritizing the ability to support real-time user interaction.
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
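The paper's quantitative model is not spelled out in the abstract, but its core idea, a single multimodal estimate rather than separate visual and interoceptive influences on action, can be illustrated with a toy weighted combination. The weight and gain values below are assumptions for illustration only.

```python
# Toy sketch of the single-multimodal-representation idea (assumption:
# a fixed linear weighting with w_vis + w_int = 1, which is a simplification).
def combined_rotation(visual_turn_deg, interoceptive_turn_deg, w_vis=0.7):
    """One multimodal turn estimate feeding path integration.

    With a visual rotation gain g, visual_turn = g * physical_turn, so the
    combined estimate (and hence homing, even in darkness after adaptation)
    is biased by the manipulated visual input.
    """
    return w_vis * visual_turn_deg + (1.0 - w_vis) * interoceptive_turn_deg

# Example: a physical turn of 90 deg shown with a 1.5x visual rotation gain.
print(combined_rotation(1.5 * 90.0, 90.0))  # 121.5 deg, between the two cues
```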
A 2D virtual reality system for visual goal-driven navigation in zebrafish larvae
Jouary, Adrien; Haudrechy, Mathieu; Candelier, Raphaël; Sumbre, German
2016-01-01
Animals continuously rely on sensory feedback to adjust motor commands. In order to study the role of visual feedback in goal-driven navigation, we developed a 2D visual virtual reality system for zebrafish larvae. The visual feedback can be set to be similar to what the animal experiences in natural conditions. Alternatively, modification of the visual feedback can be used to study how the brain adapts to perturbations. For this purpose, we first generated a library of free-swimming behaviors from which we learned the relationship between the trajectory of the larva and the shape of its tail. Then, we used this technique to infer the intended displacements of head-fixed larvae, and updated the visual environment accordingly. Under these conditions, larvae were capable of aligning and swimming in the direction of a whole-field moving stimulus and produced the fine changes in orientation and position required to capture virtual prey. We demonstrate the sensitivity of larvae to visual feedback by updating the visual world in real-time or only at the end of the discrete swimming episodes. This visual feedback perturbation caused impaired performance of prey-capture behavior, suggesting that larvae rely on continuous visual feedback during swimming. PMID:27659496
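The closed-loop logic described above (tail shape, inferred displacement, visual update) can be sketched as follows; every function, constant, and mapping here is a hypothetical stand-in, not the authors' implementation.

```python
# Toy closed-loop update for a 2D visual virtual reality. The mapping from
# tail shape to displacement is a placeholder for the learned relationship.
import math

def infer_displacement(tail_angles):
    """Map tail shape to an intended (forward, turn) displacement.
    Placeholder: forward speed from tail-beat amplitude, turn from asymmetry."""
    amplitude = max(tail_angles) - min(tail_angles)
    asymmetry = sum(tail_angles) / len(tail_angles)
    return 0.1 * amplitude, 0.5 * asymmetry     # (mm/frame, rad/frame), assumed

class VirtualArena:
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0

    def update(self, forward, turn):
        self.heading += turn
        self.x += forward * math.cos(self.heading)
        self.y += forward * math.sin(self.heading)
        # The projected visual environment would be redrawn here, either
        # every frame (continuous feedback) or only after a swim bout ends.

arena = VirtualArena()
for tail_angles in [[0.1, -0.3, 0.2], [0.4, 0.1, -0.2]]:  # fake tracking data
    fwd, trn = infer_displacement(tail_angles)
    arena.update(fwd, trn)
print(arena.x, arena.y, arena.heading)
```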
Systematic distortions of perceptual stability investigated using immersive virtual reality
Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew
2010-01-01
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248
A collaborative interaction and visualization multi-modal environment for surgical planning.
Foo, Jung Leng; Martinez-Escobar, Marisol; Peloquin, Catherine; Lobe, Thom; Winer, Eliot
2009-01-01
The proliferation of virtual reality visualization and interaction technologies has changed the way medical image data is analyzed and processed. This paper presents a multi-modal environment that combines a virtual reality application with a desktop application for collaborative surgical planning. Both visualization applications can function independently but can also be synced over a network connection for collaborative work. Any changes to either application are immediately synced and updated to the other. This is an efficient collaboration tool that allows multiple teams of doctors with only an internet connection to visualize and interact with the same patient data simultaneously. With this multi-modal environment framework, one team working in the VR environment and another team working remotely on a desktop machine can collaborate in the examination and discussion of procedures such as diagnosis, surgical planning, teaching, and tele-mentoring.
Visualizing Sea Level Rise with Augmented Reality
NASA Astrophysics Data System (ADS)
Kintisch, E. S.
2013-12-01
Looking Glass is an iPhone application that visualizes future scenarios of sea level rise in 3-D, overlaid on live camera imagery in situ. Using a technology known as augmented reality, the app allows a layperson user to explore various scenarios of sea level rise through a visual interface. The user can then see, in an immersive, dynamic way, how those scenarios would affect a real place. The first part of the experience activates users' cognitive, quantitative thinking process, teaching them how global sea level rise, tides, and storm surge contribute to flooding; the second allows an emotional response to a striking visual depiction of possible future catastrophe. This project represents a partnership between a science journalist, MIT, and the Rhode Island School of Design, and the talk will touch on lessons this project provides on structuring and executing such multidisciplinary efforts in future design projects.
Animation Augmented Reality Book Model (AAR Book Model) to Enhance Teamwork
ERIC Educational Resources Information Center
Chujitarom, Wannaporn; Piriyasurawong, Pallop
2017-01-01
This study aims to synthesize an Animation Augmented Reality Book Model (AAR Book Model) to enhance teamwork and to assess the AAR Book Model to enhance teamwork. Samples are five specialists that consist of one animation specialist, two communication and information technology specialists, and two teaching model design specialists, selected by…
Virtual reality for intelligent and interactive operating, training, and visualization systems
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Schluse, Michael
2000-10-01
Virtual Reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds that the user can not only look at but also actively interact with, using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the virtual reality system and, by means of new and intelligent control software, projected onto automation components like robots, which then perform the necessary actions in reality to execute the user's task. In this operation mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual Reality methods are thus ideally suited for universal man-machine interfaces for the control and supervision of a broad class of automation components, and for interactive training and visualization systems. The Virtual Reality system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL, and GETEX, the realization of a comprehensive development tool for the International Space Station, and, last but not least, the realistic simulation of fire extinguishing, forest machines, and excavators, which will be presented in the final paper in addition to the key ideas of this Virtual Reality system.
A standardized set of 3-D objects for virtual reality research and applications.
Peeters, David
2018-06-01
The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
Virtual reality and 3D animation in forensic visualization.
Ma, Minhua; Zheng, Huiru; Lallie, Harjinder
2010-09-01
Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes to viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and the level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Yamamoto, Shoji; Hosokawa, Natsumi; Yokoya, Mayu; Tsumura, Norimichi
2016-12-01
In this paper, we investigated the consistency of visual perception for changes in reflection images in an augmented reality setting. Reflection images with distortion and magnification were generated by changing the capture position of the environment map. Observers evaluated the distortion and magnification in reflection images where the reflected objects were arranged symmetrically or asymmetrically. Our results confirmed that the observers' visual perception was more sensitive to changes in distortion than in magnification in the reflection images. Moreover, an asymmetrical arrangement of reflected objects effectively expands the acceptable range of distortion compared with a symmetrical arrangement.
Dinka, David; Nyce, James M; Timpka, Toomas
2009-06-01
The aim of this study was to investigate how the clinical use of visualization technology can be advanced by the application of a situated cognition perspective. The data were collected in the GammaKnife radiosurgery setting and analyzed using qualitative methods. Observations and in-depth interviews with neurosurgeons and physicists were performed at three clinics using the Leksell GammaKnife. The users' ability to perform cognitive tasks was found to be reduced each time visualizations incongruent with the particular user's perception of clinical reality were used. The main issue here was a lack of transparency, i.e. a black box problem where machine representations "stood between" users and the cognitive tasks they wanted to perform. For neurosurgeons, transparency meant their previous experience from traditional surgery could be applied, i.e. that they were not forced to perform additional cognitive work. From the view of the physicists, on the other hand, the concept of transparency was associated with mathematical precision and avoiding creating a cognitive distance between basic patient data and what is experienced as clinical reality. The physicists approached clinical visualization technology as though it was a laboratory apparatus--one that required continual adjustment and assessment in order to "capture" a quantitative clinical reality. Designers of visualization technology need to compare the cognitive interpretations generated by the new visualization systems to conceptions generated during "traditional" clinical work. This means that the viewpoint of different clinical user groups involved in a given clinical task would have to be taken into account as well. A way forward would be to acknowledge that visualization is a socio-cognitive function that has practice-based antecedents and consequences, and to reconsider what analytical and scientific challenges this presents us with.
ERIC Educational Resources Information Center
Duncan, Mike R.; Birrell, Bob; Williams, Toni
2005-01-01
Virtual Reality (VR) is primarily a visual technology. Elements such as haptics (touch feedback) and sound can augment an experience, but the visual cues are the prime driver of what an audience will experience from a VR presentation. At its inception in 2001 the Centre for Advanced Visualization (CFAV) at Niagara College of Arts and Technology…
Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments
ERIC Educational Resources Information Center
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
ERIC Educational Resources Information Center
Dib, Hazar; Adamo-Villani, Nicoletta; Garver, Stephen
2014-01-01
Many benefits have been claimed for visualizations, a general assumption being that learning is facilitated. However, several researchers argue that little is known about the cognitive value of graphical representations, be they schematic visualizations, such as diagrams or more realistic, such as virtual reality. The study reported in the paper…
Confronting an Augmented Reality
ERIC Educational Resources Information Center
Munnerley, Danny; Bacon, Matt; Wilson, Anna; Steele, James; Hedberg, John; Fitzgerald, Robert
2012-01-01
How can educators make use of augmented reality technologies and practices to enhance learning, and why would we want to embrace such technologies anyway? How can an augmented reality help a learner confront, interpret and ultimately comprehend reality itself? In this article, we seek to initiate a discussion that focuses on these questions, and…
Haeger, Mathias; Bock, Otmar; Memmert, Daniel; Hüttermann, Stefanie
2018-01-01
Virtual reality offers a good possibility for the implementation of real-life tasks in a laboratory-based training or testing scenario. Computerized training in a driving simulator thus offers an ecologically valid training approach. Since visual attention influences driving performance, we used the reverse approach to test the influence of driving training on visual attention and executive functions. Thirty-seven healthy older participants (mean age: 71.46 ± 4.09; gender: 17 men and 20 women) took part in our controlled experimental study. We examined transfer effects from a four-week driving training (three times per week) on visual attention, executive function, and motor skill. Effects were analyzed using an analysis of variance with repeated measurements, with group and time as main factors to show training-related benefits of our intervention. Results revealed improvements for the intervention group in divided visual attention; however, there were benefits neither in the other cognitive domains nor in the additional motor task. Thus, there are no broad training-induced transfer effects from such an ecologically valid training regime. This lack of findings could be attributed to insufficient training intensity or a participant-induced bias following the cancelled randomization process.
Advanced Visual and Instruction Systems for Maintenance Support (AVIS-MS)
2006-12-01
Hayashi , "Augmentable Reality: Situated Communication through Physical and Digital Spaces," Proc. 2nd Int’l Symp. Wearable Computers, IEEE CS Press...H. Ohno , "An Optical See-through Display for Mutual Occlusion of Real and Virtual Environments," Proc. Int’l Symp. Augmented Reality 2000 (ISARO0
Immersive Training Systems: Virtual Reality and Education and Training.
ERIC Educational Resources Information Center
Psotka, Joseph
1995-01-01
Describes virtual reality (VR) technology and VR research on education and training. Focuses on immersion as the key added value of VR, analyzes cognitive variables connected to immersion, how it is generated in synthetic environments and its benefits. Discusses value of tracked, immersive visual displays over nonimmersive simulations. Contains 78…
Virtual Reality Website of Indonesia National Monument and Its Environment
NASA Astrophysics Data System (ADS)
Wardijono, B. A.; Hendajani, F.; Sudiro, S. A.
2017-02-01
The National Monument (Monumen Nasional) is Indonesia's national monument, located in Jakarta. It is a symbol of Jakarta and a source of pride for the people of Jakarta and of Indonesia, and it also houses a museum on the history of the country. To provide information to the general public, in this research we created and developed 3D graphical models of the National Monument and its surrounding environment. Virtual Reality technology was used to display the visualization of the National Monument and its surroundings in 3D graphical form. Current programming technology makes it possible to display 3D objects in an internet browser. This research used Unity3D and WebGL to build virtual reality models that can be implemented and shown on a website. The result of this research is a three-dimensional website of the National Monument and the objects in its surrounding environment that can be displayed through a web browser. The virtual reality scene of the whole set of objects was divided into a number of sub-scenes so that it can be visualized in real time.
Applied Augmented Reality for High Precision Maintenance
NASA Astrophysics Data System (ADS)
Dever, Clark
Augmented Reality had a major consumer breakthrough this year with Pokemon Go. The underlying technologies that made that app a success with gamers can be applied to improve the efficiency and efficacy of workers. This session will explore some of the use cases for augmented reality in an industrial environment. In doing so, the environmental impacts and human factors that must be considered will be explored. Additionally, the sensors, algorithms, and visualization techniques used to realize augmented reality will be discussed. The benefits of augmented reality solutions in industrial environments include automated data recording, improved quality assurance, reduction in training costs and improved mean-time-to-resolution. As technology continues to follow Moore's law, more applications will become feasible as performance-per-dollar increases across all system components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Song
CFD (Computational Fluid Dynamics) is a widely used technique in the engineering design field. It uses mathematical methods to simulate and predict flow characteristics in a certain physical space. Since the numerical results of CFD computation are very hard to understand, VR (virtual reality) and data visualization techniques are introduced into CFD post-processing to improve the understandability and functionality of CFD computation. In many cases CFD datasets are very large (multi-gigabyte), and more and more interaction between the user and the datasets is required. For traditional VR applications, limited computing power is a major factor preventing the effective visualization of large datasets. This thesis presents a new system designed to speed up the traditional VR application by using parallel computing and distributed computing, along with the idea of using handheld devices to enhance the interaction between a user and a VR CFD application. Techniques from different research areas, including scientific visualization, parallel computing, distributed computing, and graphical user interface design, are used in the development of the final system. As a result, the new system can be flexibly built on heterogeneous computing environments and dramatically shortens computation time.
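The parallel post-processing idea can be illustrated with a minimal block-decomposition sketch in Python; the dataset, block size, and per-block feature extraction below are synthetic placeholders, not the thesis's system.

```python
# Sketch of divide-and-parallelize for large CFD fields: split a scalar
# field into slabs and extract a feature per slab in parallel processes.
import numpy as np
from multiprocessing import Pool

def process_block(block):
    """Placeholder per-block work, e.g., count cells above some threshold."""
    return int((block > 0.8).sum())

if __name__ == "__main__":
    field = np.random.rand(256, 256, 256)                   # stand-in CFD field
    blocks = [field[i:i + 64] for i in range(0, 256, 64)]   # slab decomposition
    with Pool(processes=4) as pool:
        counts = pool.map(process_block, blocks)            # one slab per worker
    print("cells above threshold:", sum(counts))
```

The same decomposition generalizes to distributed nodes, with each node rendering or summarizing its own slab before results are merged for display.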
Personalized augmented reality for anatomy education.
Ma, Meng; Fallavollita, Pascal; Seelbach, Ina; Von Der Heide, Anna Maria; Euler, Ekkehard; Waschke, Jens; Navab, Nassir
2016-05-01
Anatomy education is a challenging but vital element in forming future medical professionals. In this work, a personalized and interactive augmented reality system is developed to facilitate education. This system behaves as a "magic mirror" which allows personalized in-situ visualization of anatomy on the user's body. Real-time volume visualization of a CT dataset creates the illusion that the user can look inside their body. The system comprises an RGB-D sensor as a real-time tracking device to detect the user moving in front of a display. In addition, the magic mirror system shows text information, medical images, and 3D models of organs that the user can interact with. Through the participation of 7 clinicians and 72 students, two user studies were designed to respectively assess the precision and acceptability of the magic mirror system for education. The results of the first study demonstrated that the average precision of the augmented reality overlay on the user's body was 0.96 cm, while the results of the second study indicate 86.1% approval for the educational value of the magic mirror, and 91.7% approval for the augmented reality capability of displaying organs in three dimensions. The usefulness of this unique type of personalized augmented reality technology has been demonstrated in this paper. © 2015 Wiley Periodicals, Inc.
Martín-Pascual, Miguel Ángel; Gruart, Agnès; Delgado-García, José María
2017-01-01
This article explores whether there are differences in visual perception of narrative between theatrical performances and screens, and whether media professionalization affects visual perception. We created a live theatrical stimulus and three audio-visual stimuli (each one with a different video editing style) having the same narrative, and displayed them randomly to participants (20 media professionals and 20 non-media professionals). For media professionals, watching movies on screens evoked a significantly lower spontaneous blink rate (SBR) than looking at theatrical performances. Media professionals presented a substantially lower SBR than non-media professionals when watching screens, and more surprisingly, also when seeing reality. According to our results, media professionals pay higher attention to both screens and the real world than do non-media professionals. PMID:28467449
Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng
2017-01-01
Brain-machine interfaces (BMI) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, it is still a complex task for BMI users to control the process of grasping and lifting objects with the robotic arm, and it is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI with eye tracking for intuitive and effective control of the robotic arm. Experiments involving object manipulation tasks while avoiding an obstacle in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) were verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects was reduced significantly with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123
Mouraux, Dominique; Brassinne, Eric; Sobczak, Stéphane; Nonclercq, Antoine; Warzée, Nadine; Sizer, Phillip S; Tuna, Turgay; Penelle, Benoît
2017-07-01
Objective: We assessed whether or not pain relief could be achieved with a new system that combines a 3D augmented reality system (3DARS) with the principles of mirror visual feedback. Methods: Twenty-two patients between 18 and 75 years of age who suffered from chronic neuropathic pain were included. Each patient performed five 20-minute 3DARS treatment sessions spread over a period of one week. The following pain parameters were assessed: (1) a visual analogue scale (VAS) after each treatment session; (2) the McGill pain scale and DN4 questionnaire, completed before the first session and 24 h after the last session. Results: The mean improvement of VAS per session was 29% (p < 0.001). There was an immediate session effect demonstrating a systematic improvement in pain between the beginning and the end of each session. We noted that this pain reduction was partially preserved until the next session. Comparing the pain level at baseline and 24 h after the last session, there was a significant decrease (p < 0.001) of pain of 37%. There was a significant decrease on the McGill Pain Questionnaire (p < 0.001) and DN4 questionnaire (p < 0.01). Conclusion: Our results indicate that 3DARS induced a significant pain decrease for patients who presented chronic neuropathic pain in a unilateral upper extremity. While further research is necessary before definitive conclusions can be drawn, clinicians could implement the approach as a preparatory adjunct for providing temporary pain relief aimed at enhancing chronic pain patients' tolerance of manual therapy and exercise intervention. Level of Evidence: 4.
New Thermal Taste Actuation Technology for Future Multisensory Virtual Reality and Internet.
Karunanayaka, Kasun; Johari, Nurafiqah; Hariri, Surina; Camelia, Hanis; Bielawski, Kevin Stanley; Cheok, Adrian David
2018-04-01
Today's virtual reality (VR) applications such as gaming, multisensory entertainment, remote dining, and online shopping are mainly based on audio, visual, and touch interactions between humans and virtual worlds. Integrating the sense of taste into VR is difficult since humans are dependent on chemical-based taste delivery systems. This paper presents the 'Thermal Taste Machine', a new digital taste actuation technology that can effectively produce and modify thermal taste sensations on the tongue. It modifies the temperature of the surface of the tongue within a short period of time (from 25 °C to 40 °C while heating, and from 25 °C to 10 °C while cooling). We tested this device on human subjects and described the experience of thermal taste using 20 known (taste and non-taste) sensations. Our results suggested that rapidly heating the tongue produces sweetness, fatty/oiliness, electric taste, and warmness, and reduces sensitivity to metallic taste. Similarly, cooling the tongue produced mint taste, pleasantness, and coldness. By conducting another user study on the perceived sweetness of sucrose solutions after thermal stimulation, we found that heating the tongue significantly enhances the intensity of sweetness for both thermal tasters and non-thermal tasters. We also found that faster temperature rises on the tongue produce more intense sweet sensations for thermal tasters. This technology will be useful in two ways: first, it can produce taste sensations without using chemicals for individuals who are sensitive to thermal taste; second, the temperature rise of the device can be used as a way to enhance the intensity of sweetness. We believe that this technology can be used to digitally produce and enhance taste sensations in future virtual reality applications. The key novelties of this paper are as follows: 1. Development of a thermal taste actuation technology for stimulating the human taste receptors; 2. Characterization of the thermal taste produced by the device using taste-related and non-taste-related sensations; 3. Research on enhancing the perceived intensity of sucrose solutions using thermal stimulation; 4. Research on how different speeds of heating affect the intensity of sweetness produced by thermal stimulation.
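A toy version of the temperature ramps described above (25 °C to 40 °C heating, 25 °C to 10 °C cooling) might look like the following; the ramp rate, time step, and interface are assumptions for illustration, not the device's control code.

```python
# Illustrative constant-rate temperature ramp toward a setpoint. A real
# device would close the loop over a Peltier element and a sensor reading.
def ramp(start_c, target_c, rate_c_per_s, dt=0.1):
    """Yield (time_s, commanded_temperature_c) until the setpoint is reached."""
    t, temp = 0.0, start_c
    step = rate_c_per_s * dt
    while abs(temp - target_c) > step:
        temp += step if target_c > temp else -step
        t += dt
        yield t, temp

# Faster rises were reported to produce more intense sweetness, so the rate
# (assumed 5 degrees C per second here) is the interesting control knob.
for t, temp in ramp(25.0, 40.0, rate_c_per_s=5.0):
    pass
print(f"reached {temp:.1f} degrees C after {t:.1f} s")
```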
Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung
2016-03-01
Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. Copyright © 2015 John Wiley & Sons, Ltd.
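The minimum-distance cue described above can be sketched as a per-frame nearest-neighbor query over the organ surface; the point cloud, units, and SciPy-based implementation below are our assumptions for illustration, not the authors' system.

```python
# Sketch: distance from the surgical tool tip to the nearest organ surface
# point, computed each frame. Surface points and tip pose are placeholders.
import numpy as np
from scipy.spatial import cKDTree

organ_surface = np.random.rand(50_000, 3) * 100.0   # mm, stand-in mesh vertices
tree = cKDTree(organ_surface)                       # build once per organ model

def min_distance(tool_tip_mm):
    d, _ = tree.query(tool_tip_mm)                  # O(log n) nearest-neighbor lookup
    return d

print(f"{min_distance(np.array([50.0, 50.0, 50.0])):.2f} mm to nearest organ point")
```

Because the tree is built once and queried per frame, the cue stays cheap enough to update in real time alongside the AR/VR rendering.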
Avoiding Focus Shifts in Surgical Telementoring Using an Augmented Reality Transparent Display.
Andersen, Daniel; Popescu, Voicu; Cabrera, Maria Eugenia; Shanghavi, Aditya; Gomez, Gerardo; Marley, Sherri; Mullis, Brian; Wachs, Juan
2016-01-01
Conventional surgical telementoring systems require the trainee to shift focus away from the operating field to a nearby monitor to receive mentor guidance. This paper presents the next generation of telementoring systems. Our system, STAR (System for Telementoring with Augmented Reality) avoids focus shifts by placing mentor annotations directly into the trainee's field of view using augmented reality transparent display technology. This prototype was tested with pre-medical and medical students. Experiments were conducted where participants were asked to identify precise operating field locations communicated to them using either STAR or a conventional telementoring system. STAR was shown to improve accuracy and to reduce focus shifts. The initial STAR prototype only provides an approximate transparent display effect, without visual continuity between the display and the surrounding area. The current version of our transparent display provides visual continuity by showing the geometry and color of the operating field from the trainee's viewpoint.
The search for instantaneous vection: An oscillating visual prime reduces vection onset latency.
Palmisano, Stephen; Riecke, Bernhard E
2018-01-01
Typically it takes up to 10 seconds or more to induce a visual illusion of self-motion ("vection"). However, for this vection to be most useful in virtual reality and vehicle simulation, it needs to be induced quickly, if not immediately. This study examined whether vection onset latency could be reduced towards zero using visual display manipulations alone. In the main experiments, visual self-motion simulations were presented to observers via either a large external display or a head-mounted display (HMD). Priming observers with visually simulated viewpoint oscillation for just ten seconds before the main self-motion display was found to markedly reduce vection onset latencies (and also increase ratings of vection strength) in both experiments. As in earlier studies, incorporating this simulated viewpoint oscillation into the self-motion displays themselves was also found to improve vection. Average onset latencies were reduced from 8-9 s in the no-oscillation control condition to as little as 4.6 s (for external displays) or 1.7 s (for HMDs) in the combined oscillation condition (when both the visual prime and the main self-motion display were oscillating). As these display manipulations did not appear to increase the likelihood or severity of motion sickness in the current study, they could possibly be used to enhance computer-generated simulation experiences and training in the future, at no additional cost.
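The simulated viewpoint oscillation used as a prime can be sketched as a sinusoidal offset added to the camera's forward motion; the amplitude and frequency below are illustrative, not the study's values.

```python
# Sketch of an oscillating-viewpoint prime: a sinusoidal vertical camera
# offset superimposed on simulated forward self-motion.
import math

def camera_pose(t, forward_speed=1.0, osc_amp=0.05, osc_hz=2.0):
    """Camera position (x forward in m, y vertical in m) at time t seconds."""
    x = forward_speed * t
    y = osc_amp * math.sin(2.0 * math.pi * osc_hz * t)
    return x, y

# A ten-second prime rendered at 60 fps, matching the priming duration above.
poses = [camera_pose(frame / 60.0) for frame in range(600)]
```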
Real-time 3D image reconstruction guidance in liver resection surgery.
Soler, Luc; Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-04-01
Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. From a patient's medical image (US, computed tomography (CT), or MRI), we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools combining direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were performed, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures have been performed, illustrating the potential clinical benefit of such assistance in improving safety, but also the current limits that automatic augmented reality will have to overcome. Virtual patient modeling should become mandatory for certain interventions that remain to be defined, such as liver surgery. Augmented reality is clearly the next step in new surgical instrumentation, but it currently remains limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of hybrid operating rooms.
The Potential of Using Virtual Reality Technology in Physical Activity Settings
ERIC Educational Resources Information Center
Pasco, Denis
2013-01-01
In recent years, virtual reality technology has been successfully used for learning purposes. The purposes of the article are to examine current research on the role of virtual reality in physical activity settings and discuss potential application of using virtual reality technology to enhance learning in physical education. The article starts…
The Role of Visualization in Learning from Computer-Based Images. Research Report
ERIC Educational Resources Information Center
Piburn, Michael D.; Reynolds, Stephen J.; McAuliffe, Carla; Leedy, Debra E.; Birk, James P.; Johnson, Julia K.
2005-01-01
Among the sciences, the practice of geology is especially visual. To assess the role of spatial ability in learning geology, we designed an experiment using: (1) web-based versions of spatial visualization tests, (2) a geospatial test, and (3) multimedia instructional modules built around QuickTime Virtual Reality movies. Students in control and…
Reality Check: Basics of Augmented, Virtual, and Mixed Reality.
Brigham, Tara J
2017-01-01
Augmented, virtual, and mixed reality applications all aim to enhance a user's current experience or reality. While variations of this technology are not new, within the last few years there has been a significant increase in the number of artificial reality devices or applications available to the general public. This column will explain the difference between augmented, virtual, and mixed reality and how each application might be useful in libraries. It will also provide an overview of the concerns surrounding these different reality applications and describe how and where they are currently being used.
Revolutionizing Education: The Promise of Virtual Reality
ERIC Educational Resources Information Center
Gadelha, Rene
2018-01-01
Virtual reality (VR) has the potential to revolutionize education, as it immerses students in their learning more than any other available medium. By blocking out visual and auditory distractions in the classroom, it has the potential to help students deeply connect with the material they are learning in a way that has never been possible before.…
Images as Representations: Visual Sources on Education and Childhood in the Past
ERIC Educational Resources Information Center
Dekker, Jeroen J.H.
2015-01-01
The challenge of using images for the history of education and childhood will be addressed in this article by looking at them as representations. Central is the relationship between representations and reality. The focus is on the power of paintings as representations of aspects of realities. First the meaning of representation for images as…
Linking Audio and Visual Information while Navigating in a Virtual Reality Kiosk Display
ERIC Educational Resources Information Center
Sullivan, Briana; Ware, Colin; Plumlee, Matthew
2006-01-01
3D interactive virtual reality museum exhibits should be easy to use, entertaining, and informative. If the interface is intuitive, it will allow the user more time to learn the educational content of the exhibit. This research deals with interface issues concerning activating audio descriptions of images in such exhibits while the user is…
ERIC Educational Resources Information Center
Fominykh, Mikhail; Prasolova-Førland, Ekaterina; Stiles, Tore C.; Krogh, Anne Berit; Linde, Mattias
2018-01-01
This paper presents a concept for designing low-cost therapeutic training with biofeedback and virtual reality. We completed the first evaluation of a prototype--a mobile learning application for relaxation training, primarily for adolescents suffering from tension-type headaches. The system delivers visual experience on a head-mounted display. A…
Shen, Xin; Javidi, Bahram
2018-03-01
We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with enhanced depth range of the 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay elemental images at various positions to the microlens array. Based on resolution-priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system significantly enhances the depth range of the 3D reconstructed image while maintaining high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and alleviates the vergence-accommodation conflict of a traditional augmented reality display.
Measuring Visual Displays’ Effect on Novice Performance in Door Gunnery
2014-12-01
The purpose of this paper is to present the results of our recent experimentation involving a novice population performing aerial door gunnery training in a mixed reality simulation. Specifically, we examined the effect that different visual displays had on novice soldier performance; qualified… there was a main effect of visual display on performance. However, both visual treatment groups experienced the same degree of presence and simulator…
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Background Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. Methods A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Results Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus was observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. Conclusions The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures. PMID:28046076
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.
1998-01-05
An interdisciplinary project encompassing sound synthesis, music composition, sonification, and visualization of music is facilitated by the high-performance computing capabilities and the virtual-reality environments available at Argonne National Laboratory. The paper describes the main features of the project's centerpiece, DIASS (Digital Instrument for Additive Sound Synthesis); "A.N.L.-folds", an equivalence class of compositions produced with DIASS; and the application of DIASS in two experiments in the sonification of complex scientific data. Some of the larger issues connected with this project, such as the changing ways in which both scientists and composers perform their tasks, are briefly discussed.
Depictions of substance use in reality television: a content analysis of The Osbournes.
Blair, Nicole A; Yue, So Kuen; Singh, Ranbir; Bernhardt, Jay M
2005-12-24
To determine the source and slant of messages in a reality television programme that may promote or inhibit health related or risky behaviours. Coding visual and verbal references to alcohol, tobacco, and other drug (ATOD) use in The Osbournes. Three reviewers watched all 10 episodes of the first season and coded incidents of substance use according to the substance used (alcohol, tobacco, or drugs), the way use was portrayed (visually or verbally), the source of the message (the character in the show involved in the incident), and the slant of the incident (endorsement or rejection). The variation in number of messages in an average episode, the slant of messages, and message source. The average number of messages per episode was 9.1 (range 2-17). Most drug use messages (15, 54%) implied rejection of drugs, but most alcohol messages (30, 64%) and tobacco messages (12, 75%) implied endorsements for using these substances. Most rejections (34, 94%) were conveyed verbally, but most endorsements (36, 65%) were conveyed visually. Messages varied in frequency and slant by source. The reality television show analysed in this study contains numerous messages on substance use that imply both rejection and endorsement of use. The juxtaposition of verbal rejection messages and visual endorsement messages, and the depiction of contradictory messages about substance use from show characters, may send mixed messages to viewers about substance use.
Analysis of Mental Workload in Online Shopping: Are Augmented and Virtual Reality Consistent?
Zhao, Xiaojun; Shi, Changxiu; You, Xuqun; Zong, Chenming
2017-01-01
A market research company (Nielsen) reported that consumers in the Asia-Pacific region have become the most active group in online shopping. Focusing on augmented reality (AR), one of three major techniques expected to change the method of shopping in the future, this study used a mixed design to examine the influences of the method of online shopping, user gender, cognitive style, product value, and sensory channel on mental workload in virtual reality (VR) and AR situations. The results showed that males' mental workloads were significantly higher than females'. For males, high-value products' mental workload was significantly higher than that of low-value products. In the VR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, but the difference was reduced under audio-visual conditions. In the AR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, and the difference increased under audio-visual conditions. This study provides a psychological study of online shopping with AR and VR technology with applications in the future. From the perspective of embodied cognition, AR online shopping may be a potential focus of research and market application. For the future design of online shopping platforms and the updating of user experience, this study provides a reference.
The Influences of the 2D Image-Based Augmented Reality and Virtual Reality on Student Learning
ERIC Educational Resources Information Center
Liou, Hsin-Hun; Yang, Stephen J. H.; Chen, Sherry Y.; Tarng, Wernhuar
2017-01-01
Virtual reality (VR) learning environments can provide students with concepts of the simulated phenomena, but users are not allowed to interact with real elements. Conversely, augmented reality (AR) learning environments blend real-world environments so AR could enhance the effects of computer simulation and promote students' realistic experience.…
Augmented Reality as a Countermeasure for Sleep Deprivation.
Baumeister, James; Dorrlan, Jillian; Banks, Siobhan; Chatburn, Alex; Smith, Ross T; Carskadon, Mary A; Lushington, Kurt; Thomas, Bruce H
2016-04-01
Sleep deprivation is known to have serious deleterious effects on executive functioning and job performance. Augmented reality has the ability to place pertinent information at the fore, guiding visual focus and reducing instructional complexity. This paper presents a study exploring how spatial augmented reality instructions impact procedural task performance in sleep-deprived users. The user study was conducted to examine performance on a procedural task at six time points over the course of a night of total sleep deprivation. Tasks were provided either by spatial augmented reality-based projections or on an adjacent monitor. The results indicate that participant errors significantly increased in the monitor condition when sleep deprived. The augmented reality condition exhibited a positive influence, with participant errors and completion time showing no significant increase when sleep deprived. The results of our study show that spatial augmented reality is an effective sleep deprivation countermeasure under laboratory conditions.
Applying Augmented Reality in practical classes for engineering students
NASA Astrophysics Data System (ADS)
Bazarov, S. E.; Kholodilin, I. Yu; Nesterov, A. S.; Sokhina, A. V.
2017-10-01
In this article an Augmented Reality application for teaching engineering students of electrical and technological specialties is introduced. In order to increase motivation for learning and the independence of students, new practical guidelines based on Augmented Reality were developed for use in practical classes. During application development, the authors used software such as Unity 3D and Vuforia. The Augmented Reality content consists of 3D models, images, and animations, which are superimposed on real objects, helping students to study specific tasks. A user who has a smartphone, a tablet PC, or Augmented Reality glasses can visualize on-screen virtual objects added to a real environment. To analyze the current situation in higher education (the learners' interest in studying, their satisfaction with the educational process, and the impact of the Augmented Reality application on students), a questionnaire was developed and offered to students; the study involved 24 learners.
Climate Science Communications - Video Visualization Techniques
NASA Astrophysics Data System (ADS)
Reisman, J. P.; Mann, M. E.
2010-12-01
Communicating climate science is challenging due to its complexity, but as they say, a picture is worth a thousand words. Visualization techniques can be merely graphical or can combine multimedia so as to make graphs come alive in context with other visual and auditory cues. This can also make the information come alive in a way that better communicates what the science is all about. What type of graphics to use depends on your audience: some graphs are great for scientists, but if you are trying to communicate to a less sophisticated audience, certain visuals convey information in a more easily perceptible manner. Hollywood techniques and style can be applied to these graphs to give them even more impact. Video is one of the most powerful communication tools in its ability to combine visuals and audio through time. Adding music and visual cues such as pans and zooms can greatly enhance the ability to communicate your concepts. Video software ranges from relatively simple to very sophisticated; in reality, you don't need the best tools to get your point across. In fact, with relatively inexpensive software, you can put together powerful videos that more effectively convey the science you are working on with greater sophistication, and in an entertaining way. We will examine some basic techniques to increase the quality of video visualization and make it more effective in communicating complexity. If a picture is worth a thousand words, a decent video with music and a bit of narration is priceless.
Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.
Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir
2016-06-01
This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
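The abstract names Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm; a sketch of that pipeline using the open-source Open3D library (our choice for illustration, not necessarily the authors' toolchain; file names and parameters are placeholders) might look like this:

```python
# Sketch: coarse FPFH/RANSAC alignment refined by ICP, recovering the rigid
# transform between an RGBD surface reconstruction and a CBCT surface.
import open3d as o3d

def register(source_path, target_path, voxel=5.0):
    src = o3d.io.read_point_cloud(source_path).voxel_down_sample(voxel)
    tgt = o3d.io.read_point_cloud(target_path).voxel_down_sample(voxel)
    for pcd in (src, tgt):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature
    param = o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100)
    src_f, tgt_f = fpfh(src, param), fpfh(tgt, param)
    # Coarse alignment from feature correspondences (RANSAC) ...
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_f, tgt_f, True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3)
    # ... refined with Iterative Closest Point, as named in the abstract.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation  # 4x4 camera-to-CBCT rigid transform

print(register("rgbd_surface.ply", "cbct_surface.ply"))
```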
An indoor augmented reality mobile application for simulation of building evacuation
NASA Astrophysics Data System (ADS)
Sharma, Sharad; Jerripothula, Shanmukha
2015-03-01
Augmented Reality enables people to remain connected with the physical environment they are in, and invites them to look at the world from new and alternative perspectives. There has been increasing interest in emergency evacuation applications for mobile devices. Nearly all smartphones these days are Wi-Fi and GPS enabled. In this paper, we propose a novel emergency evacuation system that will help people safely evacuate a building in an emergency situation. It will further enhance knowledge and understanding of where the exits are in the building and of safe evacuation procedures. We have applied mobile augmented reality (mobile AR) to create an application with the Unity 3D gaming engine. We show how the mobile AR application is able to display a 3D model of the building and an animation of people evacuating, using markers and a web camera. The system gives a visual representation of a building in 3D space, allowing people to see where the exits are through the use of a smartphone or tablet. Pilot studies conducted with the system showed its partial success and demonstrated the effectiveness of the application in emergency evacuation. Our computer vision methods give good results when the markers are close to the camera, but accuracy decreases when the markers are far away.
Transforming Polar Research with Google Glass Augmented Reality (Invited)
NASA Astrophysics Data System (ADS)
Ruthkoski, T.
2013-12-01
Augmented reality is a new technology with the potential to accelerate the advancement of science, particularly in geophysical research. Augmented reality is defined as a live, direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics, or GPS data. When paired with advanced computing techniques on cloud resources, augmented reality has the potential to improve data collection techniques, visualizations, and in-situ analysis for many areas of research. Google is currently a pioneer of augmented reality technology and has released beta versions of its wearable computing device, Google Glass, to a select number of developers and beta testers. This community of 'Glass Explorers' is the vehicle from which Google shapes the future of its augmented reality device. Example applications of Google Glass in geophysical research range from use as a data-gathering interface in harsh climates to an on-site visualization and analysis tool. Early participation in the shaping of the Google Glass device is an opportunity for researchers to tailor this new technology to their specific needs. The purpose of this presentation is to provide geophysical researchers with a hands-on first look at Google Glass and its potential as a scientific tool. Attendees will be given an overview of the technical specifications as well as a live demonstration of the device. Potential applications to geophysical research in polar regions will be the primary focus. The presentation will conclude with an open call to participate, during which attendees may indicate interest in developing projects that integrate Google Glass into their research. [Figure: application mockup of a 'Penguin Counter' Google Glass augmented reality app.]
Visualizing planetary data by using 3D engines
NASA Astrophysics Data System (ADS)
Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.
2017-09-01
We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to include recent developments in the field of computer graphics in our scientific visualization systems and to present data products interactively and in higher quality than before. We have started to set up the first applications that will make use of virtual reality (VR) equipment.
Time Series Data Visualization in World Wide Telescope
NASA Astrophysics Data System (ADS)
Fay, J.
WorldWide Telescope provides a rich set of time series visualizations for both archival and real-time data. WWT consists of both desktop tools for interactive, immersive visualization and HTML5 web-based controls that can be utilized in customized web pages. WWT supports a range of display options including full dome, power walls, stereo, and virtual reality headsets.
Improving the Efficiency of Virtual Reality Training by Integrating Partly Observational Learning
ERIC Educational Resources Information Center
Yuviler-Gavish, Nirit; Rodríguez, Jorge; Gutiérrez, Teresa; Sánchez, Emilio; Casado, Sara
2014-01-01
The current study hypothesized that integrating partly observational learning into virtual reality training systems (VRTS) can enhance training efficiency for procedural tasks. A common approach in designing VRTS is the enactive approach, which stresses the importance of physical actions within the environment to enhance perception and improve…
CARE: Creating Augmented Reality in Education
ERIC Educational Resources Information Center
Latif, Farzana
2012-01-01
This paper explores how Augmented Reality using mobile phones can enhance teaching and learning in education. It specifically examines its application in two cases, where it is identified that the agility of mobile devices and the ability to overlay context specific resources offers opportunities to enhance learning that would not otherwise exist.…
ERIC Educational Resources Information Center
Cheng, Kun-Hung; Tsai, Chin-Chung
2016-01-01
Following a previous study (Cheng & Tsai, 2014. "Computers & Education"), this study aimed to probe the interaction of child-parent shared reading with the augmented reality (AR) picture book in more depth. A series of sequential analyses were thus conducted to infer the behavioral transition diagrams and visualize the continuity…
The Effect of an Augmented Reality Enhanced Mathematics Lesson on Student Achievement and Motivation
ERIC Educational Resources Information Center
Estapa, Anne; Nadolny, Larysa
2015-01-01
The purpose of the study was to assess student achievement and motivation during a high school augmented reality mathematics activity focused on dimensional analysis. Included in this article is a review of the literature on the use of augmented reality in mathematics and the combination of print with augmented reality, also known as interactive…
Virtual surgery in a (tele-)radiology framework.
Glombitza, G; Evers, H; Hassfeld, S; Engelmann, U; Meinzer, H P
1999-09-01
This paper presents telemedicine as an extension of a teleradiology framework through tools for virtual surgery. To classify the described methods and applications, the research field of virtual reality (VR) is broadly reviewed. Differences with respect to technical equipment, methodological requirements and areas of application are pointed out. Desktop VR, augmented reality, and virtual reality are differentiated and discussed in some typical contexts of diagnostic support, surgical planning, therapeutic procedures, simulation and training. Visualization techniques are compared as a prerequisite for virtual reality and assigned to distinct levels of immersion. The advantage of a hybrid visualization kernel is emphasized with respect to the desktop VR applications that are subsequently shown. Moreover, software design aspects are considered by outlining functional openness in the architecture of the host system. Here, a teleradiology workstation was extended by dedicated tools for surgical planning through a plug-in mechanism. Examples of recent areas of application are introduced such as liver tumor resection planning, diagnostic support in heart surgery, and craniofacial surgery planning. In the future, surgical planning systems will become more important. They will benefit from improvements in image acquisition and communication, new image processing approaches, and techniques for data presentation. This will facilitate preoperative planning and intraoperative applications.
Virtual reality simulators and training in laparoscopic surgery.
Yiannakopoulou, Eugenia; Nikiteas, Nikolaos; Perrea, Despina; Tsigris, Christos
2015-01-01
Virtual reality simulators provide basic skills training without supervision in a controlled environment, free of the pressure of operating on patients. Skills obtained through virtual reality simulation training can be transferred to the operating room. However, the relevant evidence is limited, with data available only for basic surgical skills and for laparoscopic cholecystectomy. No data exist on the effect of virtual reality simulation on performance of advanced surgical procedures. Evidence suggests that performance on virtual reality simulators reliably distinguishes experienced from novice surgeons. Limited available data suggest that an independent approach to virtual reality simulation training is not different from a proctored approach. The effect of virtual reality simulator training on the acquisition of basic surgical skills does not seem to differ from that of physical simulators. Limited data exist on the effect of virtual reality simulation training on the acquisition of visual-spatial perception and stress-coping skills. Undoubtedly, virtual reality simulation training provides an alternative means of improving performance in laparoscopic surgery. However, future research efforts should focus on the effect of virtual reality simulation on performance of advanced surgical procedures, on standardization of training, on the possible synergistic effect of virtual reality simulation training combined with mental training, and on personalized training. Copyright © 2014 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Berryman, Donna R
2012-01-01
Augmented reality is a technology that overlays digital information on objects or places in the real world for the purpose of enhancing the user experience. It is not virtual reality, that is, the technology that creates a totally digital or computer created environment. Augmented reality, with its ability to combine reality and digital information, is being studied and implemented in medicine, marketing, museums, fashion, and numerous other areas. This article presents an overview of augmented reality, discussing what it is, how it works, its current implementations, and its potential impact on libraries.
Nishiike, Suetaka; Okazaki, Suzuyo; Watanabe, Hiroshi; Akizuki, Hironori; Imai, Takao; Uno, Atsuhiko; Kitahara, Tadashi; Horii, Arata; Takeda, Noriaki; Inohara, Hidenori
2013-01-01
In this study, we examined the effects of sensory inputs of visual-vestibulosomatosensory conflict induced by virtual reality (VR) on subjective dizziness, postural stability, and the visual dependency of postural control in humans. Eleven healthy young volunteers were immersed in two different VR conditions. In the control condition, subjects walked voluntarily with background images of interactive computer graphics proportionally synchronized to their walking pace. In the visual-vestibulosomatosensory conflict condition, subjects kept still, but the background images that subjects experienced in the control condition were presented. The scores of both Graybiel's and Hamilton's criteria, postural instability, and the Romberg ratio were measured before and after the two conditions. After immersion in the conflict condition, both subjective dizziness and objective postural instability were significantly increased, and the Romberg ratio, an index of the visual dependency of postural control, was slightly decreased. These findings suggest that sensory inputs of visual-vestibulosomatosensory conflict induced by VR caused motion sickness, resulting in subjective dizziness and postural instability. They also suggest that adaptation to the conflict condition decreases the contribution of visual inputs to postural control, with re-weighting of vestibulosomatosensory inputs. VR may be used as a rehabilitation tool for dizzy patients through its ability to induce sensory re-weighting of postural control.
Stepping Into Science Data: Data Visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Skolnik, S.
2017-12-01
Have you ever seen people get really excited about science data? Navteca, along with the Earth Science Technology Office (ESTO) within the Earth Science Division of NASA's Science Mission Directorate, has been exploring virtual reality (VR) technology for the next generation of Earth science technology information systems. One of their first joint experiments was visualizing climate data from the Goddard Earth Observing System Model (GEOS) in VR, and the resulting visualizations greatly excited the scientific community. This presentation will share the value of VR for science, such as the capability of permitting the observer to interact with data rendered in real time, make selections, and view volumetric data in an innovative way. Using interactive VR hardware (headset and controllers), the viewer steps into the data visualizations, physically moving through three-dimensional structures that are traditionally displayed as layers or slices, such as cloud and storm systems from NASA's Global Precipitation Measurement (GPM) mission. Results from displaying this precipitation and cloud data show interesting potential for scientific visualization, 3D/4D visualizations, and interdisciplinary studies using VR. Additionally, VR visualizations can be leveraged as 360 content for scientific communication and outreach, and VR can be used as a tool to engage policy and decision makers, as well as the public.
Mazerand, Edouard; Le Renard, Marc; Hue, Sophie; Lemée, Jean-Michel; Klinger, Evelyne; Menei, Philippe
2017-01-01
Brain mapping during awake craniotomy is a well-known technique to preserve neurological functions, especially language. It is still challenging to map the optic radiations because of the difficulty of testing the visual field intraoperatively. To assess the visual field during awake craniotomy, we developed the Functions' Explorer, based on a virtual reality headset (FEX-VRH). The impaired visual field of 10 patients was tested with automated perimetry (the gold-standard examination) and the FEX-VRH. The proof-of-concept test was done during surgery performed on a patient who was blind in his right eye and presented with a left parietotemporal glioblastoma. The FEX-VRH was used intraoperatively, simultaneously with direct subcortical electrostimulation, allowing identification and preservation of the optic radiations. The FEX-VRH detected 9 of the 10 visual field defects found by automated perimetry. The patient who underwent an awake craniotomy with intraoperative mapping of the optic tract using the FEX-VRH had no permanent postoperative visual field defect. Intraoperative visual field assessment with the FEX-VRH during direct subcortical electrostimulation is a promising approach to mapping the optic radiations and preventing a permanent visual field defect during awake surgery for epilepsy or tumor. Copyright © 2016 Elsevier Inc. All rights reserved.
Virtual reality enhanced mannequin (VREM) that is well received by resuscitation experts.
Semeraro, Federico; Frisoli, Antonio; Bergamasco, Massimo; Cerchiari, Erga L
2009-04-01
The objective of this study was to test acceptance of, and interest in, a newly developed prototype of a virtual reality enhanced mannequin (VREM) on a sample of congress attendees who volunteered to participate in the evaluation session and to respond to a specifically designed questionnaire. A commercial Laerdal HeartSim 4000 mannequin was extended with virtual reality (VR) technologies and specially developed virtual reality software to increase the immersive perception of emergency scenarios. To evaluate the acceptance of the VREM, we presented it to a sample of 39 possible users. Each evaluation session involved one trainee and two instructors with a standardized procedure and scenario: the operator was invited by the instructor to wear the data gloves and the head-mounted display and was briefly introduced to the scope of the simulation. The instructor helped the operator familiarize himself with the environment. After the patient's collapse, the operator was asked to check the patient's clinical condition and start CPR. Finally, the patient started to recover signs of circulation and the evaluation session was concluded. Each participant was then asked to respond to a questionnaire designed to explore the trainee's perception in the areas of user-friendliness, realism, and interaction/immersion. Overall, the evaluation of the system was very positive, as was the feeling of immersion and the realism of the environment and simulation. Overall, 84.6% of the participants judged the virtual reality experience as interesting and believed that its development could be very useful for healthcare training. The prototype of the virtual reality enhanced mannequin was well liked, without interference from the interaction devices, and deserves full technological development and validation in emergency medical training.
Honeybees in a virtual reality environment learn unique combinations of colour and shape.
Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A
2017-10-01
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.
Nifakos, Sokratis; Tomson, Tanja; Zary, Nabil
2014-01-01
Introduction. Antimicrobial resistance is a global health issue. Studies have shown that improved antibiotic prescription education among healthcare professionals reduces mistakes during the antibiotic prescription process. The aim of this study was to investigate novel educational approaches that, through the use of Augmented Reality technology, could make use of the real physical context and thereby enrich the educational process of antibiotics prescription. The objective is to investigate which types of information related to antibiotics could be used in an augmented reality application for antibiotics education. Methods. This study followed the Design-Based Research Methodology, composed of the following main steps: problem analysis, investigation of the information that should be visualized for the training session, and finally the involvement of the end users in the development and evaluation processes of the prototype. Results. Two of the most important aspects of the antibiotic prescription process to represent in an augmented reality application are the antibiotic guidelines and the side effects. Moreover, this study showed how this information could be visualized from a mobile device using an Augmented Reality scanner and antibiotic drug boxes as markers. Discussion. In this study we investigated the usage of objects from a real physical context, such as drug boxes, and how they could be used as educational resources. The logical next steps are to examine how this approach of combining physical and virtual contexts through Augmented Reality applications could contribute to the improvement of competencies among healthcare professionals and its impact on the decrease of antibiotic resistance. PMID:25548733
Immersive Visualization of the Solid Earth
NASA Astrophysics Data System (ADS)
Kreylos, O.; Kellogg, L. H.
2017-12-01
Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and does therefore not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain, and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis. 3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.
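3D Visualizer itself is interactive VR software; as a hedged, non-VR analogue of its "touch a point, see the isosurface through that value" interaction, scikit-image's marching cubes can extract the surface passing through any seeded voxel. The volume below is a synthetic stand-in for a tomography model, not real data.

    # Seed-point isosurface extraction, a desktop analogue of the VR interaction.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.measure import marching_cubes

    rng = np.random.default_rng(0)
    vol = gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=4)  # fake "tomography"

    seed = (32, 20, 40)              # the voxel the user "touches"
    level = vol[seed]                # iso-level is the data value at that point
    verts, faces, normals, values = marching_cubes(vol, level=level)
    print(f"isosurface at {level:.3f}: {len(verts)} vertices, {len(faces)} faces")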
Direct manipulation of virtual objects
NASA Astrophysics Data System (ADS)
Nguyen, Long K.
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
ERIC Educational Resources Information Center
Orman, Evelyn K.; Price, Harry E.; Russell, Christine R.
2017-01-01
Acquiring nonverbal skills necessary to appropriately communicate and educate members of performing ensembles is essential for wind band conductors. Virtual reality learning environments (VRLEs) provide a unique setting for developing these proficiencies. For this feasibility study, we used an augmented immersive VRLE to enhance eye contact, torso…
What Teachers Need to Know about Augmented Reality Enhanced Learning Environments
ERIC Educational Resources Information Center
Wasko, Christopher
2013-01-01
Augmented reality (AR) enhanced learning environments have been designed to teach a variety of subjects by having learners act like professionals in the field as opposed to students in a classroom. The environments, grounded in constructivist and situated learning theories, place students in a meaningful, non-classroom environment and force them…
Use of Virtual Reality Technology to Enhance Undergraduate Learning in Abnormal Psychology
ERIC Educational Resources Information Center
Stark-Wroblewski, Kim; Kreiner, David S.; Boeding, Christopher M.; Lopata, Ashley N.; Ryan, Joseph J.; Church, Tina M.
2008-01-01
We examined whether using virtual reality (VR) technology to provide students with direct exposure to evidence-based psychological treatment approaches would enhance their understanding of and appreciation for such treatments. Students enrolled in an abnormal psychology course participated in a VR session designed to help clients overcome the fear…
An animated depiction of major depression epidemiology.
Patten, Scott B
2007-06-08
Epidemiologic estimates are now available for a variety of parameters related to major depression epidemiology (incidence, prevalence, etc.). These estimates are potentially useful for policy and planning purposes, but it is first necessary that they be synthesized into a coherent picture of the epidemiology of the condition. Several attempts to do so have been made using mathematical modeling procedures. However, this information is not easy to communicate to users of epidemiological data (clinicians, administrators, policy makers). In this study, up-to-date data on major depression epidemiology were integrated using a discrete event simulation model. The mathematical model was animated in Virtual Reality Modeling Language (VRML) to create a visual, rather than mathematical, depiction of the epidemiology. Consistent with existing literature, the model highlights potential advantages of population health strategies that emphasize access to effective long-term treatment. The paper contains a web-link to the animation. Visual animation of epidemiological results may be an effective knowledge translation tool. In clinical practice, such animations could potentially assist with patient education and enhanced long-term compliance.
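The paper's model parameters are not reproduced in the abstract; purely as a hedged toy version of a discrete event simulation of episode onset and remission, the following simpy sketch uses invented rates to show the mechanics only.

    # Toy discrete event simulation of depressive episodes (rates are invented).
    import random
    import simpy

    WEEKS = 52 * 40                    # follow one simulated adult for 40 years
    episodes = []

    def person(env):
        while True:
            yield env.timeout(random.expovariate(1 / 520))  # ~10 years to next onset
            onset = env.now
            yield env.timeout(random.expovariate(1 / 26))   # mean episode ~26 weeks
            episodes.append((onset, env.now))

    env = simpy.Environment()
    env.process(person(env))
    env.run(until=WEEKS)

    depressed = sum(end - start for start, end in episodes)
    print(f"episodes: {len(episodes)}, time depressed: {depressed / WEEKS:.1%}")

A population-level model would run many such processes and aggregate them into incidence and prevalence curves for animation.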
Location-Based Learning through Augmented Reality
ERIC Educational Resources Information Center
Chou, Te-Lien; Chanlin, Lih-Juan
2014-01-01
A context-aware and mixed-reality exploring tool cannot only effectively provide an information-rich environment to users, but also allows them to quickly utilize useful resources and enhance environment awareness. This study integrates Augmented Reality (AR) technology into smartphones to create a stimulating learning experience at a university…
Meteorological Data Visualization in Multi-User Virtual Reality
NASA Astrophysics Data System (ADS)
Appleton, R.; van Maanen, P. P.; Fisher, W. I.; Krijnen, R.
2017-12-01
Because meteorological data sets are large and complex, visualization is important. It enables the precise examination and review of meteorological details and is used as a communication tool for reporting, for education, and to demonstrate the importance of the data to policy makers. For the UCAR community specifically, it is important to explore all such possibilities. Virtual Reality (VR) technology enhances the visualization of volumetric and dynamic data in a more natural way than a standard desktop, keyboard, and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a very intuitive and low-cost way to visualize meteorological data. A VR viewer has been implemented using multiple HTC Vive headsets and allows visualization and analysis of meteorological data in NetCDF format (e.g., output of the NCEP North America Model, NAM). Sources of atmospheric/meteorological data include radar and satellite as well as traditional weather stations. The data include typical meteorological quantities such as temperature, humidity, and air pressure, as well as data described by the climate and forecast (CF) metadata conventions (http://cfconventions.org). Other data, such as lightning-strike data and ultra-high-resolution satellite data, are also becoming available. Users can navigate freely around the data, which is presented in a virtual room at a scale of up to 3.5 x 3.5 meters, and multiple users can manipulate the model simultaneously. Possible manipulations include scaling/translating, filtering by value, and using a slicing tool to cut off specific sections of the data for a closer look. The slicing can be done in any direction using the concept of a 'virtual knife' in real time. Users can also scoop out parts of the data and walk through successive states of the model. Future plans include, among others, further improving performance to a higher update rate (to reduce possible motion sickness) and adding more advanced filtering and annotation capabilities. We are looking for cooperation with data owners with use cases such as those mentioned above. This will help us further improve and develop our tool and broaden its application into other domains.
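The viewer's filter-by-value and slicing operations map naturally onto array selection over a CF-convention NetCDF file. A hedged sketch with xarray follows; the file name ("nam.nc"), variable name ("TMP"), and dimension names are assumptions, not taken from the project.

    # Slice and filter a NAM-style NetCDF file the way the VR tools do.
    import xarray as xr

    ds = xr.open_dataset("nam.nc")          # hypothetical model-output file
    temp = ds["TMP"]                        # hypothetical temperature variable

    snapshot = temp.isel(time=0)
    slice_ = snapshot.isel(x=100)           # "virtual knife": one vertical plane
    hot = snapshot.where(snapshot > 300.0)  # filter by value: voxels above 300 K

    print(slice_.dims, float(hot.max()))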
Visualizing the process of interaction in a 3D environment
NASA Astrophysics Data System (ADS)
Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh
2007-03-01
As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze these data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we attempt to show some methods by which user interaction in a virtual reality environment can be visualized, and how this can allow us to gain greater insight into the process of interaction/learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.
Sounds of silence: How to animate virtual worlds with sound
NASA Technical Reports Server (NTRS)
Astheimer, Peter
1993-01-01
Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.
Distributed augmented reality with 3-D lung dynamics--a planning tool concept.
Hamza-Lup, Felix G; Santhanam, Anand P; Imielińska, Celina; Meeks, Sanford L; Rolland, Jannick P
2007-01-01
Augmented reality (AR) systems add visual information to the world by using advanced display techniques. The advances in miniaturization and reduced hardware costs make some of these systems feasible for applications in a wide set of fields. We present a potential component of the cyber infrastructure for the operating room of the future: a distributed AR-based software-hardware system that allows real-time visualization of three-dimensional (3-D) lung dynamics superimposed directly on the patient's body. Several emergency events (e.g., closed and tension pneumothorax) and surgical procedures related to lung (e.g., lung transplantation, lung volume reduction surgery, surgical treatment of lung infections, lung cancer surgery) could benefit from the proposed prototype.
Hybrid 3D reconstruction and image-based rendering techniques for reality modeling
NASA Astrophysics Data System (ADS)
Sequeira, Vitor; Wolfart, Erik; Bovisio, Emanuele; Biotti, Ester; Goncalves, Joao G. M.
2000-12-01
This paper presents a component approach that combines in a seamless way the strong features of laser range acquisition with the visual quality of purely photographic approaches. The relevant components of the system are: (i) Panoramic images for distant background scenery where parallax is insignificant; (ii) Photogrammetry for background buildings and (iii) High detailed laser based models for the primary environment, structure of exteriors of buildings and interiors of rooms. These techniques have a wide range of applications in visualization, virtual reality, cost effective as-built analysis of architectural and industrial environments, building facilities management, real-estate, E-commerce, remote inspection of hazardous environments, TV production and many others.
Simulators and virtual reality in surgical education.
Chou, Betty; Handa, Victoria L
2006-06-01
This article explores the pros and cons of virtual reality simulators, their ability to train and assess surgical skills, and their potential future applications. Computer-based virtual reality simulators and more conventional box trainers are compared and contrasted. The virtual reality simulator provides objective assessment of surgical skills and immediate feedback to further enhance training. With this ability to provide standardized, unbiased assessment of surgical skills, the virtual reality trainer has the potential to be a tool for selecting, instructing, certifying, and recertifying gynecologists.
Curriculum: Managed Visual Reality.
ERIC Educational Resources Information Center
Gueulette, David G.
This essay examines the relationships between teaching and visual media over the last several hundred years, particularly the concept of illusion. The teacher as magician or shaman is explored and compared to contemporary theories of instruction and the use of media such as television, films, and slides. The teacher as artist and alchemist is also…
Media/Visual Literacy Art Education: Cigarette Ad Deconstruction
ERIC Educational Resources Information Center
Chung, Sheng Kuan
2005-01-01
Visual images are not simply embodiments of social reality; they are indeed ideological sites embedded with powerful discursive sociopolitical meanings that exert strong influences on the ways in which people live their lives. The author of this paper describes the Ad-Deconstruction Project, which challenged students to integrate aesthetic…
NASA Astrophysics Data System (ADS)
Potter, Michael; Bensch, Alexander; Dawson-Elli, Alexander; Linte, Cristian A.
2015-03-01
In minimally invasive surgical interventions, direct visualization of the target area is often not available. Instead, clinicians rely on images from various sources, along with surgical navigation systems, for guidance. These spatial localization and tracking systems function much like the Global Positioning System (GPS) with which we are all familiar. In this work we demonstrate how the video feed from a typical camera, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and to augment the video feed with computer-generated information, such as a rendering of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. This paper describes the software and hardware environment and the methodology for augmenting the real world with virtual models extracted from medical images, providing enhanced visualization beyond the surface view achieved with traditional imaging. Following intrinsic and extrinsic camera calibration, the technique was implemented and demonstrated using a LEGO structure phantom, as well as a 3D-printed patient-specific left atrial phantom. We assessed the quality of the overlay according to fiducial localization, fiducial registration, and target registration errors, as well as the overlay offset error. Using the software extensions we developed in conjunction with common webcams, it is possible to achieve tracking accuracy comparable to that seen with significantly more expensive hardware, leading to target registration errors on the order of 2 mm.
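The stages the paper describes, intrinsic/extrinsic calibration, pose recovery from fiducials, and overlay of virtual geometry, correspond to standard OpenCV calls. The sketch below is a hedged illustration: the intrinsic matrix and fiducial coordinates are invented placeholders, not the phantom's values.

    # Pose-from-fiducials and virtual-point projection sketch with OpenCV.
    import cv2
    import numpy as np

    # Intrinsics would come from cv2.calibrateCamera over checkerboard views;
    # an assumed matrix stands in for that step here.
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist = np.zeros(5)

    # Extrinsics: solvePnP on four fiducials with known 3D positions (mm).
    object_pts = np.array([[0, 0, 0], [50, 0, 0], [50, 50, 0], [0, 50, 0]], float)
    image_pts = np.array([[310, 228], [420, 231], [424, 342], [307, 339]], float)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)

    # Augmentation: project a virtual model point (30 mm below the surface)
    # into the camera frame to find the pixel where it should be drawn.
    model_pts = np.array([[25.0, 25.0, -30.0]])
    proj, _ = cv2.projectPoints(model_pts, rvec, tvec, K, dist)
    print(proj.reshape(-1, 2))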
Seeing Is Believing: Using Virtual Reality to Connect the Dots Between Climate Data and Reality
NASA Astrophysics Data System (ADS)
Skolnik, S.
2016-12-01
Companies like Sony, Samsung, Google, and Facebook are heavily investing in virtual reality (VR) for gaming and entertainment, and 2016 marks an important year as many affordable VR headsets are now commercially available. As VR becomes more widely adopted, one question for the science and research community is how VR can be leveraged for practical use. One answer is found in the use of VR for science storytelling and communication. VR has the potential to allow people to experience scientific content in new and engaging ways, including interacting with GIS data. By adapting data sets to create stunning, immersive visualizations and combining them with 360 video, voiceover, music and other video production techniques, we are creating a new paradigm for science communication. 360 VR content is very compelling when viewed in a VR headset and can also be accessed and viewed in a panoramic manner on the internet via websites and social media. We will discuss the proof of concept use case of a short VR 360 video which combines climate data from NASA with 360 video filmed during an extreme weather event (a blizzard). By connecting GIS data with real video footage, the viewer can gain deeper understanding of climate patterns and better comprehend the correlation between data and reality. The positive reaction this VR climate story garnered at events and conferences, such as ESIP, demonstrates the potential for scientists and researchers to communicate results, findings, and data in an engaging format. By combining GIS data and 360 video, this is a significant new approach to enhance the way that science stories are told.
Visualizing UAS-collected imagery using augmented reality
NASA Astrophysics Data System (ADS)
Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.
2017-05-01
One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles to collecting aerial imagery and generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.
Augmented reality warnings in vehicles: Effects of modality and specificity on effectiveness.
Schwarz, Felix; Fastenmeier, Wolfgang
2017-04-01
In the future, vehicles will be able to warn drivers of hidden dangers before they are visible. Specific warning information about these hazards could improve drivers' reactions and the warning effectiveness, but could also impair them, for example, by additional cognitive-processing costs. In a driving simulator study with 88 participants, we investigated the effects of modality (auditory vs. visual) and specificity (low vs. high) on warning effectiveness. For the specific warnings, we used augmented reality as an advanced technology to display the additional auditory or visual warning information. Part one of the study concentrates on the effectiveness of necessary warnings and part two on the drivers' compliance despite false alarms. For the first warning scenario, we found several positive main effects of specificity. However, subsequent effects of specificity were moderated by the modality of the warnings. The specific visual warnings were observed to have advantages over the three other warning designs concerning gaze and braking reaction times, passing speeds and collision rates. Besides the true alarms, braking reaction times as well as subjective evaluation after these warnings were still improved despite false alarms. The specific auditory warnings were revealed to have only a few advantages, but also several disadvantages. The results further indicate that the exact coding of additional information, beyond its mere amount and modality, plays an important role. Moreover, the observed advantages of the specific visual warnings highlight the potential benefit of augmented reality coding to improve future collision warnings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Depictions of substance use in reality television: a content analysis of The Osbournes
Blair, Nicole A; Yue, So Kuen; Singh, Ranbir; Bernhardt, Jay M
2005-01-01
Objective To determine the source and slant of messages in a reality television programme that may promote or inhibit health related or risky behaviours. Design Coding visual and verbal references to alcohol, tobacco, and other drug (ATOD) use in The Osbournes. Review methods Three reviewers watched all 10 episodes of the first season and coded incidents of substance use according to the substance used (alcohol, tobacco, or drugs), the way use was portrayed (visually or verbally), the source of the message (the character in the show involved in the incident), and the slant of the incident (endorsement or rejection). Main outcome measures The variation in number of messages in an average episode, the slant of messages, and message source. Results The average number of messages per episode was 9.1 (range 2-17). Most drug use messages (15, 54%) implied rejection of drugs, but most alcohol messages (30, 64%) and tobacco messages (12, 75%) implied endorsements for using these substances. Most rejections (34, 94%) were conveyed verbally, but most endorsements (36, 65%) were conveyed visually. Messages varied in frequency and slant by source. Conclusions The reality television show analysed in this study contains numerous messages on substance use that imply both rejection and endorsement of use. The juxtaposition of verbal rejection messages and visual endorsement messages, and the depiction of contradictory messages about substance use from show characters, may send mixed messages to viewers about substance use. PMID:16373737
ERIC Educational Resources Information Center
Sanders, Matthew; Calam, Rachel; Durand, Marianne; Liversidge, Tom; Carmont, Sue Ann
2008-01-01
Background: This study investigated whether providing self-directed and web-based support for parents enhanced the effects of viewing a reality television series based on the Triple P--Positive Parenting Programme. Method: Parents with a child aged 2 to 9 (N=454) were randomly assigned to either a standard or enhanced intervention condition. In…
Virtual reality in radiology: virtual intervention
NASA Astrophysics Data System (ADS)
Harreld, Michael R.; Valentino, Daniel J.; Duckwiler, Gary R.; Lufkin, Robert B.; Karplus, Walter J.
1995-04-01
Intracranial aneurysms are the primary cause of non-traumatic subarachnoid hemorrhage. Morbidity and mortality remain high even with current endovascular intervention techniques. It is presently impossible to identify which aneurysms will grow and rupture; however, hemodynamics are thought to play an important role in aneurysm development. With this in mind, we have simulated blood flow in laboratory animals using three-dimensional computational fluid dynamics software. The data output from these simulations is three-dimensional, complex, and transient. Visualization of 3D flow structures on a standard 2D display is cumbersome, and may be better performed using a virtual reality system. We are developing a VR-based system for visualization of the computed blood flow and stress fields. This paper presents the progress to date and future plans for our clinical VR-based intervention simulator. The ultimate goal is to develop a software system that will be able to accurately model an aneurysm detected on clinical angiography, visualize this model in virtual reality, predict its future behavior, and give insight into the type of treatment necessary. An associated database will give historical and outcome information on prior aneurysms (including dynamic, structural, and categorical data) that will be matched to any current case and assist in treatment planning (e.g., natural history vs. treatment risk, surgical vs. endovascular treatment risks, cure prediction, complication rates).
Bosc, R; Fitoussi, A; Pigneur, F; Tacher, V; Hersant, B; Meningaud, J-P
2017-08-01
Augmented reality on smart glasses allows the surgeon to visualize three-dimensional virtual objects during surgery, superimposed in real time on the patient's anatomy. This preserves the surgeon's view of the surgical field while providing additional computer-generated information, without the need for a physical surgical guide or a remote screen. The three-dimensional objects that we used and visualized in augmented reality came from reconstructions made from the patients' CT scans. These objects were transferred through a dedicated application to stereoscopic smart glasses. The positioning and stabilization of the virtual layers on the patients' anatomy were achieved through the glasses' recognition of a tracker placed on the skin. We used this technology, in addition to the usual localization methods, for preoperative planning and the selection of perforating vessels in 12 patients undergoing breast reconstruction with a deep inferior epigastric artery perforator flap. The 'hands-free' smart glasses with two stereoscopic screens make it possible to provide the reconstructive surgeon with binocular visualization, within the operative field, of the vessels identified on the CT scan. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
[Display technologies for augmented reality in medical applications].
Eck, Ulrich; Winkler, Alexander
2018-04-01
One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems can be used in the future to blend patient and planning information into the surgeon's view, which can improve the efficiency and safety of interventions. In this article we present five visualization methods for integrating augmented reality displays into medical procedures and explain their advantages and disadvantages. Based on an extensive literature review, the various existing approaches for integrating augmented reality displays into medical procedures are divided into five categories, and the most important research results for each approach are presented. A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. In order to integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must comply with the requirements with respect to accuracy, fidelity, ergonomics, and seamless integration into the surgical workflow.
Mobile Augmented Reality as Usability to Enhance Nurse Prevent Violence Learning Satisfaction.
Hsu, Han-Jen; Weng, Wei-Kai; Chou, Yung-Lang; Huang, Pin-Wei
2018-01-01
Violence is a problem in hospitals, and nurses are at high risk of patient aggression in the workplace. This learning course applies mobile augmented reality to strengthen nurses' violence-prevention skills. Mobile technologies are increasingly being introduced and integrated into classroom teaching and clinical applications, improving the quality of the learning course and providing new experiences for nurses.
The CAVE (TM) automatic virtual environment: Characteristics and applications
NASA Technical Reports Server (NTRS)
Kenyon, Robert V.
1995-01-01
Virtual reality may best be defined as the wide-field presentation of computer-generated, multi-sensory information that tracks a user in real time. In addition to the more well-known modes of virtual reality -- head-mounted displays and boom-mounted displays -- the Electronic Visualization Laboratory at the University of Illinois at Chicago recently introduced a third mode: a room constructed from large screens on which the graphics are projected on to three walls and the floor. The CAVE is a multi-person, room sized, high resolution, 3D video and audio environment. Graphics are rear projected in stereo onto three walls and the floor, and viewed with stereo glasses. As a viewer wearing a location sensor moves within its display boundaries, the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. The other viewers in the CAVE are like passengers in a bus, along for the ride. 'CAVE,' the name selected for the virtual reality theater, is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to 'The Simile of the Cave' found in Plato's 'Republic,' in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are his/her only basis for ideas of what real objects are. Rather than having evolved from video games or flight simulation, the CAVE has its motivation rooted in scientific visualization and the SIGGRAPH 92 Showcase effort. The CAVE was designed to be a useful tool for scientific visualization. The Showcase event was an experiment; the Showcase chair and committee advocated an environment for computational scientists to interactively present their research at a major professional conference in a one-to-many format on high-end workstations attached to large projection screens. The CAVE was developed as a 'virtual reality theater' with scientific content and projection that met the criteria of Showcase.
Real-time 3D image reconstruction guidance in liver resection surgery
Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-01-01
Background Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. Methods From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were carried out, demonstrating the accuracy of the 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in improving safety, but also the current limits that automatic augmented reality will overcome. Conclusions Virtual patient modeling should become mandatory for certain interventions that have yet to be defined, such as liver surgery. Augmented reality is clearly the next step in surgical instrumentation but currently remains limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the hybrid OR. PMID:24812598
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruotolo, Francesco, E-mail: francesco.ruotolo@unina2.it; Maffei, Luigi, E-mail: luigi.maffei@unina2.it; Di Gabriele, Maria, E-mail: maria.digabriele@unina2.it
Several international studies have shown that traffic noise has a negative impact on people's health and that people's annoyance does not depend only on noise energy levels, but rather on multi-perceptual factors. The combination of virtual reality technology and audio rendering techniques allows us to experiment with a new approach to environmental noise assessment that can help to investigate in advance the potential negative effects of noise associated with a specific project, and that in turn can help designers make educated decisions. In the present study, the audio-visual impact of a new motorway project on people was assessed by means of immersive virtual reality technology. In particular, participants were exposed to 3D reconstructions of an actual landscape without the projected motorway (ante operam condition), and of the same landscape with the projected motorway (post operam condition). Furthermore, individuals' reactions to noise were assessed by means of objective cognitive measures (short-term verbal memory and executive functions) and subjective evaluations (noise and visual annoyance). Overall, the results showed that the introduction of a projected motorway in the environment can have immediate detrimental effects on people's well-being, depending on the distance from the noise source. In particular, noise due to the new infrastructure seems to exert a negative influence on short-term verbal memory and to increase both visual and noise annoyance. The theoretical and practical implications of these findings are discussed. -- Highlights: ► Impact of traffic noise on people's well-being depends on multi-perceptual factors. ► A multisensory virtual reality technology is used to simulate a projected motorway. ► Effects on short-term memory and auditory and visual subjective annoyance were found. ► The closer the distance from the motorway, the stronger the effect. ► Multisensory virtual reality methodologies can be used to study environmental impact.
Azarnoush, Hamed; Siar, Samaneh; Sawaya, Robin; Zhrani, Gmaan Al; Winkler-Schwartz, Alexander; Alotaibi, Fahad Eid; Bugdadi, Abdulgadir; Bajunaid, Khalid; Marwa, Ibrahim; Sabbagh, Abdulrahman Jafar; Del Maestro, Rolando F
2017-07-01
OBJECTIVE Virtual reality simulators allow development of novel methods to analyze neurosurgical performance. The concept of a force pyramid is introduced as a Tier 3 metric with the ability to provide visual and spatial analysis of 3D force application by any instrument used during simulated tumor resection. This study was designed to answer 3 questions: 1) Do study groups have distinct force pyramids? 2) Do handedness and ergonomics influence force pyramid structure? 3) Are force pyramids dependent on the visual and haptic characteristics of simulated tumors? METHODS Using a virtual reality simulator, NeuroVR (formerly NeuroTouch), ultrasonic aspirator force application was continually assessed during resection of simulated brain tumors by neurosurgeons, residents, and medical students. The participants performed simulated resections of 18 simulated brain tumors with different visual and haptic characteristics. The raw data, namely, coordinates of the instrument tip as well as contact force values, were collected by the simulator. To provide a visual and qualitative spatial analysis of forces, the authors created a graph, called a force pyramid, representing force sum along the z-coordinate for different xy coordinates of the tool tip. RESULTS Sixteen neurosurgeons, 15 residents, and 84 medical students participated in the study. Neurosurgeon, resident and medical student groups displayed easily distinguishable 3D "force pyramid fingerprints." Neurosurgeons had the lowest force pyramids, indicating application of the lowest forces, followed by resident and medical student groups. Handedness, ergonomics, and visual and haptic tumor characteristics resulted in distinct well-defined 3D force pyramid patterns. CONCLUSIONS Force pyramid fingerprints provide 3D spatial assessment displays of instrument force application during simulated tumor resection. Neurosurgeon force utilization and ergonomic data form a basis for understanding and modulating resident force application and improving patient safety during tumor resection.
NASA Astrophysics Data System (ADS)
Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.
2018-01-01
The Virtual Reality Laboratory (VR Lab) is an innovation in conventional learning media that presents the whole laboratory learning process. Users need many tools and materials for practical work, so the VR Lab lets them experience a new learning atmosphere. Technologies are now more sophisticated than before, and carrying them into education can make it more effective and efficient. The supporting technologies needed to build the VR Lab are a head-mounted display device and a hand-motion gesture device; their integration forms the basis of this research. The head-mounted display device presents the 3D environment of the virtual reality laboratory, while the hand-motion gesture device captures the user's real hands so that they can be visualized in the virtual laboratory. Virtual reality shows that using the newest technologies in the learning process can make it more interesting and easier to understand.
Which technology to investigate visual perception in sport: video vs. virtual reality.
Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit
2015-02-01
Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further.
NASA Astrophysics Data System (ADS)
Lee, Wendy
The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominantly focused on visual representations and extraction of information, with little focus on sound. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panoramic visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays accompanied by a variety of auditory cues ranging from sine tones to camera-shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.
Learning Science Using AR Book: A Preliminary Study on Visual Needs of Deaf Learners
NASA Astrophysics Data System (ADS)
Megat Mohd. Zainuddin, Norziha; Badioze Zaman, Halimah; Ahmad, Azlina
Augmented Reality (AR) is a technology projected to play an increasingly significant role in teaching and learning, particularly in visualizing abstract concepts in the learning process. AR is a visually oriented technology and is thus suitable for deaf learners, who are generally classified as visual learners. Recognizing the importance of a visual learning style for deaf learners studying science, this paper reports on a preliminary study, part of ongoing research, into the problems deaf learners face in learning the topic of microorganisms. Being visual learners, they have problems with current textbooks, which are more text-based than graphic-based. In this preliminary study, a qualitative approach using ethnographic observation was adopted to allow interaction with three deaf learners who participated throughout the study (they were also actively involved in the design and development of the AR Book). Interviews with their teacher and doctor were also conducted to identify their learning and medical problems, respectively. Preliminary findings have confirmed the need to design and develop a special Augmented Reality book called AR-Science for Deaf Learners (AR-SiD).
3D Visualizations of Abstract DataSets
2010-08-01
[Report documentation fragment; only parts are recoverable: the study contrasts no shadows, drop shadows, and drop lines. Subject terms: 3D displays, 2.5D displays, abstract network visualizations, depth perception, human factors. The work concerns altitude perception in airspace management and airspace route planning, and asks whether the depth cues employed by display designers for depicting real-world scenes on a flat surface can be applied to create a perception of depth for abstract data.]
Rehabilitation of Visual and Perceptual Dysfunction after Severe Traumatic Brain Injury
2013-03-01
[Report front matter, heavily garbled; recoverable details: Award Number W81XWH-11-2-0082; title "Rehabilitation of Visual and Perceptual Dysfunction after Severe Traumatic Brain Injury"; reporting period 1 March 2012 – 28 February 2013. The fragment also mentions a virtual reality mode with life-size obstacles, a perceived safe path derived from the simulation, and fixating a cross that appears and moves with an eccentricity offset.]
Cassar, Alyssa M; Denyer, Gareth S; O'Connor, Helen T; Gifford, Janelle A
2018-02-23
Nutrition literacy is linked to health via its influence on dietary intake. There is a need for a tool to assess nutrition literacy in research and dietetic practice. We sought guidance from nutrition professionals on topic areas and features of an electronic nutrition literacy assessment tool for Australian adults. Twenty-eight experienced nutrition professionals engaged in a range of nutrition and dietetic work areas participated in six focus groups using a semi-structured interview schedule. Data were analysed inductively using NVivo 10 (QSR International Pty Ltd., Doncaster, Australia, 2012). Key areas identified for assessing nutrition literacy included specific nutrients versus foods, labels and packaging, construction of the diet, knowledge of the Australian Dietary Guidelines and the Australian Guide to Healthy Eating, understanding of serve and portion sizes, ability to select healthier foods, and demographics such as belief systems and culture. Exploiting electronic features to enhance visual and auditory displays, including interactive animations such as "drag and drop" and virtual reality scenarios, was also discussed. This study provided insight into the most relevant topic areas and presentation format for assessing the nutrition literacy of adult Australians. The visual, auditory, and interactive capacity of the available technology could enhance the assessment of nutrition literacy.
G2H--graphics-to-haptic virtual environment development tool for PC's.
Acosta, E; Temkin, B; Krummel, T M; Heinrichs, W L
2000-01-01
Existing surgical virtual environments for training and preparation have improved greatly, but the improvements have been mostly visual. Incorporating haptics into virtual reality-based surgical simulations would greatly enhance the sense of realism. To aid the development of haptic surgical virtual environments, we have created a graphics-to-haptic (G2H) virtual environment development tool. G2H transforms graphical virtual environments (created or imported) into haptic virtual environments without programming. The G2H capability has been demonstrated using the complex 3D pelvic model of Lucy 2.0, the Stanford Visible Female: the pelvis was made haptic using G2H without any further programming effort.
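For context, a minimal sketch of penalty-based haptic rendering, the usual way a graphical surface is given a "feel"; this illustrates the general technique only and is not the actual G2H implementation (the stiffness constant and function names are our assumptions):

```python
import numpy as np

STIFFNESS = 800.0  # N/m, assumed spring constant

def haptic_force(probe_pos, surface_point, surface_normal):
    """Return a restoring force when the haptic probe penetrates a surface.

    probe_pos, surface_point, surface_normal: length-3 numpy arrays;
    surface_normal must be unit length and point out of the object.
    """
    penetration = np.dot(surface_point - probe_pos, surface_normal)
    if penetration <= 0.0:          # probe is outside the surface
        return np.zeros(3)
    return STIFFNESS * penetration * surface_normal  # push the probe back out

# Example: probe 2 mm below a horizontal surface at z = 0
f = haptic_force(np.array([0.0, 0.0, -0.002]),
                 np.array([0.0, 0.0, 0.0]),
                 np.array([0.0, 0.0, 1.0]))
print(f)  # -> [0.  0.  1.6] newtons, pushing the probe up
```

A tool like G2H would derive the surface points and normals automatically from the graphical mesh, which is what removes the programming burden the abstract describes.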
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
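To make the fusion idea concrete, here is a toy sketch of the Kalman measurement-update step that fuses pose estimates from two cameras; the paper's actual filter, state model, and noise tuning are more elaborate, and all values below are illustrative assumptions:

```python
import numpy as np

def kalman_update(x, P, z, R):
    """Standard Kalman measurement update with an identity observation model."""
    S = P + R                      # innovation covariance
    K = P @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (z - x)        # corrected state
    P_new = (np.eye(len(x)) - K) @ P
    return x_new, P_new

x = np.zeros(3)                    # predicted pose (position only, for brevity)
P = np.eye(3) * 1.0                # predicted covariance

# Fuse two noisy camera measurements sequentially; a less trusted camera
# (larger R) pulls the estimate less.
z_cam1, R_cam1 = np.array([0.10, 0.02, 1.50]), np.eye(3) * 0.01
z_cam2, R_cam2 = np.array([0.12, 0.01, 1.48]), np.eye(3) * 0.05

x, P = kalman_update(x, P, z_cam1, R_cam1)
x, P = kalman_update(x, P, z_cam2, R_cam2)
print(x)  # fused pose estimate, weighted toward the more precise camera
```

Robustness to occlusion then falls out naturally: a camera that loses the target simply contributes no update (or one with inflated R) for that frame.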
Collaborative Embodied Learning in Mixed Reality Motion-Capture Environments: Two Science Studies
ERIC Educational Resources Information Center
Johnson-Glenberg, Mina C.; Birchfield, David A.; Tolentino, Lisa; Koziupa, Tatyana
2014-01-01
These 2 studies investigate the extent to which an Embodied Mixed Reality Learning Environment (EMRELE) can enhance science learning compared to regular classroom instruction. Mixed reality means that physical tangible and digital components were present. The content for the EMRELE required that students map abstract concepts and relations onto…
Social Augmented Reality: Enhancing Context-Dependent Communication and Informal Learning at Work
ERIC Educational Resources Information Center
Pejoska, Jana; Bauters, Merja; Purma, Jukka; Leinonen, Teemu
2016-01-01
Our design proposal of social augmented reality (SoAR) grows from the observed difficulties of practical applications of augmented reality (AR) in workplace learning. In our research we investigated construction workers doing physical work in the field and analyzed the data using qualitative methods in various workshops. The challenges related to…
ERIC Educational Resources Information Center
Yeung, Yau-Yuen
2004-01-01
This paper presentation will report on how some science educators at the Science Department of The Hong Kong Institute of Education have successfully employed an array of innovative learning media such as three-dimensional (3D) and virtual reality (VR) technologies to create seven sets of resource kits, most of which are being placed on the…
ERIC Educational Resources Information Center
Giorgis, Scott; Mahlen, Nancy; Anne, Kirk
2017-01-01
The augmented reality (AR) sandbox bridges the gap between two-dimensional (2D) and three-dimensional (3D) visualization by projecting a digital topographic map onto a sandbox landscape. As the landscape is altered, the map dynamically adjusts, providing an opportunity to discover how to read topographic maps. We tested the hypothesis that the AR…
Advanced Technology for Portable Personal Visualization
1993-01-01
[Progress-report fragment; only parts are recoverable: the display wearer has "no cable to drag"; a short article was submitted describing the ceiling tracker and the requirements demanded of trackers in see-through systems; newspaper/magazine articles included "Virtual Reality: It's All in the Mind," Atlanta Constitution, 29 September 1992, and "Virtual Reality: Exploring the Future [...]"; the research supports basic scientific investigation of the human haptic system and haptic interfaces for virtual environments and teleoperation.]
Augmented Reality Comes to Physics
NASA Astrophysics Data System (ADS)
Buesing, Mark; Cook, Michael
2013-04-01
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS, as do mobile apps such as Tagwhat and Star Chart (a must for astronomy class). The yellow line marking first downs in a televised football game and the enhanced puck that makes televised hockey easier to follow both use augmented reality to do the job.
Piao, Jin-Chun; Kim, Shin-Dug
2017-01-01
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143
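The trajectory-error metric quoted above is straightforward to reproduce; a minimal sketch of the translation RMSE between estimated keyframe positions and ground truth (illustrative only; benchmarks such as EuRoC normally require trajectory alignment and time association first, which is omitted here):

```python
import numpy as np

def translation_rmse(estimated, ground_truth):
    """estimated, ground_truth: (N, 3) arrays of aligned keyframe positions."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return np.sqrt(np.mean(errors ** 2))

est = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [2.0, 0.2, 0.1]])
gt  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.1, 0.0]])
print(f"ATE RMSE: {translation_rmse(est, gt):.4f} m")  # -> 0.1000 m
```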
Direct Manipulation in Virtual Reality
NASA Technical Reports Server (NTRS)
Bryson, Steve
2003-01-01
Virtual Reality interfaces offer several advantages for scientific visualization such as the ability to perceive three-dimensional data structures in a natural way. The focus of this chapter is direct manipulation, the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way, much as objects are manipulated in the real world. Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator the ability to intuitively explore the data environment. Because direct manipulation is essentially a control interface, it is better suited for the exploration and analysis of a data set than for the publishing or communication of features found in that data set. Thus direct manipulation is most relevant to the analysis of complex data that fills a volume of three-dimensional space, such as a fluid flow data set. Direct manipulation allows the intuitive exploration of that data, which facilitates the discovery of data features that would be difficult to find using more conventional visualization methods. Using a direct manipulation interface in virtual reality, an investigator can, for example, move a data probe about in space, watching the results and getting a sense of how the data varies within its spatial volume.
Visualizing Mars Using Virtual Reality: A State of the Art Mapping Technique Used on Mars Pathfinder
NASA Technical Reports Server (NTRS)
Stoker, C.; Zbinden, E.; Blackmon, T.; Nguyen, L.
1999-01-01
We describe an interactive terrain visualization system which rapidly generates and interactively displays photorealistic three-dimensional (3-D) models produced from stereo images. This product, first demonstrated on Mars Pathfinder, is interactive, 3-D, and can be viewed in an immersive display, which qualifies it for the name Virtual Reality (VR). The use of this technology on Mars Pathfinder was the first use of VR for geologic analysis. A primary benefit of using VR to display geologic information is that it provides an improved perception of depth and spatial layout of the remote site. The VR aspect of the display allows an operator to move freely in the environment, unconstrained by the physical limitations of the perspective from which the data were acquired. Virtual Reality offers a way to archive and retrieve information in a way that is intuitively obvious. Combining VR models with stereo display systems can give the user a sense of presence at the remote location. The capability to interactively perform measurements from within the VR model offers unprecedented ease in performing operations that are normally time-consuming and difficult using other techniques. Thus, Virtual Reality can be a powerful cartographic tool. Additional information is contained in the original extended abstract.
Advancing Creative Visual Thinking with Constructive Function-Based Modelling
ERIC Educational Resources Information Center
Pasko, Alexander; Adzhiev, Valery; Malikova, Evgeniya; Pilyugin, Victor
2013-01-01
Modern education technologies are destined to reflect the realities of a modern digital age. The juxtaposition of real and synthetic (computer-generated) worlds as well as a greater emphasis on visual dimension are especially important characteristics that have to be taken into account in learning and teaching. We describe the ways in which an…
NASA Astrophysics Data System (ADS)
Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong
2015-03-01
Boring logs are widely used in geological field studies since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application that enables its user to search boring logs rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for the other modules. A field survey was also carried out using more than 100,000 borehole records.
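As a hedged illustration of the geo-registration step such an AR module needs, the sketch below converts a borehole's map position into a horizontal screen offset from the device's camera heading; the function names, the flat-screen projection, and all coordinates are our assumptions, not details from the paper:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from the device (lat1, lon1) to a borehole (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_x(bearing, heading, fov_deg=60.0, width_px=1024):
    """Horizontal pixel position of the overlay, or None if outside the view."""
    offset = (bearing - heading + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if abs(offset) > fov_deg / 2:
        return None
    return int(width_px * (0.5 + offset / fov_deg))

b = bearing_deg(37.55, 127.00, 37.56, 127.01)
print(screen_x(b, heading=30.0))  # pixel column where the borehole icon is drawn
```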
Visual landmarks facilitate rodent spatial navigation in virtual reality environments
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-day training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning of reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D.; Bowman, Doug A.; Kopper, Regis
Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. The paper presents a framework for evaluating the effects of virtual reality fidelity based on an analysis of a simulation's display, interaction, and scenario components. Following this framework, we conducted a controlled experiment to test the effects of fidelity on training effectiveness for a visual scanning task. The experiment varied the levels of field of view and visual realism during a training phase and then evaluated scanning performance at the simulator's highest level of fidelity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual realism significantly affected target detection during training; higher field of view led to better performance and higher visual realism worsened performance. Additionally, the level of visual realism during training significantly affected learning of the prescribed visual scanning strategy, providing evidence that high visual realism was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training; evaluation in a more realistic setting may be necessary.
Augmented Reality as a Telemedicine Platform for Remote Procedural Training.
Wang, Shiyao; Parsons, Michael; Stone-McLean, Jordan; Rogers, Peter; Boyd, Sarah; Hoover, Kristopher; Meruvia-Pastor, Oscar; Gong, Minglun; Smith, Andrew
2017-10-10
Traditionally, rural areas in many countries are limited by a lack of access to health care due to the inherent challenges associated with recruitment and retention of healthcare professionals. Telemedicine, which uses communication technology to deliver medical services over distance, is an economical and potentially effective way to address this problem. In this research, we develop a new telepresence application using an Augmented Reality (AR) system. We explore the use of the Microsoft HoloLens to facilitate and enhance remote medical training. Intrinsic advantages of AR systems enable remote learners to perform complex medical procedures such as Point of Care Ultrasound (PoCUS) without visual interference. This research uses the HoloLens to capture the first-person view of a simulated rural emergency room (ER) through mixed reality capture (MRC) and serves as a novel telemedicine platform with remote pointing capabilities. The mentor's hand gestures are captured using a Leap Motion and virtually displayed in the AR space of the HoloLens. To explore the feasibility of the developed platform, twelve novice medical trainees were guided by a mentor through a simulated ultrasound exploration in a trauma scenario, as part of a pilot user study. The study explores the utility of the system from the trainees, mentor, and objective observers' perspectives and compares the findings to that of a more traditional multi-camera telemedicine solution. The results obtained provide valuable insight and guidance for the development of an AR-supported telemedicine platform.
NASA Technical Reports Server (NTRS)
Brooks, Frederick P., Jr.
1991-01-01
The utility of virtual reality computer graphics in telepresence applications is not hard to grasp and promises to be great. When the virtual world is entirely synthetic, as opposed to real but remote, the utility is harder to establish. Vehicle simulators for aircraft, vessels, and motor vehicles are proving their worth every day. Entertainment applications such as Disney World's StarTours are technologically elegant, good fun, and economically viable. Nevertheless, some of us have no real desire to spend our lifework serving the entertainment craze of our sick culture; we want to see this exciting technology put to work in medicine and science. The topics covered include the following: testing a force display for scientific visualization -- molecular docking; and testing a head-mounted display for scientific and medical visualization.
Augmented Reality M-Learning to Enhance Nursing Skills Acquisition in the Clinical Skills Laboratory
ERIC Educational Resources Information Center
Garrett, Bernard M.; Jackson, Cathryn; Wilson, Brian
2015-01-01
Purpose: This paper aims to report on a pilot research project designed to explore if new mobile augmented reality (AR) technologies have the potential to enhance the learning of clinical skills in the lab. Design/methodology/approach: An exploratory action-research-based pilot study was undertaken to explore an initial proof-of-concept design in…
Enhancing The Army Operations Process Through The Incorporation of Holography
2017-06-09
[Report fragment; only parts are recoverable: holography streamlines the process and gives the user the sense of a noninvasive enhancement for making decisions quickly; processes and information no longer require the user to mentally overlay data onto the process; data now augments reality and is a noninvasive aid to decision making; information augmented on top of reality decreases the amount of time needed to make decisions.]
Zarka, David; Cevallos, Carlos; Petieau, Mathieu; Hoellinger, Thomas; Dan, Bernard; Cheron, Guy
2014-01-01
Biological motion observation has been recognized to produce dynamic change in sensorimotor activation according to the observed kinematics. Physical plausibility of the spatial-kinematic relationship of human movement may play a major role in the top-down processing of human motion recognition. Here, we investigated the time course of scalp activation during observation of human gait in order to extract and use it on future integrated brain-computer interface using virtual reality (VR). We analyzed event related potentials (ERP), the event related spectral perturbation (ERSP) and the inter-trial coherence (ITC) from high-density EEG recording during video display onset (−200–600 ms) and the steady state visual evoked potentials (SSVEP) inside the video of human walking 3D-animation in three conditions: Normal; Upside-down (inverted images); and Uncoordinated (pseudo-randomly mixed images). We found that early visual evoked response P120 was decreased in Upside-down condition. The N170 and P300b amplitudes were decreased in Uncoordinated condition. In Upside-down and Uncoordinated conditions, we found decreased alpha power and theta phase-locking. As regards gamma oscillation, power was increased during the Upside-down animation and decreased during the Uncoordinated animation. An SSVEP-like response oscillating at about 10 Hz was also described showing that the oscillating pattern is enhanced 300 ms after the heel strike event only in the Normal but not in the Upside-down condition. Our results are consistent with most of previous point-light display studies, further supporting possible use of virtual reality for neurofeedback applications. PMID:25278847
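For readers unfamiliar with the ERSP measure reported above, a rough sketch of one way to compute it follows; this is illustrative only (EEGLAB-style analyses typically use wavelet decompositions across many trials, and the sampling rate, window sizes, and baseline interval below are our assumptions):

```python
import numpy as np
from scipy import signal

FS = 512                                       # assumed sampling rate, Hz
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, FS))         # 40 trials x 1 s of one channel

# Time-frequency power per trial via short-time Fourier analysis
f, t, S = signal.spectrogram(trials, fs=FS, nperseg=128, noverlap=96)
power = S.mean(axis=0)                         # average power across trials

# ERSP: dB change of post-event power relative to a pre-event baseline window
baseline = power[:, t < 0.2].mean(axis=1, keepdims=True)  # assumed baseline
ersp_db = 10 * np.log10(power / baseline)
print(ersp_db.shape)                           # (frequencies, time bins)
```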
Erikson, Li; Barnard, Patrick; O'Neill, Andrea; Wood, Nathan J.; Jones, Jeanne M.; Finzi Hart, Juliette; Vitousek, Sean; Limber, Patrick; Hayden, Maya; Fitzgibbon, Michael; Lovering, Jessica; Foxgrover, Amy C.
2018-01-01
This paper is the second of two that describes the Coastal Storm Modeling System (CoSMoS) approach for quantifying physical hazards and socio-economic hazard exposure in coastal zones affected by sea-level rise and changing coastal storms. The modelling approach, presented in Part 1, downscales atmospheric global-scale projections to local scale coastal flood impacts by deterministically computing the combined hazards of sea-level rise, waves, storm surges, astronomic tides, fluvial discharges, and changes in shoreline positions. The method is demonstrated through an application to Southern California, United States, where the shoreline is a mix of bluffs, beaches, highly managed coastal communities, and infrastructure of high economic value. Results show that inclusion of 100-year projected coastal storms will increase flooding by 9–350% (an additional average 53.0 ± 16.0 km2) in addition to a 25–500 cm sea-level rise. The greater flooding extents translate to a 55–110% increase in residential impact and a 40–90% increase in building replacement costs. To communicate hazards and ranges in socio-economic exposures to these hazards, a set of tools were collaboratively designed and tested with stakeholders and policy makers; these tools consist of two web-based mapping and analytic applications as well as virtual reality visualizations. To reach a larger audience and enhance usability of the data, outreach and engagement included workshop-style trainings for targeted end-users and innovative applications of the virtual reality visualizations.
NASA Astrophysics Data System (ADS)
Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.
2007-03-01
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
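As an illustrative sketch of the camera-pose-estimation step described above: given the known 3D positions of the navigation aids (from the TRUS dataset) and their 2D detections in the endoscope image, a PnP solver recovers the camera pose, after which a hidden target can be projected for overlay. The coordinates, intrinsics, and the choice of OpenCV's solvePnP are our assumptions, not the paper's implementation:

```python
import numpy as np
import cv2  # OpenCV, assumed available

# Navigation-aid positions in the TRUS frame (mm) and their image detections (px)
object_points = np.array([[0.0, 0.0, 0.0],
                          [30.0, 0.0, 0.0],
                          [30.0, 30.0, 0.0],
                          [0.0, 30.0, 0.0]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [424.0, 344.0],
                         [322.0, 340.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])      # assumed endoscope intrinsics
dist = np.zeros(5)                   # assume distortion already corrected

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

# Superimpose a hidden structure (e.g., a preoperative planning target)
target = np.array([[15.0, 15.0, -20.0]])  # mm, in the TRUS frame
pixel, _ = cv2.projectPoints(target, rvec, tvec, K, dist)
print(ok, pixel.ravel())                  # 2D location to draw the overlay
```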
Anil, S M; Kato, Y; Hayakawa, M; Yoshida, K; Nagahisha, S; Kanno, T
2007-04-01
Advances in computer imaging and technology have facilitated enhanced surgical planning, with a 3-dimensional model of the surgical plan of action built using advanced visualization tools in order to plan individual interactive operations with the aid of the dextroscope. This provides proper 3-dimensional insight into the pathological anatomy and sets a new dimension in collaboration for training and education. We present the case of a seventeen-year-old female operated on with the aid of preoperative 3-dimensional virtual reality planning, together with the practical application of that plan to the neurosurgical operation. This young lady presented with a two-year history of recurrent episodes of severe, global, throbbing headache with episodes of projectile vomiting associated with shoulder pain, which progressively worsened. She had no obvious neurological deficits on clinical examination. CT and MRI showed a contrast-enhancing midline posterior fossa space-occupying lesion. Using virtual imaging technology with the aid of a dextroscope, which generates stereoscopic images, a 3-dimensional image was produced from the CT and MRI images. Preoperative planning for excision of the lesion was made, a real-time 3-dimensional volume was produced, surgical planning was performed with the dextroscope, and the lesion was excised. Virtual reality has brought new proportions to the 3-dimensional planning and management of various complex neuroanatomical problems faced during operations. Integration of 3-dimensional imaging with stereoscopic vision makes understanding the complex anatomy easier and helps improve decision making in patient management.
2006-06-01
[Report fragment on partially overlapped binocular head-mounted displays; only parts are recoverable: partial overlap allows substantial see-around capability; regions of visual suppression due to binocular rivalry ("luning") are shown along the shaded flanks of the overlap region; the luning associated with the partial-overlap conditions (Velger, 1998, pp. 56-58) did not materially [...]; tags were displayed, so the frequency of conflicting binocular contours was reduced; in any case, luning does not seem to introduce major [...].]
HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization
2013-01-01
[Abstract fragment, partially duplicated in extraction: the paper describes the HyFinBall user interface (hardware and software), its design space, and preliminary results of a formal user study, in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations. Keywords: virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture.]
Using an Augmented Reality Device as a Distance-based Vision Aid-Promise and Limitations.
Kinateder, Max; Gualtieri, Justin; Dunn, Matt J; Jarosz, Wojciech; Yang, Xing-Dong; Cooper, Emily A
2018-06-06
For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids. The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss. An AR application that translates spatial information into high-contrast visual patterns was developed. Two experiments assessed the efficacy of the application to improve vision: an exploratory study with four visually impaired participants and a main controlled study with participants with simulated vision loss (n = 48). In both studies, performance was tested on a range of visual tasks (identifying the location, pose and gesture of a person, identifying objects, and moving around in an unfamiliar space). Participants' accuracy and confidence were compared on these tasks with and without augmented vision, as well as their subjective responses about ease of mobility. In the main study, the AR application was associated with substantially improved accuracy and confidence in object recognition (all P < .001) and, to a lesser degree, in gesture recognition (P < .05). There was no significant change in performance on identifying body poses or in subjective assessments of mobility, as compared with a control group. Consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system. Current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome.
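A minimal sketch of the core idea, translating spatial (depth) information into high-contrast visual patterns that remain visible with severe vision loss; the banding scheme and thresholds are our assumptions, not the study's actual mapping:

```python
import numpy as np

def depth_to_high_contrast(depth_m, near=1.0, mid=2.5):
    """depth_m: (H, W) array of metric depths from a depth sensor.
    Returns a uint8 image in which nearer surfaces are rendered brighter."""
    out = np.zeros(depth_m.shape, dtype=np.uint8)
    out[depth_m < mid] = 128    # mid-range surfaces: gray band
    out[depth_m < near] = 255   # near surfaces: maximum contrast
    return out

depth = np.array([[0.8, 1.6], [2.0, 4.0]])
print(depth_to_high_contrast(depth))
# [[255 128]
#  [128   0]]
```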
Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
NASA Astrophysics Data System (ADS)
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
2010-01-01
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interaction and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a video see-through head-mounted display (HMD) for visualization, whereas the user's movement in the real world is tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some software and complexity issues, which are discussed in the paper.
A telescope with augmented reality functions
NASA Astrophysics Data System (ADS)
Hou, Qichao; Cheng, Dewen; Wang, Qiwei; Wang, Yongtian
2016-10-01
This study introduces a telescope with virtual reality (VR) and augmented reality (AR) functions. In this telescope, information on the micro-display screen is integrated into the reticle of the telescope through a beam splitter and is then received by the observer. The design and analysis of the telescope optical system with AR and VR capability are accomplished, and the opto-mechanical structure is designed. Finally, a proof-of-concept prototype is fabricated and demonstrated. The telescope has an exit pupil diameter of 6 mm at an eye relief of 19 mm, a 6° field of view, 5 to 8 times visual magnification, and a 30° field of view of the virtual image.
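For context, the quoted specifications are mutually consistent under the standard afocal-telescope relations; the following worked check is ours, not the paper's:

```latex
\[
M = \frac{f_{\mathrm{objective}}}{f_{\mathrm{eyepiece}}}, \qquad
D_{\mathrm{exit}} = \frac{D_{\mathrm{objective}}}{M}, \qquad
\mathrm{FOV}_{\mathrm{virtual}} \approx M \cdot \mathrm{FOV}_{\mathrm{true}}.
\]
% At M = 5: FOV_virtual = 5 x 6 deg = 30 deg, matching the quoted field of
% view of the virtual image, and the 6 mm exit pupil would imply an objective
% aperture of D = M * D_exit = 30 mm (our inference, assuming the low end of
% the 5-8x magnification range).
```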
Technical note: real-time web-based wireless visual guidance system for radiotherapy.
Lee, Danny; Kim, Siyong; Palta, Jatinder R; Kim, Taeho
2017-06-01
To describe a Web-based wireless visual guidance system that mitigates issues associated with hard-wired, audio-visual-aided, patient-interactive motion management systems, which are cumbersome to use in routine clinical practice. The Web-based wireless visual display duplicates an existing visual display of a respiratory-motion management system for visual guidance. The visual display of the existing system is sent to legacy Web clients over a private wireless network, thereby allowing a wireless setting for real-time visual guidance. In this study, an active breathing coordinator (ABC) trace was used as the input for the visual display, which was captured and transmitted to Web clients. Virtual reality goggles require two images (left- and right-eye views) for visual display. We investigated the performance of Web-based wireless visual guidance by quantifying (1) the network latency of visual displays between an ABC computer display and Web clients (a laptop, an iPad mini 2, and an iPhone 6), and (2) the frame rate of the visual display on the Web clients in frames per second (fps). The network latency of the visual display between the ABC computer and Web clients was about 100 ms, and the frame rates were 14.0 fps (laptop), 9.2 fps (iPad mini 2) and 11.2 fps (iPhone 6). In addition, the visual display for virtual reality goggles was successfully shown on the iPhone 6 with about 100 ms latency at 11.2 fps. High network security was maintained by utilizing the private network configuration. This study demonstrated that Web-based wireless visual guidance can be a promising technique for clinical motion management systems that require real-time visual display of their outputs. Based on the results of this study, our approach has the potential to reduce clutter associated with wired systems, reduce space requirements, and extend the use of medical devices from static usage to interactive and dynamic usage in a radiotherapy treatment vault.
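As a hedged illustration of how such a display-duplication service could be assembled (the note does not specify its software stack; Flask, OpenCV, and the MJPEG multipart pattern are our choices, and grab_display_frame is a hypothetical placeholder):

```python
import time
import cv2                      # used only to JPEG-encode captured frames
import numpy as np
from flask import Flask, Response

app = Flask(__name__)

def grab_display_frame():
    """Placeholder for capturing the guidance display (e.g., the ABC trace)."""
    return np.full((240, 320, 3), 255, dtype=np.uint8)  # dummy white frame

def mjpeg_stream():
    while True:
        ok, jpg = cv2.imencode(".jpg", grab_display_frame())
        if ok:
            yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                   + jpg.tobytes() + b"\r\n")
        time.sleep(1 / 15)       # ~15 fps, near the reported laptop rate

@app.route("/guidance")
def guidance():
    return Response(mjpeg_stream(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # serve on the private network only
```

Any browser-capable client on the private wireless network could then render the stream; latency could be measured by stamping each frame with its capture time.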
[What do virtual reality tools bring to child and adolescent psychiatry?]
Bioulac, S; de Sevin, E; Sagaspe, P; Claret, A; Philip, P; Micoulaud-Franchi, J A; Bouvard, M P
2018-06-01
Virtual reality is a relatively new technology that enables individuals to immerse themselves in a virtual world. It offers several advantages, including a more realistic, lifelike environment that may allow subjects to "forget" they are being assessed, allow better participation, and increase generalization of learning. Moreover, a virtual reality system can provide multimodal stimuli, such as visual and auditory stimuli, and can be used to evaluate the patient's multimodal integration and to aid rehabilitation of cognitive abilities. The use of virtual reality to treat various psychiatric disorders in adults (phobic anxiety disorders, post-traumatic stress disorder, eating disorders, addictions…) is supported by numerous studies; similar research for children and adolescents is lagging behind. Virtual reality may be particularly beneficial to children, who often show great interest, and considerable success, in computer, console, or videogame tasks. This article reviews the main studies that have used virtual reality with children and adolescents suffering from psychiatric disorders. The use of virtual reality to treat anxiety disorders in adults is gaining popularity, and its efficacy is supported by various studies, most of which attest to the significant efficacy of virtual reality exposure therapy (or in virtuo exposure). In children, studies have covered arachnophobia, social anxiety, and school refusal phobia; despite the limited number of studies, results are very encouraging for the treatment of anxiety disorders. Several studies have reported the clinical use of virtual reality technology for children and adolescents with autism spectrum disorders (ASD). Extensive research has proven the efficiency of such technologies as support tools for therapy, with research focused on communication, learning, and social imitation skills; virtual reality is also well accepted by subjects with ASD. The virtual environment offers the opportunity to administer controlled tasks, such as typical neuropsychological tools, but in an environment much more like a standard classroom. The virtual reality classroom offers several advantages over classical tools, providing a more realistic and lifelike environment while recording various measures under standardized conditions. Most studies using a virtual classroom have found that children with Attention Deficit/Hyperactivity Disorder (ADHD) make significantly fewer correct hits and more commission errors compared with controls; the virtual classroom has proven to be a good clinical tool for the evaluation of attention in ADHD. For eating disorders, a cognitive behavioural therapy (CBT) program enhanced by a body-image-specific component using virtual reality techniques was shown to be more efficient than cognitive behavioural therapy alone; the body-image-specific component boosts efficiency and accelerates the CBT change process. Virtual reality is a relatively new technology, and its application in child and adolescent psychiatry is recent. The technique is still in its infancy, and much work is needed, including controlled trials, before it can be introduced into routine clinical use. Virtual reality interventions should also investigate how newly acquired skills transfer to the real world. At present, virtual reality can be considered a useful tool in the evaluation and treatment of child and adolescent disorders.
Creating Physical 3D Stereolithograph Models of Brain and Skull
Kelley, Daniel J.; Farhoud, Mohammed; Meyerand, M. Elizabeth; Nelson, David L.; Ramirez, Lincoln F.; Dempsey, Robert J.; Wolf, Alan J.; Alexander, Andrew L.; Davidson, Richard J.
2007-01-01
The human brain and skull are three dimensional (3D) anatomical structures with complex surfaces. However, medical images are often two dimensional (2D) and provide incomplete visualization of structural morphology. To overcome this loss in dimension, we developed and validated a freely available, semi-automated pathway to build 3D virtual reality (VR) and hand-held, stereolithograph models. To evaluate whether surface visualization in 3D was more informative than in 2D, undergraduate students (n = 50) used the Gillespie scale to rate 3D VR and physical models of both a living patient-volunteer's brain and the skull of Phineas Gage, a historically famous railroad worker whose misfortune with a projectile tamping iron provided the first evidence of a structure-function relationship in brain. Using our processing pathway, we successfully fabricated human brain and skull replicas and validated that the stereolithograph model preserved the scale of the VR model. Based on the Gillespie ratings, students indicated that the biological utility and quality of visual information at the surface of VR and stereolithograph models were greater than the 2D images from which they were derived. The method we developed is useful to create VR and stereolithograph 3D models from medical images and can be used to model hard or soft tissue in living or preserved specimens. Compared to 2D images, VR and stereolithograph models provide an extra dimension that enhances both the quality of visual information and utility of surface visualization in neuroscience and medicine. PMID:17971879
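The central step in any such pathway from a volumetric medical image to a printable or VR surface model is isosurface extraction; a minimal sketch using marching cubes follows (the synthetic sphere volume and threshold stand in for a real segmented MRI/CT volume, and this is a standard technique, not necessarily the paper's exact toolchain):

```python
import numpy as np
from skimage import measure  # scikit-image, assumed available

# Build a dummy segmented volume (a sphere) in place of a real brain mask
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)

# Extract a triangulated isosurface at the mask boundary
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
# verts/faces can then be exported (e.g., to STL) for stereolithographic
# printing or loaded into a VR viewer.
```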
"What Does This Picture Say?" Reading the Intertextuality of Visual Images
ERIC Educational Resources Information Center
Werner, Walter
2004-01-01
Our social worlds are visually saturated. A feature of post-modern society is its relentless traffic in images, often borrowed from diverse times and places, and patched together in ever changing ways. This traffic serves commercial purposes, shapes identities, and increasingly stands in for reality itself. As a newspaper columnist noted, "most of…
ERIC Educational Resources Information Center
Greffou, Selma; Bertone, Armando; Hahler, Eva-Maria; Hanssens, Jean-Marie; Mottron, Laurent; Faubert, Jocelyn
2012-01-01
Although atypical motor behaviors have been associated with autism, investigations regarding their possible origins are scarce. This study assessed the visual and vestibular components involved in atypical postural reactivity in autism. Postural reactivity and stability were measured for younger (12-15 years) and older (16-33 years) autistic…
Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning
ERIC Educational Resources Information Center
Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan
2009-01-01
Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…
Data sonification and sound visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.; Wiebel, E.
1999-07-01
Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.
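A minimal sketch of additive-synthesis sonification in the spirit of Diass, though not Diass's own mapping: each data value controls the amplitude of one sine partial of a tone written to a WAV file. The base frequency, duration, and normalization are invented for illustration.

```python
# Illustrative additive-synthesis sonification: one sine partial per data
# value; larger values make their partial louder.
import math, wave, struct

def sonify(data, base_freq=110.0, duration=2.0, rate=44100):
    """Sum one sine partial per data value and write sonification.wav."""
    peak = max(abs(v) for v in data) or 1.0
    n = int(duration * rate)
    samples = []
    for i in range(n):
        t = i / rate
        s = sum((v / peak) * math.sin(2 * math.pi * base_freq * (k + 1) * t)
                for k, v in enumerate(data))
        samples.append(s / len(data))          # keep within [-1, 1]
    with wave.open("sonification.wav", "w") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(rate)
        w.writeframes(b"".join(
            struct.pack("<h", int(32767 * s)) for s in samples))

sonify([0.9, 0.1, 0.5, 0.3, 0.7])  # five data values -> five partials
```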
Coaching Method in Teaching History of Visual Arts to Students
ERIC Educational Resources Information Center
Faizrakhmanova, Aigul; Averianova, Tatiana; Aitov, Valerie; Kudinova, Gulnara; Lebedeva, Inessa
2018-01-01
Coaching method is used in sports, business, psychology, and economics as a method to increase performance. The great potential of coaching also expands its application in education, namely in teaching History of Visual Arts. The author identifies the basic stages of coaching: goal setting; reality check; courses of action and will to act. The…
3D Flow visualization in virtual reality
NASA Astrophysics Data System (ADS)
Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa
2017-11-01
By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
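For readers unfamiliar with the quantity being isosurfaced, the following sketch (our own, not the authors' code) computes the Q-criterion from a velocity field sampled on a uniform grid; regions with Q > 0 are the vortex-dominated zones rendered in the headset. The grid spacing and the toy rotation field are assumptions.

```python
# Q = 0.5 * (||Omega||^2 - ||S||^2), with S the strain-rate tensor and
# Omega the rotation tensor of the velocity gradient.
import numpy as np

def q_criterion(u, v, w, dx=1.0):
    """Q-criterion from velocity components u, v, w on a uniform grid."""
    grads = [np.gradient(c, dx) for c in (u, v, w)]  # rows of the Jacobian
    J = np.array(grads)          # J[i][j] = d(u_i)/d(x_j), each a 3D array
    Q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            S = 0.5 * (J[i][j] + J[j][i])   # strain-rate component
            O = 0.5 * (J[i][j] - J[j][i])   # rotation component
            Q += 0.5 * (O**2 - S**2)
    return Q

# Toy solid-body-rotation field: Q should be positive in the vortex core.
x, y, z = np.meshgrid(*[np.linspace(-1, 1, 32)] * 3, indexing="ij")
Q = q_criterion(-y, x, np.zeros_like(x))
print("max Q:", Q.max())
```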
3D animation model with augmented reality for natural science learning in elementary school
NASA Astrophysics Data System (ADS)
Hendajani, F.; Hakim, A.; Lusita, M. D.; Saputra, G. E.; Ramadhana, A. P.
2018-05-01
Many primary school students consider Natural Science a difficult lesson. Many subjects are not easily understood, especially materials that teach theories about natural processes, such as the rain cycle, condensation and many others. The difficulty students experience is that they cannot imagine the things being taught. Although some materials allow a few theories to be practiced, practice opportunities are quite limited, and existing video or simulation materials take the form of 2D animated images; as a result, concepts in natural science lessons are poorly understood. This Natural Science learning medium uses three-dimensional (3D) animation models with augmented reality technology to visualize science lessons. The application was created to visualize processes in Natural Science subject matter, with the aim of improving students' conceptual understanding. It runs on a personal computer equipped with a webcam and displays a 3D animation when the camera recognizes the marker.
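A hedged sketch of the marker-recognition step such an application needs, using OpenCV's ArUco module (opencv-contrib-python); the exact aruco API differs slightly between OpenCV versions, and the dictionary choice and window handling here are illustrative only. A full application would estimate the marker's pose and render the 3D animation on top of it.

```python
# Detect a printed fiducial marker in the webcam image; the real app would
# anchor its 3D animation to the detected marker.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)                 # the PC's webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        # Marker found: here the app would estimate pose and draw the
        # animation; this sketch just outlines the detection.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("AR marker detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```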
Khor, Wee Sim; Baker, Benjamin; Amin, Kavit; Chan, Adrian; Patel, Ketan; Wong, Jason
2016-12-01
The continuing enhancement of the surgical environment in the digital age has led to a number of innovations being highlighted as potential disruptive technologies in the surgical workplace. Augmented reality (AR) and virtual reality (VR) are rapidly becoming increasingly available, accessible and, importantly, affordable, so their application in healthcare to enhance the medical use of data is certain. Whether it relates to anatomy, intraoperative surgery, or post-operative rehabilitation, applications are already being investigated for their role in the surgeon's armamentarium. Here we provide an introduction to the technology and the potential areas of development in the surgical arena.
Visual Stability of Objects and Environments Viewed through Head-Mounted Displays
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Adelstein, Bernard D.
2015-01-01
Virtual Environments (aka Virtual Reality) are again catching the public imagination, and a number of startups (e.g. Oculus) and even not-so-startup companies (e.g. Microsoft) are trying to develop display systems to capitalize on this renewed interest. All acknowledge that this time they will get it right by providing the required dynamic fidelity, visual quality, and interesting content for the concept of VR to take off and change the world in ways it failed to do in past incarnations. Some of the surprisingly long historical background of the direct-simulation technology that underlies virtual environment and augmented reality displays will be briefly reviewed. An example of a mid-1990s augmented reality display system with good dynamic performance from our lab will be used to illustrate some of the underlying phenomena and technology concerning visual stability of virtual environments and objects during movement. In conclusion, some idealized performance characteristics for a reference system will be proposed. Interestingly, many systems more or less on the market now may actually meet many of these proposed technical requirements. This observation leads to the conclusion that the current success of the IT firms trying to commercialize the technology will depend on the hidden costs of using the systems as well as the development of interesting and compelling content.
Virtual reality training and assessment in laparoscopic rectum surgery.
Pan, Jun J; Chang, Jian; Yang, Xiaosong; Liang, Hui; Zhang, Jian J; Qureshi, Tahseen; Howell, Robert; Hickish, Tamas
2015-06-01
Virtual-reality (VR) based simulation techniques offer an efficient and low-cost alternative to conventional surgery training. This article describes a VR training and assessment system for laparoscopic rectum surgery. To give a realistic visual rendering of the interaction between membrane tissue and surgical tools, a generalized-cylinder-based collision detection scheme and a multi-layer mass-spring model are presented. A dynamic assessment model is also designed for hierarchical training evaluation. With this simulator, trainees can operate on the virtual rectum with simultaneous visual and haptic feedback. The system also gives surgeons real-time instructions when improper manipulation occurs. The simulator has been tested and evaluated by ten subjects, and the prototype system has been verified by colorectal surgeons through a pilot study. They believe the visual performance and the tactile feedback are realistic. It exhibits the potential to effectively improve the surgical skills of trainee surgeons and significantly shorten their learning curve. Copyright © 2014 John Wiley & Sons, Ltd.
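The following is a minimal sketch, under our own assumptions, of the damped mass-spring update at the heart of such membrane-tissue models; the stiffness, damping, and two-node toy setup are illustrative, not the authors' parameters.

```python
# Semi-implicit Euler step for point masses connected by damped springs.
import numpy as np

def step(pos, vel, springs, rest, k=50.0, c=0.5, m=0.01, g=-9.8, dt=1e-3):
    """One integration step. springs: (n,2) index pairs; rest: rest lengths."""
    force = np.zeros_like(pos)
    force[:, 2] += m * g                          # gravity on every node
    d = pos[springs[:, 1]] - pos[springs[:, 0]]   # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    f = k * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(force, springs[:, 0],  f)           # equal and opposite forces
    np.add.at(force, springs[:, 1], -f)
    vel = (vel + dt * force / m) * (1.0 - c * dt) # damped velocity update
    return pos + dt * vel, vel

# Tiny two-node example: one spring stretched along z.
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.2]])
vel = np.zeros_like(pos)
springs = np.array([[0, 1]]); rest = np.array([1.0])
for _ in range(1000):
    pos, vel = step(pos, vel, springs, rest)
print(pos)
```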
Enabling scientific workflows in virtual reality
Kreylos, O.; Bawden, G.; Bernardin, T.; Billen, M.I.; Cowgill, E.S.; Gold, R.D.; Hamann, B.; Jadamec, M.; Kellogg, L.H.; Staadt, O.G.; Sumner, D.Y.
2006-01-01
To advance research and improve the scientific return on data collection and interpretation efforts in the geosciences, we have developed methods of interactive visualization, with a special focus on immersive virtual reality (VR) environments. Earth sciences employ a strongly visual approach to the measurement and analysis of geologic data due to the spatial and temporal scales over which such data ranges. As observations and simulations increase in size and complexity, the Earth sciences are challenged to manage and interpret increasing amounts of data. Reaping the full intellectual benefits of immersive VR requires us to tailor exploratory approaches to scientific problems. These applications build on the visualization method's strengths, using both 3D perception and interaction with data and models, to take advantage of the skills and training of the geological scientists exploring their data in the VR environment. This interactive approach has enabled us to develop a suite of tools that are adaptable to a range of problems in the geosciences and beyond. Copyright © 2008 by the Association for Computing Machinery, Inc.
Learning prosthetic vision: a virtual-reality study.
Chen, Spencer C; Hallum, Luke E; Lovell, Nigel H; Suaning, Gregg J
2005-09-01
Acceptance of prosthetic vision will be heavily dependent on the ability of recipients to form useful information from such vision. Training strategies to accelerate learning and maximize visual comprehension would need to be designed in light of the factors affecting human learning under prosthetic vision. Some of these potential factors were examined in a visual acuity study using the Landolt C optotype under virtual-reality simulation of prosthetic vision. Fifteen normally sighted subjects were tested for 10-20 sessions. Potential learning factors were tested at p < 0.05 with regression models. Learning was most evident across sessions, though 17% of sessions did express significant within-session trends. Learning was highly concentrated toward a critical range of optotype sizes, and subjects were less capable of identifying the closed optotype (a Landolt C with no gap, forming a closed annulus). Training for implant recipients should target these critical sizes and the closed optotype to extend the limit of visual comprehension. Although there was no evidence that image processing affected overall learning, subjects showed varying personal preferences.
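To make the simulation concrete, here is a sketch (our own assumptions, not the study's renderer) that reduces an image to a coarse grid of Gaussian "phosphenes," a standard way prosthetic vision is approximated for normally sighted subjects; the grid size and spot width are illustrative.

```python
# Render an image as a grid of Gaussian phosphenes whose brightness
# follows local image intensity.
import numpy as np

def phosphenize(img, grid=24, sigma=0.35):
    """img: 2D float array in [0,1]; returns a grid x grid phosphene map."""
    h, w = img.shape
    out = np.zeros_like(img)
    yy, xx = np.mgrid[0:h, 0:w]
    cell_h, cell_w = h / grid, w / grid
    for gy in range(grid):
        for gx in range(grid):
            block = img[int(gy*cell_h):int((gy+1)*cell_h),
                        int(gx*cell_w):int((gx+1)*cell_w)]
            cy, cx = (gy + 0.5) * cell_h, (gx + 0.5) * cell_w
            r2 = ((yy - cy) / (sigma*cell_h))**2 + ((xx - cx) / (sigma*cell_w))**2
            out += block.mean() * np.exp(-0.5 * r2)   # one Gaussian phosphene
    return np.clip(out, 0.0, 1.0)

# Toy Landolt-C-like ring with a gap, rendered through the simulator.
y, x = np.mgrid[-64:64, -64:64]
ring = ((x**2 + y**2 < 40**2) & (x**2 + y**2 > 25**2)
        & ~((x > 0) & (abs(y) < 8)))
print(phosphenize(ring.astype(float)).shape)
```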
Decision exploration lab: a visual analytics solution for decision management.
Broeksema, Bertjan; Baudel, Thomas; Telea, Arthur G; Crisafulli, Paolo
2013-12-01
We present a visual analytics solution designed to address prevalent issues in the area of Operational Decision Management (ODM). In ODM, which has its roots in Artificial Intelligence (Expert Systems) and Management Science, it is increasingly important to align business decisions with business goals. In our work, we consider decision models (executable models of the business domain) as ontologies that describe the business domain, and production rules that describe the business logic of decisions to be made over this ontology. Executing a decision model produces an accumulation of decisions made over time for individual cases. We are interested, first, in gaining insight into the decision logic and the accumulated facts by themselves. Second, and more importantly, we want to see how the accumulated facts reveal potential divergences between the reality as captured by the decision model and the reality as captured by the executed decisions. We illustrate the motivation, the added value for visual analytics, and our proposed solution and tooling through a business case from the car insurance industry.
Mobile device geo-localization and object visualization in sensor networks
NASA Astrophysics Data System (ADS)
Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael
2014-10-01
In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multifunctional application design. The application applies different localization and visualization methods, including use of the smartphone camera image, and copes well with different scenarios. A generic application workflow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management or military applications.
VRML metabolic network visualizer.
Rojdestvenski, Igor
2003-03-01
A successful data collection visualization should satisfy many requirements: unification of diverse data formats, support for serendipity research, support of hierarchical structures, algorithmizability, high information density, Internet-readiness, and others. Recently, virtual reality has made significant progress in engineering, architectural design, entertainment and communication. We experiment with the possibility of using immersive, abstract three-dimensional visualizations of metabolic networks. We present the trial Metabolic Network Visualizer software, which produces a graphical representation of a metabolic network as a VRML world from a formal description written in a simple SGML-type scripting language.
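A toy sketch of the generation step such a tool performs, using our own minimal input format rather than the authors' SGML-type language: metabolites become VRML spheres and reactions become an indexed line set.

```python
# Emit a VRML97 world: one sphere per metabolite, one polyline per reaction.
def network_to_vrml(nodes, edges, path="network.wrl"):
    """nodes: {name: (x, y, z)}; edges: [(name_a, name_b), ...]."""
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        for name, (x, y, z) in nodes.items():
            f.write(f"Transform {{ translation {x} {y} {z} children [\n"
                    "  Shape { geometry Sphere { radius 0.2 }\n"
                    "    appearance Appearance { material Material"
                    " { diffuseColor 0.2 0.6 1 } } } ] }\n")
        names = list(nodes)
        pts = " ".join(f"{x} {y} {z}" for x, y, z in nodes.values())
        idx = " ".join(f"{names.index(a)} {names.index(b)} -1"
                       for a, b in edges)
        f.write("Shape { geometry IndexedLineSet {\n"
                f"  coord Coordinate {{ point [ {pts} ] }}\n"
                f"  coordIndex [ {idx} ] }} }}\n")

# A three-metabolite toy pathway.
network_to_vrml({"glucose": (0, 0, 0), "G6P": (1, 0.5, 0), "F6P": (2, 0, 0.5)},
                [("glucose", "G6P"), ("G6P", "F6P")])
```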
Shin, Ji-Won; Song, Gui-Bin; Hwangbo, Gak
2015-07-01
[Purpose] The purpose of the study was to evaluate the effects of conventional neurological treatment and a virtual reality training program on eye-hand coordination in children with cerebral palsy. [Subjects] Sixteen children (9 males, 7 females) with spastic diplegic cerebral palsy were recruited and randomly assigned to the conventional neurological physical therapy group (CG) and virtual reality training group (VRG). [Methods] Eight children in the control group performed 45 minutes of therapeutic exercise twice a week for eight weeks. In the experimental group, the other eight children performed 30 minutes of therapeutic exercise and 15 minutes of a training program using virtual reality twice a week during the experimental period. [Results] After eight weeks of the training program, there were significant differences in eye-hand coordination and visual motor speed in the comparison of the virtual reality training group with the conventional neurological physical therapy group. [Conclusion] We conclude that a well-designed training program using virtual reality can improve eye-hand coordination in children with cerebral palsy.
Meldrum, Dara; Herdman, Susan; Moloney, Roisin; Murray, Deirdre; Duffy, Douglas; Malone, Kareena; French, Helen; Hone, Stephen; Conroy, Ronan; McConn-Walsh, Rory
2012-03-26
Unilateral peripheral vestibular loss results in gait and balance impairment, dizziness and oscillopsia. Vestibular rehabilitation benefits patients but optimal treatment remains unknown. Virtual reality is an emerging tool in rehabilitation and provides opportunities to improve both outcomes and patient satisfaction with treatment. The Nintendo Wii Fit Plus® (NWFP) is a low cost virtual reality system that challenges balance and provides visual and auditory feedback. It may augment the motor learning that is required to improve balance and gait, but no trials to date have investigated efficacy. In a single (assessor) blind, two centre randomised controlled superiority trial, 80 patients with unilateral peripheral vestibular loss will be randomised to either conventional or virtual reality based (NWFP) vestibular rehabilitation for 6 weeks. The primary outcome measure is gait speed (measured with three dimensional gait analysis). Secondary outcomes include computerised posturography, dynamic visual acuity, and validated questionnaires on dizziness, confidence and anxiety/depression. Outcome will be assessed post treatment (8 weeks) and at 6 months. Advances in the gaming industry have allowed mass production of highly sophisticated low cost virtual reality systems that incorporate technology previously not accessible to most therapists and patients. Importantly, they are not confined to rehabilitation departments, can be used at home and provide an accurate record of adherence to exercise. The benefits of providing augmented feedback, increasing intensity of exercise and accurately measuring adherence may improve conventional vestibular rehabilitation but efficacy must first be demonstrated. ClinicalTrials.gov identifier: NCT01442623.
Guo, Chunlan; Deng, Hongyan; Yang, Jian
2015-01-01
To assess the effect of virtual reality distraction on pain among patients with a hand injury undergoing a dressing change. Clinical research had not previously addressed pain control during a dressing change. A randomised controlled trial was performed. In the first dressing change sequence, 98 patients were randomly divided into an experimental group and a control group, with 49 cases in each group. Pain levels were compared between the two groups before and after the dressing change using a visual analog scale, and the relationship between the sense of involvement in the virtual environment and pain level was examined using Pearson correlation analysis. The difference in visual analog scale scores between the two groups before the dressing change was not statistically significant (t = 0.196, p > 0.05), but the difference became statistically significant after the dressing change (t = -30.792, p < 0.01). The correlation between the sense of involvement in the virtual environment and pain level during the dressing change was statistically significant (R(2) = 0.5538, p < 0.05). Virtual reality distraction can effectively alleviate pain among patients with a hand injury undergoing a dressing change, and better results can be obtained by increasing the sense of involvement in the virtual environment. Virtual reality distraction can effectively relieve pain without side effects and is not reliant on a doctor's prescription. This tool is convenient for nurses to use, especially when analgesics are unavailable. © 2014 John Wiley & Sons Ltd.
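For concreteness, the reported analyses amount to an independent-samples t-test on the visual analog scale scores plus a Pearson correlation, as in this sketch with simulated data (the group means and scores below are our assumptions, not the study's data).

```python
# t-test on VAS pain scores and correlation of involvement with pain drop.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vr_pain   = rng.normal(3.1, 1.0, 49)   # VAS after dressing change, VR group
ctrl_pain = rng.normal(6.4, 1.0, 49)   # VAS after dressing change, controls
t, p = stats.ttest_ind(vr_pain, ctrl_pain)
print(f"t = {t:.3f}, p = {p:.4f}")

involvement = rng.uniform(1, 7, 49)                  # questionnaire score
pain_drop = 0.8 * involvement + rng.normal(0, 1, 49) # simulated relationship
r, p = stats.pearsonr(involvement, pain_drop)
print(f"r^2 = {r**2:.4f}, p = {p:.4f}")
```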
Software attribute visualization for high integrity software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-03-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.
Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J
2011-11-01
To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.
Gunner Goggles: Implementing Augmented Reality into Medical Education.
Wang, Leo L; Wu, Hao-Hua; Bilici, Nadir; Tenney-Soeiro, Rebecca
2016-01-01
There is evidence that both smartphone and tablet integration into medical education has been lacking. At the same time, there is a niche for augmented reality (AR) to improve this process through the enhancement of textbook learning. Gunner Goggles is an attempt to enhance textbook learning in shelf exam preparatory review with augmented reality. Here we describe our initial prototype and detail the process by which augmented reality was implemented into our textbook through Layar. We describe the unique functionalities of our textbook pages upon augmented reality implementation, which includes links, videos and 3D figures, and surveyed 24 third year medical students for their impression of the technology. Upon demonstrating an initial prototype textbook chapter, 100% (24/24) of students felt that augmented reality improved the quality of our textbook chapter as a learning tool. Of these students, 92% (22/24) agreed that their shelf exam review was inadequate and 19/24 (79%) felt that a completed Gunner Goggles product would have been a viable alternative to their shelf exam review. Thus, while students report interest in the integration of AR into medical education test prep, future investigation into how the use of AR can improve performance on exams is warranted.
Stephen, Ian D; Sturman, Daniel; Stevenson, Richard J; Mond, Jonathan; Brooks, Kevin R
2018-01-01
Body size misperception-the belief that one is larger or smaller than reality-affects a large and growing segment of the population. Recently, studies have shown that exposure to extreme body stimuli results in a shift in the point of subjective normality, suggesting that visual adaptation may be a mechanism by which body size misperception occurs. Yet, despite being exposed to a similar set of bodies, some individuals within a given geographical area will develop body size misperception and others will not. The reason for these individual differences is currently unknown. One possible explanation stems from the observation that women with lower levels of body satisfaction have been found to pay more attention to images of thin bodies. However, while attention has been shown to enhance visual adaptation effects for low-level (e.g., rotational and linear motion) and high-level stimuli (e.g., facial gender), it is not known whether this effect exists in visual adaptation to body size. Here, we test the hypothesis that there is an indirect effect of body satisfaction on the direction and magnitude of the body fat adaptation effect, mediated via visual attention (i.e., selectively attending to images of thin over fat bodies or vice versa). Significant mediation effects were found in both men and women, suggesting that observers' level of body satisfaction may influence selective visual attention to thin or fat bodies, which in turn influences the magnitude and direction of visual adaptation to body size. This may provide a potential mechanism by which some individuals develop body size misperception-a risk factor for eating disorders, compulsive exercise behaviour and steroid abuse-while others do not.
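A hedged sketch of the indirect-effect logic described above, testing a body satisfaction -> visual attention -> adaptation pathway with a bootstrap of the product of path coefficients; all data are simulated, and the unadjusted b-path is a simplification of a full mediation model.

```python
# Bootstrap the a*b indirect effect from simulated data.
import numpy as np

def ols_slope(x, y):
    """Slope of y on x (with intercept) via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

rng = np.random.default_rng(1)
n = 200
satisfaction = rng.normal(size=n)
attention = 0.5 * satisfaction + rng.normal(size=n)      # path a
adaptation = 0.6 * attention + rng.normal(size=n)        # path b

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)                            # resample cases
    a = ols_slope(satisfaction[i], attention[i])
    b = ols_slope(attention[i], adaptation[i])           # unadjusted sketch
    boot.append(a * b)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b 95% CI: [{lo:.3f}, {hi:.3f}]")
```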
The nature of the (visualization) game: Challenges and opportunities from computational geophysics
NASA Astrophysics Data System (ADS)
Kellogg, L. H.
2016-12-01
As the geosciences enters the era of big data, modeling and visualization become increasingly vital tools for discovery, understanding, education, and communication. Here, we focus on modeling and visualization of the structure and dynamics of the Earth's surface and interior. The past decade has seen accelerated data acquisition, including higher resolution imaging and modeling of Earth's deep interior, complex models of geodynamics, and high resolution topographic imaging of the changing surface, with an associated acceleration of computational modeling through better scientific software, increased computing capability, and the use of innovative methods of scientific visualization. The role of modeling is to describe a system, answer scientific questions, and test hypotheses; the term "model" encompasses mathematical models, computational models, physical models, conceptual models, statistical models, and visual models of a structure or process. These different uses of the term require thoughtful communication to avoid confusion. Scientific visualization is integral to every aspect of modeling. Not merely a means of communicating results, the best uses of visualization enable scientists to interact with their data, revealing the characteristics of the data and models to enable better interpretation and inform the direction of future investigation. Innovative immersive technologies like virtual reality, augmented reality, and remote collaboration techniques, are being adapted more widely and are a magnet for students. Time-varying or transient phenomena are especially challenging to model and to visualize; researchers and students may need to investigate the role of initial conditions in driving phenomena, while nonlinearities in the governing equations of many Earth systems make the computations and resulting visualization especially challenging. Training students how to use, design, build, and interpret scientific modeling and visualization tools prepares them to better understand the nature of complex, multiscale geoscience data.
Building intuitive 3D interfaces for virtual reality systems
NASA Astrophysics Data System (ADS)
Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh
2007-03-01
An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. This work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied various understood principles of interface design. A user study was then performed in which it was compared with an earlier interface for a series of medical visualization tasks.
Brain-computer interface: changes in performance using virtual reality techniques.
Ron-Angevin, Ricardo; Díaz-Estrella, Antonio
2009-01-09
The ability to control electroencephalographic (EEG) signals when different mental tasks are carried out would provide a method of communication for people with serious motor function problems. This system is known as a brain-computer interface (BCI). Due to the difficulty of controlling one's own EEG signals, a suitable training protocol is required to motivate subjects, and it is necessary to provide some type of visual feedback allowing subjects to see their progress. Conventional systems of feedback are based on simple visual presentations, such as a horizontal bar extension. However, virtual reality is a powerful tool with graphical possibilities to improve BCI-feedback presentation. The objective of the study is to explore the advantages of feedback based on virtual reality techniques compared to conventional systems of feedback. Sixteen untrained subjects, divided into two groups, participated in the experiment. One group was trained using a BCI system with conventional feedback (bar extension), and the other group was trained using a BCI system that immerses subjects in a more familiar environment, such as controlling a car to avoid obstacles. The obtained results suggest that EEG behaviour can be modified via feedback presentation. Significant differences in classification error rates between the two interfaces were obtained during the feedback period, confirming that an interface based on virtual reality techniques can improve feedback control, specifically for untrained subjects.
Behavioral Implications of Mental World-Making.
ERIC Educational Resources Information Center
King, James Roy
1989-01-01
The ways in which the mind creates new worlds, new contexts, and possible alternate realities are described. It is proposed that with this information, facility in devising new worlds and realities may be enhanced and imaginative reach expanded. (MSE)
Vids: Version 2.0 Alpha Visualization Engine
2018-04-25
fidelity than existing efforts. Vids is a project aimed at producing more dynamic and interactive visualization tools using modern computer game ...move through and interact with the data to improve informational understanding. The Vids software leverages off-the-shelf modern game development...analysis and correlations. Recently, an ARL-pioneered project named Virtual Reality Data Analysis Environment (VRDAE) used VR and a modern game engine
Using Augmented Reality and Virtual Environments in Historic Places to Scaffold Historical Empathy
ERIC Educational Resources Information Center
Sweeney, Sara K.; Newbill, Phyllis; Ogle, Todd; Terry, Krista
2018-01-01
The authors explore how 3D visualizations of historical sites can be used as pedagogical tools to support historical empathy. They provide three visualizations created by a team at Virginia Tech as examples. They discuss virtual environments and how the digital restoration process is applied. They also define historical empathy, explain why it is…
ERIC Educational Resources Information Center
Sweeny, Robert W.
2008-01-01
Many challenges currently face art educators who aim to address aspects of popular visual culture in the art classroom. This article analyzes the relationship between performance art and the MTV program "Jackass," one example of problematic popular visual culture. Issues of gender representation and violence within the context of Reality TV and…
ERIC Educational Resources Information Center
Lorenzo, Gonzalo; Pomares, Jorge; Lledo, Asuncion
2013-01-01
This paper presents the use of immersive virtual reality systems in educational intervention with Asperger students. The starting points of this study are features of these students' cognitive style that require an explicit teaching style supported by visual aids and highly structured environments. The proposed immersive virtual reality…
Semi-Immersive Virtual Turbine Engine Simulation System
NASA Astrophysics Data System (ADS)
Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea
2018-05-01
The design and verification of assembly operations is essential when planning production operations. Recently, virtual prototyping has witnessed tremendous progress and has reached a stage where current environments enable rich, multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. The benefits of building and using Virtual Reality (VR) models in assembly process verification are discussed in this paper, in which we present the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system that enables stereoscopic visuals, surround sounds, and rich, intuitive interaction with the developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check for interference between components. The system is tested for virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, tactile, and force feedback, and that it is effective and efficient for validating the design of the assembly, part design, and operations planning.
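As a minimal illustration of the interference check described (not the authors' mechanism), parts can be approximated by axis-aligned bounding boxes and flagged when the boxes overlap; the part names and extents below are invented.

```python
# Axis-aligned bounding-box (AABB) interference test between two parts.
import numpy as np

def aabb_overlap(min_a, max_a, min_b, max_b):
    """True if two axis-aligned boxes intersect on every axis."""
    return bool(np.all(max_a >= min_b) and np.all(max_b >= min_a))

blade   = (np.array([0.0, 0.0, 0.0]),  np.array([0.2, 0.05, 1.0]))
housing = (np.array([0.15, 0.0, 0.4]), np.array([0.6, 0.5, 0.9]))
if aabb_overlap(blade[0], blade[1], housing[0], housing[1]):
    print("interference: highlight parts for visual feedback")
```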
Hu, Jian; Xu, Xiang-yang; Song, En-min; Tan, Hong-bao; Wang, Yi-ning
2009-09-01
To establish a new visual educational system of virtual reality for clinical dentistry based on World Wide Web (WWW) pages, in order to provide more three-dimensional multimedia resources to dental students and an online three-dimensional consulting system for patients. Based on computer graphics and three-dimensional webpage technologies, the 3Dsmax and Webmax software packages were adopted for system development. In the Windows environment, the architecture of the whole system was established step by step, including three-dimensional model construction, three-dimensional scene setup, transplanting the three-dimensional scene into the webpage, re-editing the virtual scene, realization of interactions within the webpage, initial testing, and necessary adjustment. Five three-dimensional interactive webpages for clinical dentistry were completed. The webpages could be accessed through a web browser on a personal computer, and users could interact with them by rotating, panning and zooming the virtual scene. It is technically feasible to implement a visual educational system of virtual reality for clinical dentistry based on WWW webpages. Information related to clinical dentistry can be transmitted properly, visually and interactively through three-dimensional webpages.
Ray-based approach to integrated 3D visual communication
NASA Astrophysics Data System (ADS)
Naemura, Takeshi; Harashima, Hiroshi
2001-02-01
For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to deal comprehensively with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of virtual object surface for the compression of tremendous amounts of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
Kramers, Matthew; Armstrong, Ryan; Bakhshmand, Saeed M; Fenster, Aaron; de Ribaupierre, Sandrine; Eagleson, Roy
2014-01-01
Image guidance can provide surgeons with valuable contextual information during a medical intervention. Often, image guidance systems require considerable infrastructure, setup-time, and operator experience to be utilized. Certain procedures performed at bedside are susceptible to navigational errors that can lead to complications. We present an application for mobile devices that can provide image guidance using augmented reality to assist in performing neurosurgical tasks. A methodology is outlined that evaluates this mode of visualization from the standpoint of perceptual localization, depth estimation, and pointing performance, in scenarios derived from a neurosurgical targeting task. By measuring user variability and speed we can report objective metrics of performance for our augmented reality guidance system.
NASA Astrophysics Data System (ADS)
Myrcha, Julian; Trzciński, Tomasz; Rokita, Przemysław
2017-08-01
Analyzing massive amounts of data gathered during many high energy physics experiments, including but not limited to the LHC ALICE detector experiment, requires efficient and intuitive methods of visualisation. One of the possible approaches to that problem is stereoscopic 3D data visualisation. In this paper, we propose several methods that provide high quality data visualisation and we explain how those methods can be applied in virtual reality headsets. The outcome of this work is easily applicable to many real-life applications needed in high energy physics and can be seen as a first step towards using fully immersive virtual reality technologies within the frames of the ALICE experiment.
ERIC Educational Resources Information Center
Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.
2012-01-01
We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…
Integrated Eye Tracking and Neural Monitoring for Enhanced Assessment of Mild TBI
2015-04-01
virtual reality driving simulator data acquisition. Data collection for the pilot study is nearly complete and data analyses are currently under way...Training for primary study procedures including neuropsychological testing, eye-tracking, virtual reality driving simulator, and EEG data acquisition is...the virtual reality driving simulator. Participants are instructed to drive along a coastal highway while performing the target detection task
Liu, Wen P; Azizian, Mahdi; Sorger, Jonathan; Taylor, Russell H; Reilly, Brian K; Cleary, Kevin; Preciado, Diego
2014-03-01
To our knowledge, this is the first reported cadaveric feasibility study of a master-slave-assisted cochlear implant procedure in the otolaryngology-head and neck surgery field using the da Vinci Si system (da Vinci Surgical System; Intuitive Surgical, Inc). We describe the surgical workflow adaptations using a minimally invasive system and image guidance integrating intraoperative cone beam computed tomography through augmented reality. To test the feasibility of da Vinci Si-assisted cochlear implant surgery with augmented reality, with visualization of critical structures and facilitation of precise cochleostomy for electrode insertion. Cadaveric case study of bilateral cochlear implant approaches conducted at Intuitive Surgical Inc, Sunnyvale, California. Bilateral cadaveric mastoidectomies, posterior tympanostomies, and cochleostomies were performed using the da Vinci Si system on a single adult human donor cadaveric specimen. Radiographic confirmation of successful cochleostomies, placement of a phantom cochlear implant wire, and visual confirmation of critical anatomic structures (facial nerve, cochlea, and round window) in augmented stereoendoscopy. With a surgical mean time of 160 minutes per side, complete bilateral cochlear implant procedures were successfully performed with no violation of critical structures, notably the facial nerve, chorda tympani, sigmoid sinus, dura, or ossicles. Augmented reality image overlay of the facial nerve, round window position, and basal turn of the cochlea was precise. Postoperative cone beam computed tomography scans confirmed successful placement of the phantom implant electrode array into the basal turn of the cochlea. To our knowledge, this is the first study in the otolaryngology-head and neck surgery literature examining the use of master-slave-assisted cochleostomy with augmented reality for cochlear implants using the da Vinci Si system. The described system for cochleostomy has the potential to improve the surgeon's confidence, as well as surgical safety, efficiency, and precision, by filtering tremor. The integration of augmented reality may be valuable for surgeons dealing with complex cases of congenital anatomic abnormality, for revision cochlear implant with distorted anatomy and poorly pneumatized mastoids, and as a method of interactive teaching. Further research into the cost-benefit ratio of da Vinci Si-assisted otologic surgery, as well as refinements of the proposed workflow, is required before considering clinical studies.
Spacecraft Guidance, Navigation, and Control Visualization Tool
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
G-View is a 3D visualization tool for supporting spacecraft guidance, navigation, and control (GN&C) simulations relevant to small-body exploration and sampling. The tool is developed in MATLAB using the Virtual Reality Toolbox and provides users with the ability to visualize the behavior of their simulations, regardless of which programming language (or machine) is used to generate simulation results. The only requirement is that multi-body simulation data be generated and placed in the proper format before applying G-View.
Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M
2016-01-26
Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
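A simplified sketch of velocity-threshold gaze-event classification of the kind the authors build on; the sampling rate and thresholds below are illustrative defaults, not the paper's calibrated values, and the vergence geometry of the transverse plane is omitted.

```python
# Label each gaze sample fixation/pursuit/saccade from angular velocity.
import numpy as np

def classify_gaze(azim, elev, rate_hz=500.0, sacc_thresh=30.0, purs_thresh=5.0):
    """azim, elev: gaze angles in degrees; thresholds in deg/s."""
    vel = np.hypot(np.gradient(azim), np.gradient(elev)) * rate_hz
    labels = np.where(vel > sacc_thresh, "saccade",
                      np.where(vel > purs_thresh, "pursuit", "fixation"))
    return vel, labels

# Synthetic trace: fixation, a quick 10-degree saccade, then fixation.
azim = np.concatenate([np.zeros(100), np.linspace(0, 10, 10),
                       10 * np.ones(100)])
vel, labels = classify_gaze(azim, np.zeros_like(azim))
print(labels[95:115])
```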
Gibby, Jacob T; Swenson, Samuel A; Cvetko, Steve; Rao, Raj; Javan, Ramin
2018-06-22
Augmented reality has potential to enhance surgical navigation and visualization. We determined whether head-mounted display augmented reality (HMD-AR) with superimposed computed tomography (CT) data could allow the wearer to percutaneously guide pedicle screw placement in an opaque lumbar model with no real-time fluoroscopic guidance. CT imaging was obtained of a phantom composed of L1-L3 Sawbones vertebrae in opaque silicone. Preprocedural planning was performed by creating virtual trajectories of appropriate angle and depth for an ideal approach into the pedicle, and these data were integrated into the Microsoft HoloLens using the Novarad OpenSight application, allowing the user to view the virtual trajectory guides and CT images superimposed on the phantom in two and three dimensions. Spinal needles were inserted following the virtual trajectories to the point of contact with bone. Repeat CT revealed the actual needle trajectories, allowing comparison with the ideal preprocedural paths. Registration of AR to the phantom showed a roughly circular deviation with a maximum average radius of 2.5 mm. Users took an average of 200 s to place a needle. Extrapolation of needle trajectories into the pedicle showed that of 36 needles placed, 35 (97%) would have remained within the pedicles. Placed needles lay at a mean distance of 4.69 mm in the mediolateral direction and 4.48 mm in the craniocaudal direction from the pedicle bone edge. To our knowledge, this is the first peer-reviewed report and evaluation of HMD-AR with superimposed 3D guidance utilizing CT for spinal pedicle guide placement for the purpose of cannulation without the use of fluoroscopy.
Enhancing Exposure Therapy for PTSD: Virtual Reality and Imaginal Exposure with a Cognitive Enhancer
2016-10-01
…x 2 design will allow several important comparisons to be made: a direct comparison of the efficacy of virtual reality versus imaginal exposure…
Chow, Joyce A.; Törnros, Martin E.; Waltersson, Marie; Richard, Helen; Kusoffsky, Madeleine; Lundström, Claes F.; Kurti, Arianit
2017-01-01
Context: Within digital pathology, digitalization of the grossing procedure has been relatively underexplored in comparison to digitalization of pathology slides. Aims: Our investigation focuses on the interaction design of an augmented reality gross pathology workstation and refining the interface so that information and visualizations are easily recorded and displayed in a thoughtful view. Settings and Design: The work in this project occurred in two phases: the first phase focused on implementation of an augmented reality grossing workstation prototype while the second phase focused on the implementation of an incremental prototype in parallel with a deeper design study. Subjects and Methods: Our research institute focused on an experimental and “designerly” approach to create a digital gross pathology prototype as opposed to focusing on developing a system for immediate clinical deployment. Statistical Analysis Used: Evaluation has not been limited to user tests and interviews, but rather key insights were uncovered through design methods such as “rapid ethnography” and “conversation with materials”. Results: We developed an augmented reality enhanced digital grossing station prototype to assist pathology technicians in capturing data during examination. The prototype uses a magnetically tracked scalpel to annotate planned cuts and dimensions onto photographs taken of the work surface. This article focuses on the use of qualitative design methods to evaluate and refine the prototype. Our aims were to build on the strengths of the prototype's technology, improve the ergonomics of the digital/physical workstation by considering numerous alternative design directions, and to consider the effects of digitalization on personnel and the pathology diagnostics information flow from a wider perspective. A proposed interface design allows the pathology technician to place images in relation to its orientation, annotate directly on the image, and create linked information. Conclusions: The augmented reality magnetically tracked scalpel reduces tool switching though limitations in today's augmented reality technology fall short of creating an ideal immersive workflow by requiring the use of a monitor. While this technology catches up, we recommend focusing efforts on enabling the easy creation of layered, complex reports, linking, and viewing information across systems. Reflecting upon our results, we argue for digitalization to focus not only on how to record increasing amounts of data but also how these data can be accessed in a more thoughtful way that draws upon the expertise and creativity of pathology professionals using the systems. PMID:28966831
Salvadori, Andrea; Del Frate, Gianluca; Pagliai, Marco; Mancini, Giordano; Barone, Vincenzo
2016-11-15
The role of Virtual Reality (VR) tools in molecular sciences is analyzed in this contribution through the presentation of the Caffeine software to the quantum chemistry community. Caffeine, developed at Scuola Normale Superiore, is specifically tailored for molecular representation and data visualization with VR systems, such as VR theaters and helmets. The usefulness and advantages that can be gained by exploiting VR are reported here, considering a few examples specifically selected to illustrate different levels of theory and molecular representation.
How to reduce workload--augmented reality to ease the work of air traffic controllers.
Hofmann, Thomas; König, Christina; Bruder, Ralph; Bergner, Jörg
2012-01-01
In the future, air traffic will rise, and the workload of controllers will rise with it. In the BMWi research project, one of the tasks is how to ensure safe air traffic and a reasonable workload for air traffic controllers. The goal of this project was to find ways to reduce the workload (and stress) of controllers and allow safe air traffic, especially at large hub airports, by implementing augmented reality visualization and interaction.
Simulation of eye disease in virtual reality.
Jin, Bei; Ai, Zhuming; Rasmussen, Mary
2005-01-01
It is difficult to understand verbal descriptions of visual phenomena if one has no such experience. Virtual Reality offers a unique opportunity to "experience" diminished vision and the problems it causes in daily life. We have developed an application to simulate age-related macular degeneration, glaucoma, protanopia, and diabetic retinopathy in a familiar setting. The application also includes an introduction to eye anatomy representing both normal and pathologic states. It is designed for patient education, health care practitioner training, and eye care specialist education.
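As a hedged sketch of one simulated condition (our own approximation, not the application's code), a glaucoma-like tunnel-vision effect can be produced by attenuating each rendered frame outside a central disc:

```python
# Darken the periphery of a frame to mimic glaucoma-like visual field loss.
import numpy as np

def tunnel_vision(frame, radius_frac=0.35, softness=0.1):
    """Attenuate frame (H x W x 3, float in [0,1]) outside a central disc."""
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / min(h, w)
    mask = np.clip((radius_frac + softness - r) / softness, 0.0, 1.0)
    return frame * mask[..., None]

frame = np.ones((240, 320, 3))          # stand-in for a rendered VR frame
print(tunnel_vision(frame)[120, 160], tunnel_vision(frame)[0, 0])
```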
Virtual reality: teaching tool of the twenty-first century?
Hoffman, H; Vu, D
1997-12-01
Virtual reality (VR) is gaining recognition for its enormous educational potential. While not yet in the mainstream of academic medical training, many prototype and first-generation VR applications are emerging, with target audiences ranging from first- and second-year medical students to residents in advanced clinical training. Visualization tools that take advantage of VR technologies are being designed to provide engaging and intuitive environments for learning visually and spatially complex topics such as human anatomy, biochemistry, and molecular biology. These applications present dynamic, three-dimensional views of structures and their spatial relationships, enabling users to move beyond "real-world" experiences by interacting with or altering virtual objects in ways that would otherwise be difficult or impossible. VR-based procedural and surgical simulations, often compared with flight simulators in aviation, hold significant promise for revolutionizing medical training. Already a wide range of simulations, representing diverse content areas and utilizing a variety of implementation strategies, are either under development or in their early implementation stages. These new systems promise to make broad-based training experiences available for students at all levels, without the risks and ethical concerns typically associated with using animal and human subjects. Medical students could acquire proficiency and gain confidence in the ability to perform a wide variety of techniques long before they need to use them clinically. Surgical residents could rehearse and refine operative procedures, using an unlimited pool of virtual patients manifesting a wide range of anatomic variations, traumatic wounds, and disease states. Those simulated encounters, in combination with existing opportunities to work with real patients, could increase the depth and breadth of learners' exposure to medical problems, ensure uniformity of training experiences, and enhance the acquisition of clinical skills.
Automatic visualization of 3D geometry contained in online databases
NASA Astrophysics Data System (ADS)
Zhang, Jie; John, Nigel W.
2003-04-01
In this paper, the application of the Virtual Reality Modeling Language (VRML) for efficient database visualization is analyzed. With the help of JAVA programming, three examples of automatic visualization from a database containing 3-D geometry are given. The first example creates basic geometries. The second example creates cylinders with a defined start point and end point. The third example processes data from an old copper mine complex in Cheshire, United Kingdom. Interactive 3-D visualization of all geometric data in an online database is achieved with JSP technology.
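A sketch of the second example's core task under our own assumptions: emitting a VRML Transform that places a cylinder between a given start and end point. Since VRML cylinders are aligned with the y axis, the y axis is rotated onto the segment direction.

```python
# Build a VRML97 cylinder spanning two points via an axis-angle rotation.
import numpy as np

def cylinder_vrml(p0, p1, radius=0.1):
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    mid, d = (p0 + p1) / 2, p1 - p0
    h = np.linalg.norm(d)
    y = np.array([0.0, 1.0, 0.0])
    axis = np.cross(y, d / h)
    s = np.linalg.norm(axis)
    angle = np.arctan2(s, np.dot(y, d / h))
    axis = axis / s if s > 1e-9 else y      # parallel case: any axis works
    return (f"Transform {{ translation {mid[0]} {mid[1]} {mid[2]}\n"
            f"  rotation {axis[0]} {axis[1]} {axis[2]} {angle}\n"
            f"  children [ Shape {{ geometry Cylinder"
            f" {{ radius {radius} height {h} }} }} ] }}\n")

print("#VRML V2.0 utf8\n" + cylinder_vrml((0, 0, 0), (1, 2, 0)))
```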
E-virtual reality exposure therapy in acrophobia: A pilot study.
Levy, Fanny; Leboucher, Pierre; Rautureau, Gilles; Jouvent, Roland
2016-06-01
Virtual reality therapy is already used for anxiety disorders as an alternative to in vivo and in imagino exposure. To our knowledge, however, no one has yet proposed using remote virtual reality (e-virtual reality). The aim of the present study was to assess e-virtual reality in an acrophobic population. Six individuals with acrophobia each underwent six sessions (two sessions per week) of virtual reality exposure therapy. The first three were remote sessions, while the last three were traditional sessions in the physical presence of the therapist. Anxiety (STAI form Y-A, visual analog scale, heart rate), presence, technical difficulties and therapeutic alliance (Working Alliance Inventory) were measured. In order to control the conditions in which these measures were made, all the sessions were conducted in hospital. None of the participants dropped out. The remote sessions were well accepted. None of the participants verbalized reluctance. No major technical problems were reported. None of the sessions were cancelled or interrupted because of software incidents. Measures (anxiety, presence, therapeutic alliance) were comparable across the two conditions. e-Virtual reality can therefore be used to treat acrophobic disorders. However, control studies are needed to assess online feasibility, therapeutic effects and the mechanisms behind online presence. © The Author(s) 2015.
Utilization of the Space Vision System as an Augmented Reality System For Mission Operations
NASA Technical Reports Server (NTRS)
Maida, James C.; Bowen, Charles
2003-01-01
Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. This is the basis for this proposed Space Human Factors Engineering project, the determination of the display symbology within the performance limits of the Space Vision System that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.
Virtual Reality and Simulation in Neurosurgical Training.
Bernardo, Antonio
2017-10-01
Recent biotechnological advances, including three-dimensional microscopy and endoscopy, virtual reality, surgical simulation, surgical robotics, and advanced neuroimaging, have continued to mold the surgeon-computer relationship. For developing neurosurgeons, such tools can reduce the learning curve, improve conceptual understanding of complex anatomy, and enhance visuospatial skills. We explore the current and future roles and application of virtual reality and simulation in neurosurgical training. Copyright © 2017 Elsevier Inc. All rights reserved.
Telemanipulation, telepresence, and virtual reality for surgery in the year 2000
NASA Astrophysics Data System (ADS)
Satava, Richard M.
1995-12-01
The new technologic revolution in medicine is based upon information technologies, and telemanipulation, telepresence and virtual reality are essential components. Telepresence surgery returns the look and feel of `open surgery' to the surgeon and promises enhancement of physical capabilities above normal human performance. Virtual reality provides basic medical education, simulation of surgical procedures, medical forces and disaster medicine practice, and virtual prototyping of medical equipment.
Wang, Bo; Sun, Bukuan
2017-03-01
The current study examined whether the effect of post-encoding emotional arousal on item memory extends to reality-monitoring source memory and, if so, whether the effect depends on emotionality of learning stimuli and testing format. In Experiment 1, participants encoded neutral words and imagined or viewed their corresponding object pictures. Then they watched a neutral, positive, or negative video. The 24-hour delayed test showed that emotional arousal had little effect on both item memory and reality-monitoring source memory. Experiment 2 was similar except that participants encoded neutral, positive, and negative words and imagined or viewed their corresponding object pictures. The results showed that positive and negative emotional arousal induced after encoding enhanced consolidation of item memory, but not reality-monitoring source memory, regardless of emotionality of learning stimuli. Experiment 3, identical to Experiment 2 except that participants were tested only on source memory for all the encoded items, still showed that post-encoding emotional arousal had little effect on consolidation of reality-monitoring source memory. Taken together, regardless of emotionality of learning stimuli and regardless of testing format of source memory (conjunction test vs. independent test), the facilitatory effect of post-encoding emotional arousal on item memory does not generalize to reality-monitoring source memory.
Spherical visual system for real-time virtual reality and surveillance
NASA Astrophysics Data System (ADS)
Chen, Su-Shing
1998-12-01
A spherical visual system has been developed for full-field, web-based surveillance, virtual reality, and roundtable video conferencing. The hardware is a CycloVision parabolic lens mounted on a video camera. The software was developed at the University of Missouri-Columbia. The mathematical model was developed by Su-Shing Chen and Michael Penna in the 1980s. The parabolic image, capturing the full 360-degree hemispherical field of view (except the north pole), is transformed into the spherical model of Chen and Penna. In the spherical model, images are invariant under the rotation group and are easily mapped to the image plane tangent to any point on the sphere. The projected image is exactly what a conventional camera would produce at that angle. Thus a real-time full spherical field video camera is obtained by using two parabolic lenses.
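The tangent-plane projection the abstract describes can be sketched as follows. Assuming the parabolic image has already been unwarped to an equirectangular spherical map, this hedged numpy sketch synthesizes the rectilinear view a conventional camera would produce when pointed at a given (yaw, pitch); it illustrates the geometry only and is not the Missouri software.

```python
import numpy as np

def tangent_view(pano, yaw, pitch, fov=np.radians(60), size=256):
    """Render a perspective view from an equirectangular panorama (H, W, 3)."""
    H, W = pano.shape[:2]
    f = (size / 2) / np.tan(fov / 2)             # pinhole focal length in pixels
    xs, ys = np.meshgrid(np.arange(size) - size / 2, np.arange(size) - size / 2)
    rays = np.stack([xs, ys, np.full_like(xs, f, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # Rotate camera rays by pitch (about x) then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    v = rays @ (Ry @ Rx).T
    lon = np.arctan2(v[..., 0], v[..., 2])       # longitude on the sphere
    lat = np.arcsin(np.clip(v[..., 1], -1, 1))   # latitude on the sphere
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    w = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return pano[w, u]                            # nearest-neighbour sampling

pano = np.random.rand(512, 1024, 3)              # stand-in spherical image
view = tangent_view(pano, yaw=0.5, pitch=0.1)
```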
Reality check: the role of realism in stress reduction using media technology.
de Kort, Y A W; Ijsselsteijn, W A
2006-04-01
There is a growing interest in the use of virtual and other mediated environments for therapeutic purposes. However, in the domain of restorative environments, virtual reality (VR) technology has hardly been used. Here the tendency has been to use mediated real environments, striving for maximum visual realism. This use of photographic material is mainly based on research in aesthetic judgments that has demonstrated the validity of this type of simulation as a representation of real environments. Thus, restoration therapy is developing under the untested assumption that photorealistic images have the optimal level of realism, whereas in therapeutic applications 'experiential realism' seems to be the key, rather than visual realism. The present paper discusses this contrast and briefly describes data from three studies aimed at exploring the importance and meaning of realism in the context of restorative environments.
Demonstration of Three Gorges archaeological relics based on 3D-visualization technology
NASA Astrophysics Data System (ADS)
Xu, Wenli
2015-12-01
This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method comprises three stages: pre-processing, 3D modeling, and integration. First, abundant archaeological information is classified according to its historical and geographical context. Second, a 3D-model library is built using digital image processing and 3D modeling technology. Third, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.
NASA Astrophysics Data System (ADS)
Alawa, Karam A.; Sayed, Mohamed; Arboleda, Alejandro; Durkee, Heather A.; Aguilar, Mariela C.; Lee, Richard K.
2017-02-01
Glaucoma is the leading cause of irreversible blindness worldwide. Due to its wide prevalence, effective screening tools are necessary. The purpose of this project is to design and evaluate a system that enables portable, cost effective, smartphone based visual field screening based on frequency doubling technology. The system is comprised of an Android smartphone to display frequency doubling stimuli and handle processing, a Bluetooth remote for user input, and a virtual reality headset to simulate the exam. The LG Nexus 5 smartphone and BoboVR Z3 virtual reality headset were used for their screen size and lens configuration, respectively. The system is capable of running the C-20, N-30, 24-2, and 30-2 testing patterns. Unlike the existing system, the smartphone FDT tests both eyes concurrently by showing the same background to both eyes but only displaying the stimulus to one eye at a time. Both the Humphrey Zeiss FDT and the smartphone FDT were tested on five subjects without a history of ocular disease with the C-20 testing pattern. The smartphone FDT successfully produced frequency doubling stimuli at the correct spatial and temporal frequency. Subjects could not tell which eye was being tested. All five subjects preferred the smartphone FDT to the Humphrey Zeiss FDT due to comfort and ease of use. The smartphone FDT is a low-cost, portable visual field screening device that can be used as a screening tool for glaucoma.
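The stimulus itself is straightforward to reproduce. The following numpy sketch generates a frequency-doubling stimulus of the kind the smartphone FDT displays: a low-spatial-frequency sinusoidal grating counterphase-flickered at a high temporal frequency. The 0.25 cpd and 25 Hz values are typical FDT parameters, not figures taken from the paper.

```python
import numpy as np

def fdt_frames(size_px=256, px_per_deg=32, sf_cpd=0.25, tf_hz=25.0,
               fps=60, duration_s=0.4, contrast=0.9):
    """Return a list of luminance frames (values in 0..1) for one stimulus."""
    x = np.arange(size_px) / px_per_deg           # degrees of visual angle
    grating = np.sin(2 * np.pi * sf_cpd * x)      # 1-D spatial carrier
    grating = np.tile(grating, (size_px, 1))      # vertical grating patch
    frames = []
    for i in range(int(fps * duration_s)):
        t = i / fps
        mod = np.cos(2 * np.pi * tf_hz * t)       # counterphase flicker
        frames.append(0.5 + 0.5 * contrast * grating * mod)
    return frames

frames = fdt_frames()
print(len(frames), frames[0].shape)               # 24 frames of 256x256
```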
Augmented reality in neurovascular surgery: feasibility and first uses in the operating room.
Kersten-Oertel, Marta; Gerard, Ian; Drouin, Simon; Mok, Kelvin; Sirhan, Denis; Sinclair, David S; Collins, D Louis
2015-11-01
The aim of this report is to present a prototype augmented reality (AR) intra-operative brain imaging system. We present our experience of using this new neuronavigation system in neurovascular surgery and discuss the feasibility of this technology for aneurysms, arteriovenous malformations (AVMs), and arteriovenous fistulae (AVFs). We developed an augmented reality system that uses an external camera to capture the live view of the patient on the operating room table and to merge this view with pre-operative volume-rendered vessels. We have extensively tested the system in the laboratory and have used the system in four surgical cases: one aneurysm, two AVMs and one AVF case. The developed AR neuronavigation system allows for precise patient-to-image registration and calibration of the camera, resulting in a well-aligned augmented reality view. Initial results suggest that augmented reality is useful for tailoring craniotomies, localizing vessels of interest, and planning resection corridors. Augmented reality is a promising technology for neurovascular surgery. However, for more complex anomalies such as AVMs and AVFs, better visualization techniques that allow one to distinguish between arteries and veins and determine the absolute depth of a vessel of interest are needed.
NASA Astrophysics Data System (ADS)
Sudra, Gunther; Speidel, Stefanie; Fritz, Dominik; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger
2007-03-01
Minimally invasive surgery is a highly complex medical discipline with various risks for surgeon and patient, but it also has numerous advantages on the patient's side. The surgeon has to adopt special operation techniques and deal with difficulties like complex hand-eye coordination, a limited field of view, and restricted mobility. To alleviate these problems, we propose to support the surgeon's spatial cognition by using augmented reality (AR) techniques to directly visualize virtual objects in the surgical site. In order to generate intelligent support, it is necessary to have an intraoperative assistance system that recognizes the surgical skills during the intervention and provides context-aware assistance to the surgeon using AR techniques. With MEDIASSIST we bundle our research activities in the field of intraoperative intelligent support and visualization. Our experimental setup consists of a stereo endoscope, an optical tracking system and a head-mounted display for 3D visualization. The framework will be used as a platform for the development and evaluation of our research in the field of skill recognition and context-aware assistance generation. This includes methods for surgical skill analysis, skill classification, context interpretation as well as assistive visualization and interaction techniques. In this paper we present the objectives of MEDIASSIST and first results in the fields of skill analysis, visualization and multi-modal interaction. In detail, we present markerless instrument tracking for surgical skill analysis as well as visualization techniques and recognition of interaction gestures in an AR environment.
van den Heuvel, Maarten R C; van Wegen, Erwin E H; de Goede, Cees J T; Burgers-Bots, Ingrid A L; Beek, Peter J; Daffertshofer, Andreas; Kwakkel, Gert
2013-10-04
Patients with Parkinson's disease often suffer from reduced mobility due to impaired postural control. Balance exercises form an integral part of rehabilitative therapy but the effectiveness of existing interventions is limited. Recent technological advances allow for providing enhanced visual feedback in the context of computer games, which provide an attractive alternative to conventional therapy. The objective of this randomized clinical trial is to investigate whether a training program capitalizing on virtual-reality-based visual feedback is more effective than an equally-dosed conventional training in improving standing balance performance in patients with Parkinson's disease. Patients with idiopathic Parkinson's disease will participate in a five-week balance training program comprising ten treatment sessions of 60 minutes each. Participants will be randomly allocated to (1) an experimental group that will receive balance training using augmented visual feedback, or (2) a control group that will receive balance training in accordance with current physical therapy guidelines for Parkinson's disease patients. Training sessions consist of task-specific exercises that are organized as a series of workstations. Assessments will take place before training, at six weeks, and at twelve weeks follow-up. The functional reach test will serve as the primary outcome measure supplemented by comprehensive assessments of functional balance, posturography, and electroencephalography. We hypothesize that balance training based on visual feedback will show greater improvements on standing balance performance than conventional balance training. In addition, we expect that learning new control strategies will be visible in the co-registered posturographic recordings but also through changes in functional connectivity.
A Critical Review of the Use of Virtual Reality in Construction Engineering Education and Training.
Wang, Peng; Wu, Peng; Wang, Jun; Chi, Hung-Lin; Wang, Xiangyu
2018-06-08
Virtual Reality (VR) has been rapidly recognized and implemented in construction engineering education and training (CEET) in recent years due to its benefits of providing an engaging and immersive environment. The objective of this review is to critically collect and analyze the VR applications in CEET, covering all VR-related journal papers published from 1997 to 2017. The review follows a three-stage systematic analysis of VR technologies, applications, and future directions. It is found that the VR technologies adopted for CEET have evolved over time, from desktop-based VR, immersive VR, and 3D game-based VR, to Building Information Modelling (BIM)-enabled VR. A sibling technology, Augmented Reality (AR), has also emerged in CEET adoptions in recent years. These technologies have been applied in architecture and design visualization, construction health and safety training, equipment and operational task training, as well as structural analysis. Future research directions, including the integration of VR with emerging education paradigms and visualization technologies, are also provided. The findings help researchers and educators integrate VR into their education and training programs to improve training performance.
FlyAR: augmented reality supported micro aerial vehicle navigation.
Zollmann, Stefanie; Hoppe, Christof; Langlotz, Tobias; Reitmayr, Gerhard
2014-04-01
Micro aerial vehicles equipped with high-resolution cameras can be used to create aerial reconstructions of an area of interest. In that context, automatic flight path planning and autonomous flying are often applied but so far cannot fully replace the human in the loop, who supervises the flight on-site to ensure that there are no collisions with obstacles. Unfortunately, this workflow yields several issues, such as the need to mentally transfer the aerial vehicle's position between 2D map positions and the physical environment, and the complicated depth perception of objects flying in the distance. Augmented Reality can address these issues by bringing the flight planning process on-site and visualizing the spatial relationship between the planned or current positions of the vehicle and the physical environment. In this paper, we present Augmented Reality supported navigation and flight planning of micro aerial vehicles by augmenting the user's view with relevant information for flight planning and live feedback for flight supervision. Furthermore, we introduce additional depth hints supporting the user in understanding the spatial relationship of virtual waypoints in the physical world and investigate the effect of these visualization techniques on spatial understanding.
Testing and evaluation of a wearable augmented reality system for natural outdoor environments
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Cook, James; Sherrill, Todd; Snarski, Stephen; Russler, Pat; Clipp, Brian; Karl, Robert; Wenger, Eric; Bennett, Matthew; Mauger, Jennifer; Church, William; Towles, Herman; MacCabe, Stephen; Webb, Jeffrey; Lupo, Jasper; Frahm, Jan-Michael; Dunn, Enrique; Leslie, Christopher; Welch, Greg
2013-05-01
This paper describes performance evaluation of a wearable augmented reality system for natural outdoor environments. Applied Research Associates (ARA), as prime integrator on the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program, is developing a soldier-worn system to provide intuitive `heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered iconography (e.g., navigation waypoints, sensor points of interest, blue forces, aircraft) on the soldier's view of reality. We achieve accurate pose estimation through fusion of inertial, magnetic, GPS, terrain data, and computer-vision inputs. We leverage a helmet-mounted camera and custom computer vision algorithms to provide terrain-based measurements of absolute orientation (i.e., orientation of the helmet with respect to the earth). These orientation measurements, which leverage mountainous terrain horizon geometry and mission planning landmarks, enable our system to operate robustly in the presence of external and body-worn magnetic disturbances. Current field testing activities across a variety of mountainous environments indicate that we can achieve high icon geo-registration accuracy (<10mrad) using these vision-based methods.
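ULTRA-Vis fuses inertial, magnetic, GPS, terrain, and vision inputs; the exact filter is not given in the abstract. As a toy illustration of the core idea only, the sketch below dead-reckons heading from a (biased) gyro and corrects it whenever a vision-based absolute heading fix arrives, complementary-filter style. Rates, gains, and the fix schedule are invented.

```python
import math

def fuse_heading(gyro_rates, vision_fixes, dt=0.01, gain=0.2):
    """gyro_rates: rad/s per step; vision_fixes: {step: absolute heading (rad)}."""
    heading = 0.0
    out = []
    for k, w in enumerate(gyro_rates):
        heading += w * dt                        # dead-reckon with the gyro
        if k in vision_fixes:                    # absolute fix available this step
            err = math.atan2(math.sin(vision_fixes[k] - heading),
                             math.cos(vision_fixes[k] - heading))
            heading += gain * err                # pull gently toward the fix
        out.append(heading)
    return out

# Toy usage: constant slow turn with a biased gyro, absolute fix every second.
true_rate, bias = 0.1, 0.02
fixes = {k: true_rate * k * 0.01 for k in range(0, 1000, 100)}
est = fuse_heading([true_rate + bias] * 1000, fixes)
```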
NASA Astrophysics Data System (ADS)
Kim, Seoksoo; Jung, Sungmo; Song, Jae-Gu; Kang, Byong-Ho
As augmented reality and gravity sensors attract growing interest, significant development is being made on related technology, allowing its application in a variety of areas. Applying context-awareness to augmented reality can yield useful programs. The training system suggested in this study helps a user understand an efficient training method using augmented reality and verify whether an exercise is being performed properly, based on data collected by a gravity sensor. This research therefore aims to suggest an efficient training environment that enhances previous training methods by applying augmented reality and a gravity sensor.
Kim, Aram; Zhou, Zixuan; Kretch, Kari S; Finley, James M
2017-07-01
The ability to successfully navigate obstacles in our environment requires integration of visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles is integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and the obstacle, it remains to be seen how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor in four different feedback trials. The trials involved: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. We found that, compared to no model, the volumetric model improved success rate, led participants to place their trailing foot before crossing and their leading foot after crossing more consistently, and led them to place their leading foot closer to the obstacle after crossing. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments as it may provide information about the fidelity necessary to reproduce ecologically valid practice environments.
Depth image enhancement using perceptual texture priors
NASA Astrophysics Data System (ADS)
Bang, Duhyeon; Shim, Hyunjung
2015-03-01
A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, depth cameras exhibit severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perceptual depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database composed of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect our work to be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.
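A highly simplified sketch of the substitution idea follows: classify depth patches by a pattern-density feature (plain edge density stands in for the HVP-motivated measure) and add back the detail stored in the database entry of the nearest class. As a further simplification it patches the depth directly rather than the normals, and the patch size, feature, constants, and database are all invented for illustration.

```python
import numpy as np

def pattern_density(patch):
    """Fraction of 'edge' pixels: a crude stand-in for the HVP feature."""
    gy, gx = np.gradient(patch)
    return float((np.hypot(gx, gy) > 0.01).mean())

def enhance(depth, database, patch=16):
    """database: {density level: (patch, patch) high-quality detail array}."""
    out = depth.copy()
    keys = np.array(sorted(database))
    for i in range(0, depth.shape[0] - patch + 1, patch):
        for j in range(0, depth.shape[1] - patch + 1, patch):
            d = pattern_density(depth[i:i + patch, j:j + patch])
            k = keys[np.argmin(np.abs(keys - d))]   # nearest class in database
            out[i:i + patch, j:j + patch] += database[k]
    return out

rng = np.random.default_rng(0)
db = {0.0: np.zeros((16, 16)), 0.5: rng.normal(0, 0.002, (16, 16))}
noisy = rng.normal(1.0, 0.01, (64, 64))              # synthetic depth (m)
print(enhance(noisy, db).shape)
```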
Vision-based augmented reality system
NASA Astrophysics Data System (ADS)
Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan
2003-04-01
The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user. Namely, users can interact with the real world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device to display visual objects, as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, the accurate calibration and registration are most important. A vision-based method is used to estimate CCD external parameters by tracking 4 known points with different colors. It achieves sufficient accuracy for non-critical applications such as gaming, annotation and so on.
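The four-colored-point calibration can be sketched with OpenCV: segment each marker color, take the mask centroids as image points, and recover the camera's external parameters with PnP. The HSV ranges, marker layout, and intrinsics below are placeholders, not values from the paper.

```python
import cv2
import numpy as np

HSV_RANGES = {                                    # hypothetical color bounds
    "red":    ((0, 120, 80),   (10, 255, 255)),
    "green":  ((45, 120, 80),  (75, 255, 255)),
    "blue":   ((100, 120, 80), (130, 255, 255)),
    "yellow": ((20, 120, 80),  (35, 255, 255)),
}
MARKER_XYZ = np.float32([[0, 0, 0], [0.2, 0, 0],  # assumed 20 cm marker square
                         [0.2, 0.2, 0], [0, 0.2, 0]])
K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])  # assumed intrinsics

def estimate_pose(bgr):
    """Return (rvec, tvec) of the camera relative to the marker square."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    pts = []
    for lo, hi in HSV_RANGES.values():
        mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None                           # a marker was not found
        pts.append([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    ok, rvec, tvec = cv2.solvePnP(MARKER_XYZ, np.float32(pts), K, None)
    return (rvec, tvec) if ok else None
```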
Nifakos, Sokratis; Zary, Nabil
2014-01-01
The research community has called for the development of effective educational interventions addressing prescription behaviour, since antimicrobial resistance remains a global health issue. Examining the potential to move the educational process from personal computers to mobile devices, in this paper we investigated a new method of integrating Virtual Patients into mobile devices with augmented reality technology, enriching the practitioner's education in prescription behaviour. Moreover, we explored which information is critical during prescription behaviour education, and we visualized this information in a real context with augmented reality technology, simultaneously with a running Virtual Patient scenario. Following this process, we set the educational frame of experiential knowledge in a mixed (virtual and real) environment.
An augmented reality haptic training simulator for spinal needle procedures.
Sutherland, Colin; Hashtrudi-Zaad, Keyvan; Sellens, Rick; Abolmaesumi, Purang; Mousavi, Parvin
2013-11-01
This paper presents the prototype for an augmented reality haptic simulation system with potential for spinal needle insertion training. The proposed system is composed of a torso mannequin, a MicronTracker2 optical tracking system, a PHANToM haptic device, and a graphical user interface to provide visual feedback. The system allows users to perform simulated needle insertions on a physical mannequin overlaid with an augmented reality cutaway of patient anatomy. A tissue model based on a finite-element model provides force during the insertion. The system allows for training without the need for the presence of a trained clinician or access to live patients or cadavers. A pilot user study demonstrates the potential and functionality of the system.
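The authors drive the haptic device from a finite-element tissue model; as a far simpler stand-in, the sketch below implements the classic piecewise needle-force profile (elastic ramp before puncture, then friction plus a constant cutting force) often used in needle-insertion haptics. All constants are invented.

```python
def needle_force(depth_mm, punctured):
    """Return (force in N, punctured flag) for a given insertion depth."""
    K_TISSUE = 0.25      # N/mm elastic stiffness before puncture (invented)
    PUNCTURE_AT = 8.0    # mm depth at which the membrane yields (invented)
    F_CUT = 1.2          # N constant cutting force (invented)
    MU = 0.08            # N/mm friction per inserted length (invented)
    if not punctured and depth_mm < PUNCTURE_AT:
        return K_TISSUE * depth_mm, False
    return F_CUT + MU * depth_mm, True

# Toy usage: force ramps up, drops character at puncture, then grows slowly.
punctured = False
for d in range(0, 20):
    f, punctured = needle_force(float(d), punctured)
    print(d, round(f, 3), punctured)
```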
Building the bionic eye: an emerging reality and opportunity.
Merabet, Lotfi B
2011-01-01
Once the topic of folklore and science fiction, the notion of restoring vision to the blind is now approaching a tractable reality. Technological advances have inspired numerous multidisciplinary groups worldwide to develop visual neuroprosthetic devices that could potentially provide useful vision and improve the quality of life of profoundly blind individuals. While a variety of approaches and designs are being pursued, they all share a common principle of creating visual percepts through the stimulation of visual neural elements using appropriate patterns of electrical stimulation. Human clinical trials are now well underway and initial results have been met with a balance of excitement and cautious optimism. As remaining technical and surgical challenges continue to be solved and clinical trials move forward, we now enter a phase of development that requires careful consideration of a new set of issues. Establishing appropriate patient selection criteria, methods of evaluating long-term performance and effectiveness, and strategies to rehabilitate implanted patients will all need to be considered in order to achieve optimal outcomes and establish these devices as viable therapeutic options. Copyright © 2011 Elsevier B.V. All rights reserved.
Real-time recording and classification of eye movements in an immersive virtual environment.
Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary
2013-10-10
Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
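The velocity-threshold (I-VT) idea behind fixation/saccade identification is compact enough to show. This hedged sketch computes the angular velocity between successive gaze-direction samples and labels fast samples as saccadic; the 100 deg/s threshold and 60 Hz rate are typical values, not the authors' settings.

```python
import numpy as np

def classify_ivt(gaze_dirs, rate_hz=60.0, thresh_deg_s=100.0):
    """gaze_dirs: (N, 3) unit gaze vectors; returns N labels."""
    v = np.asarray(gaze_dirs, float)
    cosang = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    vel = np.degrees(np.arccos(cosang)) * rate_hz   # deg/s between samples
    labels = np.where(vel > thresh_deg_s, "saccade", "fixation")
    return np.append(labels, labels[-1])            # pad last sample to length N
```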
Real-time recording and classification of eye movements in an immersive virtual environment
Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary
2013-01-01
Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements. PMID:24113087
On-patient see-through augmented reality based on visual SLAM.
Mahmoud, Nader; Grasa, Óscar G; Nicolau, Stéphane A; Doignon, Christophe; Soler, Luc; Marescaux, Jacques; Montiel, J M M
2017-01-01
An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The only hardware requirement is a commercial tablet-PC equipped with a camera. Thus, no external tracking device or artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless real-time tablet-PC camera location with respect to the patient. The preoperative model is registered with respect to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, and (4) requires minimal interaction from the medical staff.
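The anchor-based registration step admits a standard closed form. Given 4-6 anchor points on the preoperative model and the corresponding anatomical references located in the SLAM map, the rotation and translation aligning model to patient can be recovered with the Kabsch/Umeyama SVD method, sketched below; this illustrates the principle and is not the authors' code.

```python
import numpy as np

def rigid_align(model_pts, patient_pts):
    """Least-squares rigid transform mapping model points onto patient points."""
    A, B = np.asarray(model_pts, float), np.asarray(patient_pts, float)
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t                                  # p_patient = R @ p_model + t
```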
Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori
2016-03-01
An Internet-based tele-operative robotic catheter operating system was designed for vascular interventional surgery, to afford unskilled surgeons the opportunity to learn basic catheter/guidewire skills, while allowing experienced physicians to perform surgeries cooperatively. Remote surgical procedures, limited by variable transmission times for visual feedback, have been associated with deterioration in operability and vascular wall damage during surgery. At the patient's location, the catheter shape/position was detected in real time and converted into three-dimensional coordinates in a world coordinate system. At the operation location, the catheter shape was reconstructed in a virtual-reality environment, based on the coordinates received. The data volume reduction significantly reduced visual feedback transmission times. Remote transmission experiments, conducted over inter-country distances, demonstrated the improved performance of the proposed prototype. The maximum error for the catheter shape reconstruction was 0.93 mm and the transmission time was reduced considerably. The results were positive and demonstrate the feasibility of remote surgery using conventional network infrastructures. Copyright © 2015 John Wiley & Sons, Ltd.
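One way to picture the data-volume reduction is to transmit only a handful of 3-D control points of the detected catheter shape and rebuild a smooth curve at the operator's side. The sketch below uses scipy's splprep/splev as a stand-in for whatever reconstruction the authors used; the dense shape is synthetic.

```python
import numpy as np
from scipy.interpolate import splprep, splev

dense = np.cumsum(np.random.randn(200, 3) * 0.5, axis=0)  # detected shape (mm)
sent = dense[::20]                                         # transmit ~10 points

tck, _ = splprep(sent.T, s=0.0)                   # fit a parametric cubic spline
u = np.linspace(0.0, 1.0, 200)
rebuilt = np.array(splev(u, tck)).T               # curve redrawn remotely (200, 3)
```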
KinImmerse: Macromolecular VR for NMR ensembles
Block, Jeremy N; Zielinski, David J; Chen, Vincent B; Davis, Ian W; Vinson, E Claire; Brady, Rachael; Richardson, Jane S; Richardson, David C
2009-01-01
Background In molecular applications, virtual reality (VR) and immersive virtual environments have generally been used and valued for the visual and interactive experience – to enhance intuition and communicate excitement – rather than as part of the actual research process. In contrast, this work develops a software infrastructure for research use and illustrates such use on a specific case. Methods The Syzygy open-source toolkit for VR software was used to write the KinImmerse program, which translates the molecular capabilities of the kinemage graphics format into software for display and manipulation in the DiVE (Duke immersive Virtual Environment) or other VR system. KinImmerse is supported by the flexible display construction and editing features in the KiNG kinemage viewer and it implements new forms of user interaction in the DiVE. Results In addition to molecular visualizations and navigation, KinImmerse provides a set of research tools for manipulation, identification, co-centering of multiple models, free-form 3D annotation, and output of results. The molecular research test case analyzes the local neighborhood around an individual atom within an ensemble of nuclear magnetic resonance (NMR) models, enabling immersive visual comparison of the local conformation with the local NMR experimental data, including target curves for residual dipolar couplings (RDCs). Conclusion The promise of KinImmerse for production-level molecular research in the DiVE is shown by the locally co-centered RDC visualization developed there, which gave new insights now being pursued in wider data analysis. PMID:19222844
NASA Astrophysics Data System (ADS)
Simonetto, E.; Froment, C.; Labergerie, E.; Ferré, G.; Séchet, B.; Chédorge, H.; Cali, J.; Polidori, L.
2013-07-01
Terrestrial Laser Scanning (TLS), 3-D modeling, and Web visualization are the three key steps needed to store cultural heritage and grant free, wide access to it, as highlighted by many recent examples. The goal of this study is to set up 3-D Web resources for "virtually" visiting the exterior of the Abbaye de l'Epau, an old French abbey with both a rich history and delicate architecture. Virtuality is considered in two ways: free-flowing navigation around the abbey in a virtual reality environment, and a game activity using augmented reality. First of all, the data acquisition consists of a GPS and tacheometry survey, terrestrial laser scanning, and photography. After data pre-processing, the meshed and textured 3-D model is generated using the 3DReshaper commercial software. The virtual reality visit and the augmented reality animation are then created using the Unity software. This work shows the interest of such tools in bringing out regional cultural heritage and making it attractive to the public.
Augmenting the thermal flux experiment: A mixed reality approach with the HoloLens
NASA Astrophysics Data System (ADS)
Strzys, M. P.; Kapp, S.; Thees, M.; Kuhn, J.; Lukowicz, P.; Knierim, P.; Schmidt, A.
2017-09-01
In the field of Virtual Reality (VR) and Augmented Reality (AR), technologies have made huge progress during the last years and also reached the field of education. The virtuality continuum, ranging from pure virtuality on one side to the real world on the other, has been successfully covered by the use of immersive technologies like head-mounted displays, which allow one to embed virtual objects into the real surroundings, leading to a Mixed Reality (MR) experience. In such an environment, digital and real objects do not only coexist, but moreover are also able to interact with each other in real time. These concepts can be used to merge human perception of reality with digitally visualized sensor data, thereby making the invisible visible. As a first example, in this paper we introduce alongside the basic idea of this column an MR experiment in thermodynamics for a laboratory course for freshman students in physics or other science and engineering subjects that uses physical data from mobile devices for analyzing and displaying physical phenomena to students.
Evaluation of Postural Control in Patients with Glaucoma Using a Virtual Reality Environment.
Diniz-Filho, Alberto; Boer, Erwin R; Gracitelli, Carolina P B; Abe, Ricardo Y; van Driel, Nienke; Yang, Zhiyong; Medeiros, Felipe A
2015-06-01
To evaluate postural control using a dynamic virtual reality environment and the relationship between postural metrics and history of falls in patients with glaucoma. Cross-sectional study. The study involved 42 patients with glaucoma with repeatable visual field defects on standard automated perimetry (SAP) and 38 control healthy subjects. Patients underwent evaluation of postural stability by a force platform during presentation of static and dynamic visual stimuli on stereoscopic head-mounted goggles. The dynamic visual stimuli presented rotational and translational ecologically valid peripheral background perturbations. Postural stability was also tested in a completely dark field to assess somatosensory and vestibular contributions to postural control. History of falls was evaluated by a standard questionnaire. Torque moments around the center of foot pressure on the force platform were measured, and the standard deviations of the torque moments (STD) were calculated as a measurement of postural stability and reported in Newton meters (Nm). The association with history of falls was investigated using Poisson regression models. Age, gender, body mass index, severity of visual field defect, best-corrected visual acuity, and STD on dark field condition were included as confounding factors. Patients with glaucoma had larger overall STD than controls during both translational (5.12 ± 2.39 Nm vs. 3.85 ± 1.82 Nm, respectively; P = 0.005) and rotational stimuli (5.60 ± 3.82 Nm vs. 3.93 ± 2.07 Nm, respectively; P = 0.022). Postural metrics obtained during dynamic visual stimuli performed better in explaining history of falls compared with those obtained in static and dark field condition. In the multivariable model, STD values in the mediolateral direction during translational stimulus were significantly associated with a history of falls in patients with glaucoma (incidence rate ratio, 1.85; 95% confidence interval, 1.30-2.63; P = 0.001). The study presented and validated a novel paradigm for evaluation of balance control in patients with glaucoma on the basis of the assessment of postural reactivity to dynamic visual stimuli using a virtual reality environment. The newly developed metrics were associated with a history of falls and may help to provide a better understanding of balance control in patients with glaucoma. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
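The primary postural metric is simple to compute from force-platform time series. The sketch below takes mediolateral and anteroposterior torque-moment traces and returns their standard deviations in Nm; the data are synthetic, and the way the per-axis values are combined into an "overall" STD is an assumption, since the abstract does not spell it out.

```python
import numpy as np

def torque_std(mx_nm, my_nm):
    """mx: mediolateral, my: anteroposterior torque moments (Nm)."""
    ml = float(np.std(mx_nm, ddof=1))
    ap = float(np.std(my_nm, ddof=1))
    # "Overall" as the mean of the two axes is an assumption for illustration.
    return {"STD_ML": ml, "STD_AP": ap, "STD_overall": (ml + ap) / 2}

rng = np.random.default_rng(0)                    # synthetic 60-s trial at 100 Hz
mx = rng.normal(0.0, 5.0, 6000)
my = rng.normal(0.0, 4.0, 6000)
print(torque_std(mx, my))
```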
Evaluation of Postural Control in Glaucoma Patients Using a Virtual Reality Environment
Diniz-Filho, Alberto; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; van Driel, Nienke; Yang, Zhiyong; Medeiros, Felipe A.
2015-01-01
Purpose To evaluate postural control using a dynamic virtual reality environment and the relationship between postural metrics and history of falls in glaucoma patients. Design Cross-sectional study. Participants The study involved 42 glaucoma patients with repeatable visual field defects on standard automated perimetry (SAP) and 38 control healthy subjects. Methods Patients underwent evaluation of postural stability by a force platform during presentation of static and dynamic visual stimuli on stereoscopic head-mounted goggles. The dynamic visual stimuli presented rotational and translational ecologically valid peripheral background perturbations. Postural stability was also tested in a completely dark field to assess somatosensory and vestibular contributions to postural control. History of falls was evaluated by a standard questionnaire. Main Outcome Measures Torque moments around the center of foot pressure on the force platform were measured and the standard deviations (STD) of these torque moments were calculated as a measurement of postural stability and reported in Newton meter (Nm). The association with history of falls was investigated using Poisson regression models. Age, gender, body mass index, severity of visual field defect, best-corrected visual acuity, and STD on dark field condition were included as confounding factors. Results Glaucoma patients had larger overall STD than controls during both translational (5.12 ± 2.39 Nm vs. 3.85 ± 1.82 Nm, respectively; P = 0.005) as well as rotational stimuli (5.60 ± 3.82 Nm vs. 3.93 ± 2.07 Nm, respectively; P = 0.022). Postural metrics obtained during dynamic visual stimuli performed better in explaining history of falls compared to those obtained in static and dark field condition. In the multivariable model, STD values in the mediolateral direction during translational stimulus were significantly associated with history of falls in glaucoma patients (incidence-rate ratio = 1.85; 95% CI: 1.30 – 2.63; P = 0.001). Conclusions The study presented and validated a novel paradigm for evaluation of balance control in glaucoma patients based on the assessment of postural reactivity to dynamic visual stimuli using a virtual reality environment. The newly developed metrics were associated with history of falls and may help to provide a better understanding of balance control in glaucoma patients. PMID:25892017
Evaluating the Effects of Immersive Embodied Interaction on Cognition in Virtual Reality
NASA Astrophysics Data System (ADS)
Parmar, Dhaval
Virtual reality is on its advent of becoming mainstream household technology, as technologies such as head-mounted displays, trackers, and interaction devices are becoming affordable and easily available. Virtual reality (VR) has immense potential in enhancing the fields of education and training, and its power can be used to spark interest and enthusiasm among learners. It is, therefore, imperative to evaluate the risks and benefits that immersive virtual reality poses to the field of education. Research suggests that learning is an embodied process. Learning depends on grounded aspects of the body including action, perception, and interactions with the environment. This research aims to study if immersive embodiment through the means of virtual reality facilitates embodied cognition. A pedagogical VR solution which takes advantage of embodied cognition can lead to enhanced learning benefits. Towards achieving this goal, this research presents a linear continuum for immersive embodied interaction within virtual reality. This research evaluates the effects of three levels of immersive embodied interactions on cognitive thinking, presence, usability, and satisfaction among users in the fields of science, technology, engineering, and mathematics (STEM) education. Results from the presented experiments show that immersive virtual reality is greatly effective in knowledge acquisition and retention, and highly enhances user satisfaction, interest and enthusiasm. Users experience high levels of presence and are profoundly engaged in the learning activities within the immersive virtual environments. The studies presented in this research evaluate pedagogical VR software to train and motivate students in STEM education, and provide an empirical analysis comparing desktop VR (DVR), immersive VR (IVR), and immersive embodied VR (IEVR) conditions for learning. This research also proposes a fully immersive embodied interaction metaphor (IEIVR) for learning of computational concepts as a future direction, and presents the challenges faced in implementing the IEIVR metaphor due to extended periods of immersion. Results from the conducted studies help in formulating guidelines for virtual reality and education researchers working in STEM education and training, and for educators and curriculum developers seeking to improve student engagement in the STEM fields.
Mobile Technologies and Augmented Reality in Open Education
ERIC Educational Resources Information Center
Kurubacak, Gulsun, Ed.; Altinpulluk, Hakan, Ed.
2017-01-01
Novel trends and innovations have enhanced contemporary educational environments. When applied properly, these computing advances can create enriched learning opportunities for students. "Mobile Technologies and Augmented Reality in Open Education" is a pivotal reference source for the latest academic research on the integration of…
Endogenous Biologically Inspired Art of Complex Systems.
Ji, Haru; Wakefield, Graham
2016-01-01
Since 2007, Graham Wakefield and Haru Ji have looked to nature for inspiration as they have created a series of "artificial natures," or interactive visualizations of biologically inspired complex systems that can evoke nature-like aesthetic experiences within mixed-reality art installations. This article describes how they have applied visualization, sonification, and interaction design in their work with artificial ecosystems and organisms using specific examples from their exhibited installations.
NASA Astrophysics Data System (ADS)
Ratamero, Erick Martins; Bellini, Dom; Dowson, Christopher G.; Römer, Rudolf A.
2018-06-01
The ability to precisely visualize the atomic geometry of the interactions between a drug and its protein target in structural models is critical in predicting the correct modifications in previously identified inhibitors to create more effective next generation drugs. It is currently common practice among medicinal chemists while attempting the above to access the information contained in three-dimensional structures by using two-dimensional projections, which can preclude disclosure of useful features. A more accessible and intuitive visualization of the three-dimensional configuration of the atomic geometry in the models can be achieved through the implementation of immersive virtual reality (VR). While bespoke commercial VR suites are available, in this work, we present a freely available software pipeline for visualising protein structures through VR. New consumer hardware, such as the HTC Vive and the Oculus Rift utilized in this study, are available at reasonable prices. As an instructive example, we have combined VR visualization with fast algorithms for simulating intramolecular motions of protein flexibility, in an effort to further improve structure-led drug design by exposing molecular interactions that might be hidden in the less informative static models. This is a paradigmatic test case scenario for many similar applications in computer-aided molecular studies and design.
Ratamero, Erick Martins; Bellini, Dom; Dowson, Christopher G; Römer, Rudolf A
2018-06-07
The ability to precisely visualize the atomic geometry of the interactions between a drug and its protein target in structural models is critical in predicting the correct modifications in previously identified inhibitors to create more effective next generation drugs. It is currently common practice among medicinal chemists while attempting the above to access the information contained in three-dimensional structures by using two-dimensional projections, which can preclude disclosure of useful features. A more accessible and intuitive visualization of the three-dimensional configuration of the atomic geometry in the models can be achieved through the implementation of immersive virtual reality (VR). While bespoke commercial VR suites are available, in this work, we present a freely available software pipeline for visualising protein structures through VR. New consumer hardware, such as the HTC VIVE and the OCULUS RIFT utilized in this study, are available at reasonable prices. As an instructive example, we have combined VR visualization with fast algorithms for simulating intramolecular motions of protein flexibility, in an effort to further improve structure-led drug design by exposing molecular interactions that might be hidden in the less informative static models. This is a paradigmatic test case scenario for many similar applications in computer-aided molecular studies and design.
Use of cues in virtual reality depends on visual feedback.
Fulvio, Jacqueline M; Rokers, Bas
2017-11-22
3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.
Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U
2016-01-01
The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, point-of-gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker's workspace. Generally, the quality of the point-of-gaze measurements declined as the distance from the remote eye-tracker increased, and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point-of-gaze accuracy, precision, and trackability during either standing or walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and a "region of interest" analysis was applied. These findings support the feasibility of using a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
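The accuracy and precision notions used above can be made concrete. In this hedged sketch, accuracy is the mean angular offset of gaze samples from a known target and precision is the RMS of sample-to-sample angular steps, with screen positions converted to visual angles at a fixed viewing distance; the geometry is deliberately simplified.

```python
import numpy as np

def gaze_quality(gaze_xy_mm, target_xy_mm, distance_mm=650.0):
    """Accuracy and precision (deg) of gaze samples around one screen target."""
    g = np.asarray(gaze_xy_mm, float)
    offsets = np.degrees(np.arctan2(
        np.linalg.norm(g - np.asarray(target_xy_mm, float), axis=1), distance_mm))
    steps = np.degrees(np.arctan2(
        np.linalg.norm(np.diff(g, axis=0), axis=1), distance_mm))
    return {"accuracy_deg": float(offsets.mean()),
            "precision_rms_deg": float(np.sqrt(np.mean(steps ** 2)))}
```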
Wolle, Patrik; Müller, Matthias P; Rauh, Daniel
2018-03-16
The examination of three-dimensional structural models in scientific publications allows the reader to validate or invalidate the conclusions drawn by the authors. However, whether due to a (temporary) lack of access to proper visualization software or a lack of proficiency, this information is not necessarily accessible to every reader. As the digital revolution progresses, technologies have become widely available that overcome these limitations and offer everyone the opportunity to appreciate models not only in 2D but also in 3D. Additionally, mobile devices such as smartphones and tablets allow access to this information almost anywhere, at any time. Since access to such information has only recently become standard practice, we outline straightforward ways to incorporate 3D models in augmented reality into scientific publications, books, posters, and presentations, and suggest that this should become general practice.
Valdés, Julio J; Barton, Alan J
2007-05-01
A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
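One ingredient above, selecting solutions along the Pareto front, is easy to isolate. The sketch below returns the non-dominated subset of candidate solutions scored on the two objectives (classification error and similarity-structure preservation error); the genetic algorithm and NDA network themselves are omitted, and the scores are synthetic.

```python
import numpy as np

def pareto_front(scores):
    """scores: (N, 2) objectives to minimize; returns indices of non-dominated rows."""
    s = np.asarray(scores, float)
    keep = []
    for i, p in enumerate(s):
        # p is dominated if some row is <= p everywhere and < p somewhere.
        dominated = np.any(np.all(s <= p, axis=1) & np.any(s < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(1)
print(pareto_front(rng.random((20, 2))))          # synthetic two-objective scores
```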
Virtual Reality Used to Serve the Glenn Engineering Community
NASA Technical Reports Server (NTRS)
Carney, Dorothy V.
2001-01-01
There are a variety of innovative new visualization tools available to scientists and engineers for the display and analysis of their models. At the NASA Glenn Research Center, we have an ImmersaDesk, a large, single-panel, semi-immersive display device. This versatile unit can interactively display three-dimensional images in visual stereo. Our challenge is to make this virtual reality platform accessible and useful to researchers. An example of a successful application of this computer technology is the display of blade out simulations. NASA Glenn structural dynamicists, Dr. Kelly Carney and Dr. Charles Lawrence, funded by the Ultra Safe Propulsion Project under Base R&T, are researching blade outs, when turbine engines lose a fan blade during operation. Key objectives of this research include minimizing danger to the aircraft via effective blade containment, predicting destructive loads due to the imbalance following a blade loss, and identifying safe, cost-effective designs and materials for future engines.
A review of haptic simulator for oral and maxillofacial surgery based on virtual reality.
Chen, Xiaojun; Hu, Junlei
2018-06-01
Traditional medical training in oral and maxillofacial surgery (OMFS) is limited by low efficiency and high cost owing to the shortage of cadaver resources. By combining visual rendering with force feedback, surgery simulators have become increasingly popular in hospitals and medical schools as an alternative to traditional training. Areas covered: The major goal of this review is to provide a comprehensive reference source on current and future developments of haptic OMFS simulators based on virtual reality (VR) for relevant researchers. Expert commentary: Visual rendering, haptic rendering, tissue deformation, and evaluation are the key components of a haptic VR surgery simulator. Compared with traditional medical training, the fusion of visual and tactile cues in the virtual environment enables a considerably more vivid sensation, and trainees have more opportunities to practice surgical skills and to receive objective evaluation.
Takalo, Jouni; Piironen, Arto; Honkanen, Anna; Lempeä, Mikko; Aikio, Mika; Tuukkanen, Tuomas; Vähäsöyrinki, Mikko
2012-01-01
Ideally, neuronal functions would be studied by performing experiments with unconstrained animals whilst they behave in their natural environment. Although this is not feasible currently for most animal models, one can mimic the natural environment in the laboratory by using a virtual reality (VR) environment. Here we present a novel VR system based upon a spherical projection of computer generated images using a modified commercial data projector with an add-on fish-eye lens. This system provides equidistant visual stimulation with extensive coverage of the visual field, high spatio-temporal resolution and flexible stimulus generation using a standard computer. It also includes a track-ball system for closed-loop behavioural experiments with walking animals. We present a detailed description of the system and characterize it thoroughly. Finally, we demonstrate the VR system's performance whilst operating in closed-loop conditions by showing the movement trajectories of the cockroaches during exploratory behaviour in a VR forest.
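For reference, the equidistant ("f-theta") fish-eye model assumed in such spherical projections maps the off-axis angle linearly to image radius, which is why stimuli keep a constant angular scale across the whole field. A minimal sketch of that textbook relation:

```python
import numpy as np

def equidistant_radius(theta_rad, f=1.0):
    """Equidistant fish-eye model: image radius r = f * theta."""
    return f * theta_rad

# Equal angular steps map to equal radial steps on the projection surface:
for deg in (10, 30, 60, 90):
    r = equidistant_radius(np.radians(deg))
    print(f"{deg:2d} deg off-axis -> r = {r:.3f} (in units of f)")
```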
Kotranza, Aaron; Lind, D Scott; Lok, Benjamin
2012-07-01
We investigate the efficacy of incorporating real-time feedback of user performance within mixed-reality environments (MREs) for training real-world tasks with tightly coupled cognitive and psychomotor components. This paper presents an approach to providing real-time evaluation and visual feedback of learner performance in an MRE for training clinical breast examination (CBE). In a user study of experienced and novice CBE practitioners (n = 69), novices receiving real-time feedback performed equivalently or better than more experienced practitioners in the completeness and correctness of the exam. A second user study (n = 8) followed novices through repeated practice of CBE in the MRE. Results indicate that skills improvement in the MRE transfers to the real-world task of CBE of human patients. This initial case study demonstrates the efficacy of MREs incorporating real-time feedback for training real-world cognitive-psychomotor tasks.
Neglect assessment as an application of virtual reality.
Broeren, J; Samuelsson, H; Stibrant-Sunnerhagen, K; Blomstrand, C; Rydmark, M
2007-09-01
In this study a cancellation task in a virtual environment was applied to describe the pattern of search and the kinematics of hand movements in eight patients with right hemisphere stroke. Four of these patients had visual neglect and four had recovered clinically from initial symptoms of neglect. The performance of the patients was compared with that of a control group consisting of eight subjects with no history of neurological deficits. Patients with neglect as well as patients clinically recovered from neglect showed aberrant search performance in the virtual reality (VR) task, such as mixed search pattern, repeated target pressures and deviating hand movements. The results indicate that in patients with a right hemispheric stroke, this VR application can provide an additional tool for assessment that can identify small variations otherwise not detectable with standard paper-and-pencil tests. VR technology seems to be well suited for the assessment of visually guided manual exploration in space.
A Case-Based Study with Radiologists Performing Diagnosis Tasks in Virtual Reality.
Venson, José Eduardo; Albiero Berni, Jean Carlo; Edmilson da Silva Maia, Carlos; Marques da Silva, Ana Maria; Cordeiro d'Ornellas, Marcos; Maciel, Anderson
2017-01-01
In radiology diagnosis, medical images are most often visualized slice by slice. At the same time, visualization based on 3D volumetric rendering of the data is considered useful and has broadened its field of application. In this work, we present a case-based study with 16 medical specialists to assess the diagnostic effectiveness of a virtual reality interface for fracture identification over 3D volumetric reconstructions. We developed a VR volume viewer compatible with both the Oculus Rift and handheld-based head-mounted displays (HMDs). We then performed user experiments to validate the approach in a diagnosis environment. In addition, we assessed the subjects' perception of the 3D reconstruction quality, ease of interaction and ergonomics, and the users' opinions on how VR applications can be useful in healthcare. Among other results, we found a high level of effectiveness of the VR interface in identifying superficial fractures on head CTs.
Catching fly balls in virtual reality: a critical test of the outfielder problem.
Fink, Philip W; Foo, Patrick S; Warren, William H
2009-12-14
How does a baseball outfielder know where to run to catch a fly ball? The "outfielder problem" remains unresolved, and its solution would provide a window into the visual control of action. It may seem obvious that human action is based on an internal model of the physical world, such that the fielder predicts the landing point based on a mental model of the ball's trajectory (TP). However, two alternative theories, Optical Acceleration Cancellation (OAC) and Linear Optical Trajectory (LOT), propose that fielders are led to the right place at the right time by coupling their movements to visual information in a continuous "online" manner. All three theories predict successful catches and similar running paths. We provide a critical test by using virtual reality to perturb the vertical motion of the ball in mid-flight. The results confirm the predictions of OAC but are at odds with LOT and TP.
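The quantitative core of OAC is easy to verify: for drag-free parabolic flight viewed from the catch point, the tangent of the ball's elevation angle rises exactly linearly in time, so a fielder who nulls optical acceleration arrives where the ball lands. A short numerical check with assumed launch parameters:

```python
import numpy as np

g, vx, vy = 9.81, 12.0, 20.0      # assumed launch parameters (m/s)
T = 2 * vy / g                    # flight time back to launch height
x_land = vx * T

# Viewed from the landing point, tan(alpha) = y / (x_land - x) = (g / (2*vx)) * t,
# i.e., zero optical acceleration for an observer standing at the catch point:
for t in np.linspace(0.5, T - 0.5, 4):
    y = vy * t - 0.5 * g * t ** 2
    d = x_land - vx * t           # horizontal gap to the ball
    print(f"t = {t:4.2f} s  tan(alpha)/t = {y / d / t:.4f}")  # constant ~0.409
```

Perturbing the ball's vertical motion in mid-flight, as the study does in VR, breaks this invariant differently under each theory, which is what makes the perturbation a critical test.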
Virtual reality in surgical skills training.
Palter, Vanessa N; Grantcharov, Teodor P
2010-06-01
With recent concerns regarding patient safety, and legislation regarding resident work hours, it is accepted that a certain amount of surgical skills training will transition to the surgical skills laboratory. Virtual reality offers enormous potential to enhance technical and non-technical skills training outside the operating room. Virtual-reality systems range from basic low-fidelity devices to highly complex virtual environments. These systems can act as training and assessment tools, with the learned skills effectively transferring to an analogous clinical situation. Recent developments include expanding the role of virtual reality to allow for holistic, multidisciplinary team training in simulated operating rooms, and focusing on the role of virtual reality in evidence-based surgical curriculum design.
Borrel, Alexandre; Fourches, Denis
2017-12-01
There is growing interest in the broad use of Augmented Reality (AR) and Virtual Reality (VR) in the fields of bioinformatics and cheminformatics to visualize complex biological and chemical structures. AR and VR technologies allow for stunning and immersive experiences, offering untapped opportunities for both research and education purposes. However, preparing 3D models ready to use for AR and VR is time-consuming and requires technical expertise that severely limits the development of new content of potential interest for structural biologists, medicinal chemists, molecular modellers and teachers. Herein we present the RealityConvert software tool and associated website, which allow users to easily convert molecular objects to high-quality 3D models directly compatible with AR and VR applications. For chemical structures, in addition to the 3D model generation, RealityConvert also generates image trackers, used to call up and anchor that particular 3D model in AR applications. The ultimate goal of RealityConvert is to facilitate and boost the development and accessibility of AR and VR content for bioinformatics and cheminformatics applications. http://www.realityconvert.com. dfourch@ncsu.edu. Supplementary data are available at Bioinformatics online.
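As a toy version of the kind of conversion such a tool automates (an illustration, not RealityConvert's actual pipeline), one can embed a molecule in 3D and write its atom centres to a portable mesh format. RDKit is assumed available; the real tool additionally produces full sphere/stick geometry and an AR image tracker:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles("c1ccccc1O"))  # phenol, as an example
AllChem.EmbedMolecule(mol, randomSeed=42)          # generate 3D coordinates
conf = mol.GetConformer()

# Write only the atom centres as OBJ vertices (a full pipeline would emit
# sphere and cylinder meshes plus materials):
with open("phenol.obj", "w") as obj:
    for atom in mol.GetAtoms():
        p = conf.GetAtomPosition(atom.GetIdx())
        obj.write(f"v {p.x:.4f} {p.y:.4f} {p.z:.4f}\n")
```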
A virtual reality browser for Space Station models
NASA Technical Reports Server (NTRS)
Goldsby, Michael; Pandya, Abhilash; Aldridge, Ann; Maida, James
1993-01-01
The Graphics Analysis Facility at NASA/JSC has created a visualization and learning tool by merging its database of detailed geometric models with a virtual reality system. The system allows an interactive walk-through of models of the Space Station and other structures, providing detailed realistic stereo images. The user can activate audio messages describing the function and connectivity of selected components within his field of view. This paper presents the issues and trade-offs involved in the implementation of the VR system and discusses its suitability for its intended purposes.
Data Visualization Using Immersive Virtual Reality Tools
NASA Astrophysics Data System (ADS)
Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.
2013-01-01
The growing complexity of scientific data poses serious challenges for an effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for a scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects of various shapes, colors, and sizes, with the XYZ positions and other attributes encoding various dimensions of the parameter space, which can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.
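As a desktop-scale analogue of this encoding scheme (random stand-in data, not survey catalogs), the sketch below maps three columns to XYZ position and two more to colour and marker size; the immersive version layers shape encoding, interactivity, and shared navigation on the same idea:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.random((500, 5))            # stand-in catalog: 500 objects x 5 dims

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(data[:, 0], data[:, 1], data[:, 2],   # dims 1-3 -> position
           c=data[:, 3],                         # dim 4 -> colour
           s=20 + 180 * data[:, 4])              # dim 5 -> marker size
plt.show()
```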
Shema-Shiratzky, Shirley; Brozgol, Marina; Cornejo-Thumm, Pablo; Geva-Dayan, Karen; Rotstein, Michael; Leitner, Yael; Hausdorff, Jeffrey M; Mirelman, Anat
2018-05-17
To examine the feasibility and efficacy of a combined motor-cognitive training using virtual reality to enhance behavior, cognitive function and dual-tasking in children with Attention-Deficit/Hyperactivity Disorder (ADHD). Fourteen non-medicated school-aged children with ADHD received 18 training sessions over 6 weeks. Training included walking on a treadmill while negotiating virtual obstacles. Behavioral symptoms, cognition and gait were tested before and after the training and at 6-weeks follow-up. Based on parental report, there was a significant improvement in children's social problems and psychosomatic behavior after the training. Executive function and memory improved post-training while attention was unchanged. Gait regularity significantly increased during dual-task walking. Long-term training effects were maintained in memory and executive function. Treadmill training augmented with virtual reality is feasible and may be an effective treatment to enhance behavior, cognitive function and dual-tasking in children with ADHD.
NASA Astrophysics Data System (ADS)
Yu, K. C.; Raynolds, R. G.; Dechesne, M.
2008-12-01
New visualization technologies, from ArcGIS to Google Earth, have allowed for the integration of complex, disparate data sets to produce visually rich and compelling three-dimensional models of sub-surface and surface resource distribution patterns. The rendering of these models allows the public to quickly understand complicated geospatial relationships that would otherwise take much longer to explain using traditional media. We have impacted the community through topical policy presentations at both state and city levels, adult education classes at the Denver Museum of Nature and Science (DMNS), and public lectures at DMNS. We have constructed three-dimensional models from well data and surface observations which allow policy makers to better understand the distribution of groundwater in sandstone aquifers of the Denver Basin. Our presentations to local governments in the Denver metro area have allowed resource managers to better project future ground water depletion patterns, and to encourage development of alternative sources. DMNS adult education classes on water resources, geography, and regional geology, as well as public lectures on global issues such as earthquakes, tsunamis, and resource depletion, have utilized the visualizations developed from these research models. In addition to presenting GIS models in traditional lectures, we have also made use of the immersive display capabilities of the digital "fulldome" Gates Planetarium at DMNS. The real-time Uniview visualization application installed at Gates was designed for teaching astronomy, but it can be re-purposed for displaying our model datasets in the context of the Earth's surface. The 17-meter diameter dome of the Gates Planetarium allows an audience to have an immersive experience, similar to the virtual reality CAVEs employed by the oil exploration industry, that would otherwise not be available to the general public. Public lectures in the dome allow audiences of over 100 people to comprehend dynamically changing geospatial datasets in an exciting and engaging fashion. In our presentation, we will demonstrate how new software tools like Uniview can be used to dramatically enhance and accelerate public comprehension of complex, multi-scale geospatial phenomena.
Using augmented reality to teach and learn biochemistry.
Vega Garzón, Juan Carlos; Magrini, Marcio Luiz; Galembeck, Eduardo
2017-09-01
Understanding metabolism and metabolic pathways constitutes one of the central aims for students of biological sciences. Learning metabolic pathways should be focused on the understanding of general concepts and core principles. New technologies such as Augmented Reality (AR) have shown potential to improve assimilation of abstract biochemistry concepts because students can manipulate 3D molecules in real time. Here we describe an application named Augmented Reality Metabolic Pathways (ARMET), which allows students to visualize the 3D molecular structure of substrates and products, thus perceiving changes in each molecule. The structural modification of molecules shows students the flow and exchange of compounds and energy through metabolism.
Virtual Reality: You Are There
NASA Technical Reports Server (NTRS)
1993-01-01
Telepresence or "virtual reality," allows a person, with assistance from advanced technology devices, to figuratively project himself into another environment. This technology is marketed by several companies, among them Fakespace, Inc., a former Ames Research Center contractor. Fakespace developed a teleoperational motion platform for transmitting sounds and images from remote locations. The "Molly" matches the user's head motion and, when coupled with a stereo viewing device and appropriate software, creates the telepresence experience. Its companion piece is the BOOM-the user's viewing device that provides the sense of involvement in the virtual environment. Either system may be used alone. Because suits, gloves, headphones, etc. are not needed, a whole range of commercial applications is possible, including computer-aided design techniques and virtual reality visualizations. Customers include Sandia National Laboratories, Stanford Research Institute and Mattel Toys.
Deutsch, Judith E
2009-01-01
Improving walking for individuals with musculoskeletal and neuromuscular conditions is an important aspect of rehabilitation. The capabilities of clinicians who address these rehabilitation issues could be augmented with innovations such as virtual reality gaming based technologies. The chapter provides an overview of virtual reality gaming based technologies currently being developed and tested to improve motor and cognitive elements required for ambulation and mobility in different patient populations. Included as well is a detailed description of a single VR system, consisting of the rationale for development and iterative refinement of the system based on clinical science. These concepts include: neural plasticity, part-task training, whole task training, task specific training, principles of exercise and motor learning, sensorimotor integration, and visual spatial processing.
Lee, Hyung Young; Kim, You Lim; Lee, Suk Min
2015-06-01
[Purpose] This study aimed to investigate the clinical effects of virtual reality-based training and task-oriented training on balance performance in stroke patients. [Subjects and Methods] The subjects were randomly allocated to 2 groups: virtual reality-based training group (n = 12) and task-oriented training group (n = 12). The patients in the virtual reality-based training group used the Nintendo Wii Fit Plus, which provided visual and auditory feedback as well as the movements that enabled shifting of weight to the right and left sides, for 30 min/day, 3 times/week for 6 weeks. The patients in the task-oriented training group practiced additional task-oriented programs for 30 min/day, 3 times/week for 6 weeks. Patients in both groups also underwent conventional physical therapy for 60 min/day, 5 times/week for 6 weeks. [Results] Balance and functional reach test outcomes were examined in both groups. The results showed that the static balance and functional reach test outcomes were significantly higher in the virtual reality-based training group than in the task-oriented training group. [Conclusion] This study suggested that virtual reality-based training might be a more feasible and suitable therapeutic intervention for dynamic balance in stroke patients compared to task-oriented training.
Tang, Rui; Ma, Long-Fei; Rong, Zhi-Xia; Li, Mo-Dan; Zeng, Jian-Ping; Wang, Xue-Dong; Liao, Hong-En; Dong, Jia-Hong
2018-04-01
Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon to visualize intrahepatic structures and therefore to operate precisely and to improve clinical outcomes. The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used for searching publications in the PubMed database. The primary sources were peer-reviewed journals up to December 2016. Additional articles were identified by manual search of references found in the key articles. In general, AR technology mainly includes 3D reconstruction, display, registration and tracking techniques, and has recently been adopted gradually for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery are the main factors that limit the application of AR technology. With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for the improvement of long-term clinical outcomes. Future research is needed in the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.
Examining Young Children's Perception toward Augmented Reality-Infused Dramatic Play
ERIC Educational Resources Information Center
Han, Jeonghye; Jo, Miheon; Hyun, Eunja; So, Hyo-jeong
2015-01-01
Amid the increasing interest in applying augmented reality (AR) in educational settings, this study explores the design and enactment of an AR-infused robot system to enhance children's satisfaction and sensory engagement with dramatic play activities. In particular, we conducted an exploratory study to empirically examine children's perceptions…
Virtual Reality Social Cognition Training for Young Adults with High-Functioning Autism
ERIC Educational Resources Information Center
Kandalaft, Michelle R.; Didehbani, Nyaz; Krawczyk, Daniel C.; Allen, Tandra T.; Chapman, Sandra B.
2013-01-01
Few evidence-based social interventions exist for young adults with high-functioning autism, many of whom encounter significant challenges during the transition into adulthood. The current study investigated the feasibility of an engaging Virtual Reality Social Cognition Training intervention focused on enhancing social skills, social cognition,…
Integrating Augmented Reality Technology to Enhance Children's Learning in Marine Education
ERIC Educational Resources Information Center
Lu, Su-Ju; Liu, Ying-Chieh
2015-01-01
Marine education comprises rich and multifaceted issues. Raising general awareness of marine environments and issues demands the development of new learning materials. This study adapts concepts from digital game-based learning to design an innovative marine learning program integrating augmented reality (AR) technology for lower grade primary…
Peperkorn, Henrik M; Diemer, Julia E; Alpers, Georg W; Mühlberger, Andreas
2016-01-01
Embodiment (i.e., the involvement of a bodily representation) is thought to be relevant in emotional experiences. Virtual reality (VR) is a capable means of activating phobic fear in patients. The representation of the patient's body (e.g., the right hand) in VR enhances immersion and increases presence, but its effect on phobic fear is still unknown. We analyzed the influence of the presentation of the participant's hand in VR on presence and fear responses in 32 women with spider phobia and 32 matched controls. Participants sat in front of a table with an acrylic glass container within reaching distance. During the experiment this setup was concealed by a head-mounted display (HMD). The VR scenario presented via HMD showed the same setup, i.e., a table with an acrylic glass container. Participants were randomly assigned to one of two experimental groups. In one group, fear responses were triggered by fear-relevant visual input in VR (a virtual spider in the virtual acrylic glass container), while information about a real but unseen neutral control animal (a living snake in the acrylic glass container) was given. The second group received fear-relevant information about the real but unseen situation (a living spider in the acrylic glass container), but visual input was kept neutral in VR (a virtual snake in the virtual acrylic glass container). Participants were instructed to touch the acrylic glass container with their right hand in 20 consecutive trials. Visibility of the hand was varied randomly in a within-subjects design. We found for all participants that visibility of the participant's hand increased presence independently of the fear trigger. However, in patients, the influence of the virtual hand on fear depended on the fear trigger. When fear was triggered perceptually, i.e., by a virtual spider, the virtual hand increased fear. When fear was triggered by information about a real spider, the virtual hand had no effect on fear. Our results shed light on the significance of different fear triggers (visual, conceptual) in interaction with body representations.
AI applications to conceptual aircraft design
NASA Technical Reports Server (NTRS)
Chalfan, Kathryn M.
1990-01-01
This paper presents in viewgraph form several applications of artificial intelligence (AI) to the conceptual design of aircraft, including: an access manager for automated data management, AI techniques applied to optimization, and virtual reality for scientific visualization of the design prototype.
ERIC Educational Resources Information Center
Weber, Michael R.
2003-01-01
Superintendent offers several practical suggestions for dealing more effectively with negative school personnel and situations: Visualize success, know the realities, appreciate humor, surround negative people with positive staff members, be an absolute role model, understand psychology, reframe negative into positive energy, and use your…
Implementation of augmented reality to models sultan deli
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Lumbantobing, N. P.; Siregar, B.; Rahmat, R. F.; Andayani, U.
2018-03-01
Augmented reality is a technology that can provide visualization in the form of a 3D virtual model. Using augmented reality together with image-based modeling, a photograph of the Sultan of Deli can be reconstructed as a three-dimensional model of the Sultan at Istana Maimun. This is valuable because the Sultan of Deli, one of the important figures in the history of the development of the city of Medan, is little known to the public: the surviving images of the Deli Sultanate are unclear and very old. To achieve this goal, an augmented reality application is used, with an image-processing pipeline that turns images into 3D models through several toolkits. The output of this method is visitors' photographs of Maimun Palace augmented with a 3D model of the Sultan of Deli, with markers detected at distances of 20-60 cm, making it easy for the public to recognize the Sultan of Deli who once ruled at Maimun Palace.
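Marker-based anchoring of the kind described (a printed tracker detected at 20-60 cm, with the model rendered at its pose) is commonly built on fiducial libraries. The ArUco sketch below is an illustrative stand-in, not the authors' toolkit, and assumes the OpenCV 4.7+ aruco API:

```python
import cv2
import numpy as np

# Illustrative marker detection; in the real application the 3D model of the
# Sultan would be rendered at each detected marker's pose.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
corners, ids, _rejected = detector.detectMarkers(frame)
if ids is not None:
    print("anchor the 3D model at marker(s):", ids.ravel())
else:
    print("no marker in view")
```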
Utilizing virtual and augmented reality for educational and clinical enhancements in neurosurgery.
Pelargos, Panayiotis E; Nagasawa, Daniel T; Lagman, Carlito; Tenn, Stephen; Demos, Joanna V; Lee, Seung J; Bui, Timothy T; Barnette, Natalie E; Bhatt, Nikhilesh S; Ung, Nolan; Bari, Ausaf; Martin, Neil A; Yang, Isaac
2017-01-01
Neurosurgery has undergone a technological revolution over the past several decades, from trephination to image-guided navigation. Advancements in virtual reality (VR) and augmented reality (AR) represent some of the newest modalities being integrated into neurosurgical practice and resident education. In this review, we present a historical perspective of the development of VR and AR technologies, analyze their current uses, and discuss their emerging applications in the field of neurosurgery.
AR4VI: AR as an Accessibility Tool for People with Visual Impairments.
Coughlan, James M; Miele, Joshua
2017-10-01
Although AR technology has been largely dominated by visual media, a number of AR tools using both visual and auditory feedback have been developed specifically to assist people with low vision or blindness - an application domain that we term Augmented Reality for Visual Impairment (AR4VI). We describe two AR4VI tools developed at Smith-Kettlewell, as well as a number of pre-existing examples. We emphasize that AR4VI is a powerful tool with the potential to remove or significantly reduce a range of accessibility barriers. Rather than being restricted to use by people with visual impairments, AR4VI is a compelling universal design approach offering benefits for mainstream applications as well.
Cross-species 3D virtual reality toolbox for visual and cognitive experiments.
Doucet, Guillaume; Gulli, Roberto A; Martinez-Trujillo, Julio C
2016-06-15
Although simplified visual stimuli, such as dots or gratings presented on homogeneous backgrounds, provide strict control over the stimulus parameters during visual experiments, they fail to approximate visual stimulation in natural conditions. Adoption of virtual reality (VR) in neuroscience research has been proposed to circumvent this problem, by combining strict control of experimental variables and behavioral monitoring within complex and realistic environments. We have created a VR toolbox that maximizes experimental flexibility while minimizing implementation costs. A free VR engine (Unreal 3) has been customized to interface with any control software via text commands, allowing seamless introduction into pre-existing laboratory data acquisition frameworks. Furthermore, control functions are provided for the two most common programming languages used in visual neuroscience: Matlab and Python. The toolbox offers the millisecond time resolution necessary for electrophysiological recordings and is flexible enough to support cross-species usage across a wide range of paradigms. Unlike previously proposed VR solutions whose implementation is complex and time-consuming, our toolbox requires minimal customization or technical expertise to interface with pre-existing data acquisition frameworks, as it relies on already familiar programming environments. Moreover, as it is compatible with a variety of display and input devices, identical VR testing paradigms can be used across species, from rodents to humans. This toolbox facilitates the addition of VR capabilities to any laboratory without perturbing pre-existing data acquisition frameworks or requiring any major hardware changes.
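A minimal Python client for such a text-command interface might look like the sketch below; the host, port, and command names are invented for illustration and are not the toolbox's documented protocol:

```python
import socket

# Illustrative only: "LOAD_SCENE" and "MOVE_AVATAR" are assumed command names,
# and localhost:9000 is an assumed endpoint for the customized engine.
with socket.create_connection(("localhost", 9000), timeout=5.0) as sock:
    for cmd in ("LOAD_SCENE arena01", "MOVE_AVATAR 0.0 0.0 1.5"):
        sock.sendall((cmd + "\n").encode("ascii"))
        reply = sock.recv(1024).decode("ascii").strip()
        print(cmd, "->", reply)  # engine acknowledgement
```

The appeal of a plain-text protocol is exactly what the abstract claims: any acquisition framework that can write to a socket or serial port can drive the engine, regardless of language.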
Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter
2018-01-01
We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in situ at the spatial position of the 3D hologram. The tablet can interact with 3D content through touch, spatial positioning, and tangible markers; however, the 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still the fastest and most precise in almost all cases.
Graphical user interface concepts for tactical augmented reality
NASA Astrophysics Data System (ADS)
Argenta, Chris; Murphy, Anne; Hinton, Jeremy; Cook, James; Sherrill, Todd; Snarski, Steve
2010-04-01
Applied Research Associates and BAE Systems are working together to develop a wearable augmented reality system under the DARPA ULTRA-Vis program. Our approach to achieving the objectives of ULTRA-Vis, called iLeader, incorporates a full-color 40° field-of-view (FOV) see-through holographic waveguide integrated with sensors for full position and head tracking to provide an unobtrusive information system for operational maneuvers. iLeader will enable warfighters to mark up the 3D battle-space with symbologic identification of graphical control measures, friendly force positions and enemy/target locations. Our augmented reality display provides dynamic real-time painting of symbols on real objects, a pose-sensitive 360° representation of relevant object positions, and visual feedback for a variety of system activities. The iLeader user interface and situational awareness graphical representations are highly intuitive, nondisruptive, and always tactically relevant. We used best human-factors practices, system engineering expertise, and cognitive task analysis to design effective strategies for presenting real-time situational awareness to the military user without distorting their natural senses and perception. We present requirements identified for presenting information within a see-through display in combat environments, challenges in designing suitable visualization capabilities, and solutions that enable us to bring real-time iconic command and control to the tactical user community.
Salimi, Zohreh; Ferguson-Pell, Martin
2018-06-01
Although wheelchair ergometers provide a safe and controlled environment for studying or training wheelchair users, until recently they had a major disadvantage: they could simulate only straight-line wheelchair propulsion. Virtual reality has helped overcome this problem and broaden the usability of wheelchair ergometers. However, for a wheelchair ergometer to be validly used in research studies, it needs to be able to simulate the biomechanics of real-world wheelchair propulsion. In this paper, three versions of a wheelchair simulator were developed. They provide a sophisticated wheelchair ergometer in an immersive virtual reality environment. They are intended for manual wheelchair propulsion and all are able to simulate simple translational inertia. In addition, each of the systems reported uses a different approach to simulate wheelchair rotation and accommodate rotational inertial effects. The first system does not provide extra resistance against rotation and relies merely on linear inertia, on the hypothesis that this can provide acceptable replication of the biomechanics of wheelchair maneuvers. The second and third systems, however, are designed to simulate rotational inertia: System II uses mechanical compensation, and System III uses visual compensation, simulating the influence that rotational inertia has on the visual perception of wheelchair movement in response to rotation at different speeds.
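One plausible reading of the mechanical route (System II) is to derive an opposing torque from the yaw acceleration implied by the wheel-speed difference. The sketch below is an assumption-laden illustration, not the published design, and all names and parameter values are made up:

```python
def yaw_resistance_torque(v_left, v_right, v_left_prev, v_right_prev, dt,
                          track_width=0.55, yaw_inertia=5.0):
    """Torque (N*m) opposing the chair's simulated yaw acceleration.

    Yaw rate follows from the wheel-speed difference across the track width;
    the opposing torque is tau = I * alpha (assumed parameter values)."""
    omega = (v_right - v_left) / track_width
    omega_prev = (v_right_prev - v_left_prev) / track_width
    alpha = (omega - omega_prev) / dt
    return yaw_inertia * alpha

# A turn that is speeding up meets extra resistance, as a real chair would:
print(yaw_resistance_torque(1.0, 1.4, 1.0, 1.2, dt=0.1))
```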
A 3-D mixed-reality system for stereoscopic visualization of medical dataset.
Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco
2009-11-01
We developed a simple, light, and low-cost 3-D visualization device based on mixed reality that physicians can use to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the real patient grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted in correspondence with the user's eyes and grab live images of the patient from the same point of view as the user. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of the virtual patient with the real one are computed using machine vision methods applied to pairs of live images. Experimental results, concerning frame rate and alignment precision between virtual and real patient, demonstrate that the machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.
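Image-based registration of this kind typically reduces to estimating camera pose from known 3D-2D correspondences. The OpenCV sketch below shows that building block with made-up model points and intrinsics; it illustrates the idea and is not the authors' specific method:

```python
import numpy as np
import cv2

# Known coplanar anatomy-model points (mm) and where they appear in one image (px):
object_pts = np.array([[0, 0, 0], [80, 0, 0], [80, 80, 0], [0, 80, 0]], np.float32)
image_pts = np.array([[300, 220], [420, 225], [415, 345], [295, 340]], np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)  # intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())  # pose used to overlay the virtual anatomy
```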
Sun, Guo-Chen; Wang, Fei; Chen, Xiao-Lei; Yu, Xin-Guang; Ma, Xiao-Dong; Zhou, Ding-Biao; Zhu, Ru-Yuan; Xu, Bai-Nan
2016-12-01
The utility of virtual and augmented reality based on functional neuronavigation and intraoperative magnetic resonance imaging (MRI) for glioma surgery has not been previously investigated. The study population consisted of 79 glioma patients and 55 control subjects. Preoperatively, the lesion and related eloquent structures were visualized by diffusion tensor tractography and blood oxygen level-dependent functional MRI. Intraoperatively, microscope-based functional neuronavigation was used to integrate the reconstructed eloquent structure and the real head and brain, which enabled safe resection of the lesion. Intraoperative MRI was used to verify brain shift during the surgical process and provided quality control during surgery. The control group underwent surgery guided by anatomic neuronavigation. Virtual and augmented reality protocols based on functional neuronavigation and intraoperative MRI provided useful information for performing tailored and optimized surgery. Complete resection was achieved in 55 of 79 (69.6%) glioma patients and 20 of 55 (36.4%) control subjects, with average resection rates of 95.2% ± 8.5% and 84.9% ± 15.7%, respectively. Both the complete resection rate and average extent of resection differed significantly between the 2 groups (P < 0.01). Postoperatively, the rate of preservation of neural functions (motor, visual field, and language) was lower in controls than in glioma patients at 2 weeks and 3 months (P < 0.01). Combining virtual and augmented reality based on functional neuronavigation and intraoperative MRI can facilitate resection of gliomas involving eloquent areas.
Virtual reality applications in improving postural control and minimizing falls.
Virk, Sumandeep; McConville, Kristiina M Valter
2006-01-01
Maintaining balance under all conditions is an absolute requirement for humans. Orientation in space and balance maintenance require inputs from the vestibular, visual, proprioceptive and somatosensory systems. All the cues coming from these systems are integrated by the central nervous system (CNS) to employ different strategies for orientation and balance. How the CNS integrates all the inputs and makes cognitive decisions about balance strategies has long been an area of interest for biomedical engineers. More interesting is the fact that in the absence of one or more cues, or when the input from one of the sensors is skewed, the CNS "adapts" to the new environment and gives less weight to the conflicting inputs [1]. The focus of this paper is a review of the different strategies and models put forward by researchers to explain the integration of these sensory cues. The paper also compares the different approaches used by young and old adults in maintaining balance. Since the musculoskeletal, visual and vestibular systems deteriorate with age, older subjects have to compensate for these impaired sensory cues to maintain postural stability. The paper also discusses the applications of virtual reality in rehabilitation programs, not only for balance in the elderly but also for occupational falls. Virtual reality has profound applications in the field of balance rehabilitation and training because of its relatively low cost. Studies will be conducted to evaluate the effectiveness of virtual reality training in modifying head and eye movement strategies, and to determine the role of these responses in the maintenance of balance.
Virtual reality in ophthalmology training.
Khalifa, Yousuf M; Bogorad, David; Gibson, Vincent; Peifer, John; Nussbaum, Julian
2006-01-01
Current training models are limited by an unstructured curriculum, financial costs, human costs, and time constraints. With the newly mandated resident surgical competency, training programs are struggling to find viable methods of assessing and documenting the surgical skills of trainees. Virtual-reality technologies have been used for decades in flight simulation to train and assess competency, and there has been a recent push in surgical specialties to incorporate virtual-reality simulation into residency programs. These efforts have culminated in an FDA-approved carotid stenting simulator. What role virtual reality will play in the evolution of ophthalmology surgical curriculum is uncertain. The current apprentice system has served the art of surgery for over 100 years, and we foresee virtual reality working synergistically with our current curriculum modalities to streamline and enhance the resident's learning experience.
Virtual reality and hallucination: a technoetic perspective
NASA Astrophysics Data System (ADS)
Slattery, Diana R.
2008-02-01
Virtual Reality (VR), especially in a technologically focused discourse, is defined by a class of hardware and software, among them head-mounted displays (HMDs), navigation and pointing devices, and stereoscopic imaging. This presentation examines the experiential aspect of VR. Putting "virtual" in front of "reality" modifies the ontological status of a class of experience, that of "reality." Reality has also been modified (by artists, new media theorists, technologists and philosophers) as augmented, mixed, simulated, artificial, layered, and enhanced. Modifications of reality are closely tied to modifications of perception. Media theorist Roy Ascott creates a model of three "VRs": Verifiable Reality, Virtual Reality, and Vegetal (entheogenically induced) Reality. The ways in which we shift our perceptual assumptions, create and verify illusions, and enter "the willing suspension of disbelief" that allows us entry into imaginal worlds are central to the experience of VR worlds, whether those worlds are explicitly representational (robotic manipulations by VR) or explicitly imaginal (VR artistic creations). The early rhetoric surrounding VR was interwoven with psychedelics, a perception amplified by Timothy Leary's presence on the historic SIGGRAPH panel, and the Wall Street Journal's tag of VR as "electronic LSD." This paper discusses the connections (philosophical, social-historical, and psychological-perceptual) between these two domains.
Increase of frontal neuronal activity in chronic neglect after training in virtual reality.
Ekman, U; Fordell, H; Eriksson, J; Lenfeldt, N; Wåhlin, A; Eklund, A; Malm, J
2018-05-16
A third of patients with stroke acquire spatial neglect, which is associated with poor rehabilitation outcome. New effective rehabilitation interventions are needed. Scanning training combined with multisensory stimulation has been suggested to enhance the rehabilitation effect. Accordingly, we have designed a virtual-reality-based scanning training that combines visual, audio and sensorimotor stimulation, called RehAtt®. Effects were shown in behavioural tests and activities of daily living. Here, we use fMRI to evaluate the change in brain activity during the Posner cuing task (an attention task) after RehAtt® intervention in patients with chronic neglect. Twelve patients (mean age = 72.7 years, SD = 6.1) with chronic neglect (persistent symptoms >6 months) performed the intervention 3 times/week for 5 weeks, 15 hours in total. Training effects on brain activity were evaluated using fMRI task-evoked responses during the Posner cuing task before and after the intervention. Patients improved their performance in the Posner fMRI task. In addition, patients increased their task-evoked brain activity after the VR intervention in an extended network including pre-frontal and temporal cortex during attentional cueing, but showed no training effects during target presentations. The current pilot study demonstrates that a novel multisensory VR intervention has the potential to benefit patients with chronic neglect in terms of behaviour and brain changes. Specifically, the fMRI results show that strategic processes (top-down control during attentional cueing) were enhanced by the intervention. The findings increase knowledge of the plasticity processes underlying the positive rehabilitation effects of RehAtt® in chronic neglect.
Hoermann, Simon; Ferreira Dos Santos, Luara; Morkisch, Nadine; Jettkowski, Katrin; Sillis, Moran; Devan, Hemakumar; Kanagasabai, Parimala S; Schmidt, Henning; Krüger, Jörg; Dohle, Christian; Regenbrecht, Holger; Hale, Leigh; Cutfield, Nicholas J
2017-07-01
New rehabilitation strategies for post-stroke upper limb rehabilitation employing visual stimulation show promising results; however, cost-efficient and clinically feasible ways to provide these interventions are still lacking. An integral step is to translate recent technological advances, such as in virtual and augmented reality, into therapeutic practice to improve outcomes for patients. This requires research on the adaptation of the technology for clinical use as well as on the appropriate guidelines and protocols for sustainable integration into therapeutic routines. Here, we present and evaluate a novel and affordable augmented reality system (Augmented Reflection Technology, ART) in combination with a validated mirror therapy protocol for upper limb rehabilitation after stroke. We evaluated components of the therapeutic intervention, from the patients' and the therapists' points of view, in a clinical feasibility study at a rehabilitation centre. We also assessed the integration of ART as an adjunct therapy for the clinical rehabilitation of subacute patients at two different hospitals. The results showed that the combination and application of the Berlin Protocol for Mirror Therapy together with ART was feasible for clinical use. This combination was integrated into the therapeutic plan of subacute stroke patients at the two clinical locations where the second part of this research was conducted. Our findings pave the way for using technology to provide mirror therapy in clinical settings and show potential for the more effective use of inpatient time and enhanced recoveries for patients. Implications for Rehabilitation: Computerised mirror therapy is feasible for clinical use. Augmented Reflection Technology can be integrated as an adjunctive therapeutic intervention for subacute stroke patients in an inpatient setting. Virtual rehabilitation devices such as Augmented Reflection Technology have considerable potential to enhance stroke rehabilitation.
NASA Astrophysics Data System (ADS)
Halik, Łukasz
2012-11-01
The objective of the present deliberations was to systematise our knowledge of the static visual variables used to create cartographic symbols, and to analyse the possibility of their utilisation in Augmented Reality (AR) applications on smartphone-type mobile devices. This was accomplished by collating the visual variables identified over the years by different researchers, starting from the classification introduced by J. Bertin. The usefulness of each visual variable was analysed in terms of four characteristics: selectivity, associativity, expression of quantity, and expression of order. An attempt was made to provide an overview of the static visual variables and to describe the AR system, which constitutes a new paradigm of the user interface. The change in approach to the presentation of point objects is caused by the different perspective used when observing objects (an egocentric view) compared with traditional analogue maps (a geocentric view). The topics presented refer to the fast-developing field of mobile cartography. Particular emphasis is put on smartphone-type mobile devices and their applicability in the process of designing cartographic symbols.
Virtual Reality Visualization of Permafrost Dynamics Along a Transect Through Northern Alaska
NASA Astrophysics Data System (ADS)
Chappell, G. G.; Brody, B.; Webb, P.; Chord, J.; Romanovsky, V.; Tipenko, G.
2004-12-01
Understanding permafrost dynamics poses a significant challenge for researchers and planners. Our project uses nontraditional visualization tools to create a 3-D interactive virtual-reality environment in which permafrost dynamics can be explored and experimented with. We have incorporated a numerical soil temperature model by Gennadiy Tipenko and Vladimir Romanovsky of the Geophysical Institute at the University of Alaska Fairbanks into an animated tour in space and time in the virtual reality facility of the Arctic Region Supercomputing Center at the University of Alaska Fairbanks. The software is being written by undergraduate interns Patrick Webb and Jordanna Chord under the direction of Professors Chappell and Brody. When using our software, the user appears to be surrounded by a 3-D computer-generated model of the state of Alaska. The eastern portion of the state is displaced upward from the western portion. The data are represented on an animated vertical strip running between the two parts, as if eastern Alaska were raised up, and the soil at the cut could be viewed. We use coloring to highlight significant properties and features of the soil: temperature, the active layer, etc. The user can view data from various parts of the state simply by walking to the appropriate location in the model, or by using a flying-style interface to cover longer distances. Using a control panel, the user can also alter the time, viewing the data for a particular date, or watching the data change with time: a high-speed movie in which long-term changes in permafrost are readily apparent. In the second phase of the project, we connect the visualization directly to the model, running in real time. We allow the user to manipulate the input data and get immediate visual feedback. For example, the user might specify the kind and placement of ground cover, by "painting" snowpack, plant species, or fire damage, and be able to see the effect on permafrost stability with no significant time lag.
Wang, Ching-Yi; Hwang, Wen-Juh; Fang, Jing-Jing; Sheu, Ching-Fan; Leong, Iat-Fai; Ma, Hui-Ing
2011-08-01
To compare the performance of reaching for stationary and moving targets in virtual reality (VR) and physical reality in persons with Parkinson's disease (PD). A repeated-measures design in which all participants reached in physical reality and VR under 5 conditions: 1 stationary ball condition and 4 conditions with the ball moving at different speeds. University research laboratory. Persons with idiopathic PD (n=29) and age-matched controls (n=25). Not applicable. Success rates and kinematics of arm movement (movement time, amplitude of peak velocity, and percentage of movement time for the acceleration phase). In both VR and physical reality, the PD group had longer movement time (P<.001) and lower peak velocity (P<.001) than the controls when reaching for stationary balls. When moving targets were provided, the PD group improved more than the controls did in movement time (P<.001) and peak velocity (P<.001), and reached a performance level similar to that of the controls. Except for the fastest moving ball condition (0.5-s target viewing time), which elicited worse performance in VR than in physical reality, most cueing conditions in VR elicited performance generally similar to that in physical reality. Although slower than the controls when reaching for stationary balls, persons with PD increased movement speed in response to fast moving balls in both VR and physical reality. This suggests that with an appropriate choice of cueing speed, VR is a promising tool for providing visual motion stimuli to improve movement speed in persons with PD. More research on the long-term effect of this type of VR training program is needed. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
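The kinematic outcome measures above have standard definitions that a short sketch can make concrete: movement time from velocity-threshold onset to offset, peak tangential velocity, and the acceleration phase as the percentage of movement time spent before peak velocity. The sampling rate, threshold, and toy trajectory below are illustrative assumptions, not the study's processing pipeline.

```python
# Sketch of reach kinematics computed from a sampled 3-D hand trajectory.
import numpy as np

def reach_kinematics(positions, fs=120.0, threshold=0.05):
    """positions: (n, 3) array of hand coordinates in metres; fs in Hz.
    threshold: speed (m/s) marking movement onset and offset."""
    velocity = np.gradient(positions, 1.0 / fs, axis=0)  # m/s per axis
    speed = np.linalg.norm(velocity, axis=1)             # tangential speed
    moving = np.flatnonzero(speed > threshold)
    if moving.size == 0:
        raise ValueError("no movement above threshold")
    onset, offset = moving[0], moving[-1]
    movement_time = (offset - onset) / fs                # seconds
    peak_idx = onset + np.argmax(speed[onset:offset + 1])
    peak_velocity = speed[peak_idx]                      # m/s
    # acceleration phase: fraction of movement time before peak velocity
    accel_pct = 100.0 * (peak_idx - onset) / max(offset - onset, 1)
    return movement_time, peak_velocity, accel_pct

# Toy bell-shaped reach: 0.5 s, 30 cm along one axis, sampled at 120 Hz
t = np.linspace(0, 0.5, 61)
x = 0.3 * (t / 0.5 - np.sin(2 * np.pi * t / 0.5) / (2 * np.pi))
traj = np.column_stack([x, np.zeros_like(x), np.zeros_like(x)])
print(reach_kinematics(traj))
```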
Pavone, Enea Francesco; Tieri, Gaetano; Rizza, Giulia; Tidoni, Emmanuele; Grisoni, Luigi; Aglioti, Salvatore Maria
2016-01-13
Brain monitoring of errors in one's own and others' actions is crucial for a variety of processes, ranging from the fine-tuning of motor skill learning to important social functions, such as reading out and anticipating the intentions of others. Here, we combined immersive virtual reality and EEG recording to explore whether embodying the errors of an avatar by seeing it from a first-person perspective may activate the error monitoring system in the brain of an onlooker. We asked healthy participants to observe, from a first-person (1PP) or third-person (3PP) perspective, an avatar performing a correct or an incorrect reach-to-grasp movement toward one of two virtual mugs placed on a table. At the end of each trial, participants reported verbally how much they embodied the avatar's arm. Ratings were maximal in the first-person perspective, indicating that immersive virtual reality can be a powerful tool to induce embodiment of an artificial agent, even through mere visual perception and in the absence of any cross-modal boosting. Observation of erroneous grasping from a first-person perspective enhanced error-related negativity and medial-frontal theta power in the trials where human onlookers embodied the virtual character, hinting at the tight link between early, automatic coding of error detection and the sense of embodiment. Error positivity was similar in 1PP and 3PP, suggesting that conscious coding of errors is similar for self and other. Thus, embodiment plays an important role in activating specific components of the action monitoring system when others' errors are coded as if they were one's own. Detecting errors in others' actions is crucial for social functions, such as reading out and anticipating the intentions of others. Using immersive virtual reality and EEG recording, we explored how the brain of an onlooker reacted to the errors of an avatar seen from a first-person perspective. We found that mere observation of erroneous actions enhances electrocortical markers of error detection in the trials where human onlookers embodied the virtual character. Thus, the cerebral system for action monitoring is maximally activated when others' errors are coded as if they were one's own. The results have important implications for understanding how the brain can control the external world and thus for creating new brain-computer interfaces. Copyright © 2016 the authors 0270-6474/16/360268-12$15.00/0.
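For readers unfamiliar with the two electrocortical markers named above, the sketch below computes them from error-locked epochs in a generic way: the error-related negativity (ERN) as the baseline-corrected mean amplitude in an early post-error window at a fronto-central channel, and medial-frontal theta as 4-8 Hz band power. The channel choice, windows, filter order, and synthetic trials are assumptions for illustration, not the authors' pipeline.

```python
# Generic sketch of ERN amplitude and theta band power from error-locked epochs.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500  # Hz; epochs span -0.2..0.8 s, so error onset falls at sample 100

def ern_amplitude(epochs, fs=FS, window=(0.0, 0.1), baseline=(-0.2, 0.0)):
    """Mean ERP amplitude in `window` after baseline correction.
    epochs: (n_trials, n_samples) single-channel (e.g., FCz) data, microvolts."""
    t0 = int(0.2 * fs)  # index of error onset within the epoch
    erp = epochs.mean(axis=0)
    erp = erp - erp[t0 + int(baseline[0] * fs): t0].mean()
    return erp[t0 + int(window[0] * fs): t0 + int(window[1] * fs)].mean()

def theta_power(epochs, fs=FS, band=(4.0, 8.0)):
    """Mean theta-band power via band-pass filtering and the Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)
    return (np.abs(hilbert(filtered, axis=1)) ** 2).mean()

# Toy data: 40 trials of noise with a negative deflection shortly after "errors"
rng = np.random.default_rng(0)
epochs = rng.normal(0, 2, (40, 500))
epochs[:, 110:140] -= 5.0  # simulated ERN roughly 20-80 ms post-error
print(ern_amplitude(epochs), theta_power(epochs))
```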
ERIC Educational Resources Information Center
Ackerly, Spafford C.
2001-01-01
Explains the vestibular organ's role in balancing the body and stabilizing the visual world using the example of a hunter. Describes the relationship between sensory perception and learning. Recommends using optical illusions to illustrate the distinctions between external realities and internal perceptions. (Contains 13 references.) (YDS)
Augmented Reality: A Brand New Challenge for the Assessment and Treatment of Psychological Disorders
Chicchi Giglioli, Irene Alice; Pallavicini, Federica; Pedroli, Elisa; Serino, Silvia; Riva, Giuseppe
2015-01-01
Augmented Reality is a new technological system that allows virtual content to be introduced into the real world so that both run in the same representation, enhancing the user's sensory perception of reality in real time. From another point of view, Augmented Reality can be defined as a set of techniques and tools that add information to physical reality. To date, Augmented Reality has been used in many fields, such as medicine, entertainment, maintenance, architecture, education, and cognitive and motor rehabilitation, but very few studies and applications of AR exist in clinical psychology. In the treatment of psychological disorders, Augmented Reality has shown preliminary evidence of being a useful tool, owing to its interactivity and its adaptability to patient needs and therapeutic purposes. Another relevant factor is the quality of the user's experience in the Augmented Reality system, determined by emotional engagement and sense of presence. This experience could increase the ecological validity of AR in the treatment of psychological disorders. This paper reviews recent studies on the use of Augmented Reality in the evaluation and treatment of psychological disorders, focusing on current uses of this technology and on the specific features that make Augmented Reality a new technique useful for psychology. PMID:26339283
NASA Astrophysics Data System (ADS)
Ribeiro, Allan; Santos, Helen
With the advent of new information and communication technologies (ICTs), communicative interaction is changing how people act and relate to one another, and at the same time changing the nature of work activities related to education. Within the range of possibilities opened up by advances in computational resources, virtual reality (VR) and augmented reality (AR) stand out as new forms of information visualization in computer applications. While VR allows user interaction with a virtual environment that is entirely computer generated, in AR virtual images are inserted into the real environment; both create new opportunities to support teaching and learning in formal and informal contexts. Such technologies can express representations of reality or of the imagination, such as systems at the nanoscale and in low dimensionality, making it imperative to explore, across the most diverse areas of knowledge, the potential offered by ICTs and emerging technologies. In this sense, this work presents virtual and augmented reality computer applications developed using modeling and simulation in computational approaches to topics related to nanoscience and nanotechnology, articulated with innovative pedagogical practices.
Reality-Based Learning: Outbreak, an Engaging Introductory Course in Public Health and Epidemiology
ERIC Educational Resources Information Center
Calonge, David Santandreu; Grando, Danilla
2013-01-01
Objective: To develop a totally online reality-based course that engages students and enables the development of enhanced teamwork and report-writing skills. Setting: Outbreaks of infectious diseases impact upon commerce, trade, and tourism, as well as placing strains on healthcare systems. A general course introducing university students to…
Enhancing an Instructional Design Model for Virtual Reality-Based Learning
ERIC Educational Resources Information Center
Chen, Chwen Jen; Teh, Chee Siong
2013-01-01
In order to effectively utilize the capabilities of virtual reality (VR) in supporting the desired learning outcomes, careful consideration in the design of instruction for VR learning is crucial. In line with this concern, previous work proposed an instructional design model that prescribes instructional methods to guide the design of VR-based…
ERIC Educational Resources Information Center
Davis, Eric S.; Clark, Mary Ann
2012-01-01
In this qualitative study, eight school counselors participated in a series of reality play counseling trainings that introduced techniques appropriate for counseling upper-grade elementary school students to enhance positive relationship-building and problem-solving skills. Participants were interviewed and their transcripts were analyzed using…
Exploring the Impact of Culture- and Language-Influenced Physics on Science Attitude Enhancement
ERIC Educational Resources Information Center
Morales, Marie Paz E.
2016-01-01
"Culture," a set of principles that trace and familiarize human beings within their existential realities, may provide an invisible lens through which reality could be discerned. Critically explored in this study is how culture- and language-sensitive curriculum materials in physics improve Pangasinan learners' attitude toward science.…
Using Augmented Reality Tools to Enhance Children's Library Services
ERIC Educational Resources Information Center
Meredith, Tamara R.
2015-01-01
Augmented reality (AR) has been used and documented for a variety of commercial and educational purposes, and the proliferation of mobile devices has increased the average person's access to AR systems and tools. However, little research has been done in the area of using AR to supplement traditional library services, specifically for patrons aged…
ARTutor--An Augmented Reality Platform for Interactive Distance Learning
ERIC Educational Resources Information Center
Lytridis, Chris; Tsinakos, Avgoustos; Kazanidis, Ioannis
2018-01-01
Augmented Reality (AR) has been used in various contexts in recent years in order to enhance user experiences in mobile and wearable devices. Various studies have shown the utility of AR, especially in the field of education, where it has been observed that learning results are improved. However, such applications require specialized teams of…