ERIC Educational Resources Information Center
Martinez, Guadalupe; Naranjo, Francisco L.; Perez, Angel L.; Suero, Maria Isabel; Pardo, Pedro J.
2011-01-01
This study compared the educational effects of computer simulations developed in a hyper-realistic virtual environment with the educational effects of either traditional schematic simulations or a traditional optics laboratory. The virtual environment was constructed on the basis of Java applets complemented with a photorealistic visual output.…
NASA Technical Reports Server (NTRS)
Terashima, Nobuyoshi
1994-01-01
In the future, remote images sent over communication lines will be reproduced in virtual reality (VR). This form of virtual telecommunications, which will allow observers to engage in an activity as though it were real, is the focus of considerable attention. Taken a step further, real and unreal objects will be placed in a single space to create an extremely realistic environment. Here, imaginary and other life forms, as well as people and animals in remote locations, will gather via telecommunication lines that create a common environment where these life forms can work and interact together. Words, gestures, diagrams and other forms of communication will be used freely in performing work. Actual construction of a system based on this new concept will not only provide people with experiences that would have been impossible in the past, but will also inspire new applications in which people will function in environments where it would have been difficult, if not impossible, for them to function until now. This paper describes the Tele Hyper Virtuality concept: its definition, applications, the key technologies needed to accomplish it, and future prospects.
NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.
Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul
2014-09-30
As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.
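NEDE itself is a suite of C# scripts for Unity3D; purely as an illustration of one idea described above (randomizing stimulus placements while keeping precise timing records that can later be aligned with EEG or eye-tracking data), here is a minimal, hypothetical Python sketch. The class, fields, and stimulus names are invented for illustration and are not part of NEDE.

```python
import random
import time

class StimulusLog:
    """Toy event log: records which randomized stimulus appeared when,
    against a single reference clock, so the records can later be aligned
    with an external recording (e.g., eye tracker or EEG) synchronized to
    the same clock."""

    def __init__(self, stimuli):
        self.stimuli = list(stimuli)
        self.events = []                 # (timestamp, stimulus_id, position)
        self.t0 = time.monotonic()       # reference clock for the session

    def place_randomized(self, positions):
        """Shuffle stimuli over candidate 3D positions and log each placement."""
        random.shuffle(self.stimuli)
        for stim, pos in zip(self.stimuli, positions):
            self.events.append((time.monotonic() - self.t0, stim, pos))

    def dump(self):
        for t, stim, pos in self.events:
            print(f"{t:8.3f}s  {stim:12s}  at {pos}")

if __name__ == "__main__":
    log = StimulusLog(["target_car", "distractor_1", "distractor_2"])
    log.place_randomized([(1.0, 0.0, 4.5), (-2.0, 0.0, 6.0), (3.5, 0.0, 2.0)])
    log.dump()
```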
Realistic terrain visualization based on 3D virtual world technology
NASA Astrophysics Data System (ADS)
Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai
2009-09-01
The rapid advances in information technologies, e.g., networks, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on this foundation work of realistic terrain visualization in virtual environments.
Hyper-realistic face masks: a new challenge in person identification.
Sanders, Jet Gabrielle; Ueda, Yoshiyuki; Minemoto, Kazusa; Noyes, Eilidh; Yoshikawa, Sakiko; Jenkins, Rob
2017-01-01
We often identify people using face images. This is true in occupational settings such as passport control as well as in everyday social environments. Mapping between images and identities assumes that facial appearance is stable within certain bounds. For example, a person's apparent age, gender and ethnicity change slowly, if at all. It also assumes that deliberate changes beyond these bounds (i.e., disguises) would be easy to spot. Hyper-realistic face masks overturn these assumptions by allowing the wearer to look like an entirely different person. If unnoticed, these masks break the link between facial appearance and personal identity, with clear implications for applied face recognition. However, to date, no one has assessed the realism of these masks, or specified conditions under which they may be accepted as real faces. Herein, we examined incidental detection of unexpected but attended hyper-realistic masks in both photographic and live presentations. Experiment 1 (UK; n = 60) revealed no evidence for overt detection of hyper-realistic masks among real face photos, and little evidence of covert detection. Experiment 2 (Japan; n = 60) extended these findings to different masks, mask-wearers and participant pools. In Experiment 3 (UK and Japan; n = 407), passers-by failed to notice that a live confederate was wearing a hyper-realistic mask and showed limited evidence of covert detection, even at close viewing distance (5 vs. 20 m). Across all of these studies, viewers accepted hyper-realistic masks as real faces. Specific countermeasures will be required if detection rates are to be improved.
Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment
2015-06-09
Floren, Andrew; Naylor, Bruce; Miikkulainen, Risto; Ress, David. Center for Learning and Memory, The University of Texas at Austin, 100 E 24th Street, Stop C7000, Austin, TX 78712, USA (afloren@utexas.edu). Accurately decoding visual information from fMRI data obtained in a realistic virtual environment. Front. Hum. Neurosci. 9:327. doi: 10.3389/fnhum.2015.00327
Lorenz, D; Armbruster, W; Vogelgesang, C; Hoffmann, H; Pattar, A; Schmidt, D; Volk, T; Kubulus, D
2016-09-01
Chief emergency physicians are regarded as an important element in the care of the injured and sick following mass casualty accidents. Their education is very theoretical; practical content, in contrast, often falls short. Limitations are usually the very high costs of realistic (large-scale) exercises, poor reproducibility of the scenarios, and correspondingly poor results. Because of the complexity of mass casualty accidents, modified training concepts are required to substantially improve the educational level; they must teach not only the theoretical but above all the practical skills considerably more intensively than at present. Modern training concepts should make it possible for the learner to realistically simulate decision processes. This article examines how interactive virtual environments are applicable to the education of emergency personnel and how they could be designed. Virtual simulation and training environments offer the possibility of simulating complex situations in an adequately realistic manner. The so-called virtual reality (VR) used in this context is an interface technology that enables free interaction in addition to a stereoscopic and spatial representation of virtual large-scale emergencies in a virtual environment. Variables in scenarios, such as the weather, the number of wounded, and the availability of resources, can be changed at any time. The trainees are able to practice the procedures in many virtual accident scenes and act them out repeatedly, thereby testing the different variants. With the aid of the "InSitu" project, it is possible to train in a virtual reality with realistically reproduced accident situations. These integrated, interactive training environments can depict very complex situations on a scale of 1:1. Because of the highly developed interactivity, the trainees can feel as if they are a direct part of the accident scene and therefore identify much more with the virtual world than is possible with desktop systems. Interactive, identifiable, and realistic training environments based on projector systems could in future enable repetitive exercises with changes within a decision tree, with reproducibility, and within different occupational groups. With a single hardware and software environment, numerous accident situations can be depicted and practiced. The main expense is the creation of the virtual accident scenes. As the appropriate city models and other three-dimensional geographical data are already available, this expenditure is very low compared with the planning costs of a large-scale exercise.
Challenges to the development of complex virtual reality surgical simulations.
Seymour, N E; Røtnes, J S
2006-11-01
Virtual reality simulation in surgical training has become more widely used and intensely investigated in an effort to develop safer, more efficient, measurable training processes. The development of virtual reality simulation of surgical procedures has begun, but well-described technical obstacles must be overcome to permit varied training in a clinically realistic computer-generated environment. These challenges include development of realistic surgical interfaces and physical objects within the computer-generated environment, modeling of realistic interactions between objects, rendering of the surgical field, and development of signal processing for complex events associated with surgery. Of these, the realistic modeling of tissue objects that are fully responsive to surgical manipulations is the most challenging. Threats to early success include relatively limited resources for development and procurement, as well as smaller potential for return on investment than in other simulation industries that face similar problems. Despite these difficulties, steady progress continues to be made in these areas. If executed properly, virtual reality offers inherent advantages over other training systems in creating a realistic surgical environment and facilitating measurement of surgeon performance. Once developed, complex new virtual reality training devices must be validated for their usefulness in formative training and assessment of skill to be established.
Virtual Education: Guidelines for Using Games Technology
ERIC Educational Resources Information Center
Schofield, Damian
2014-01-01
Advanced three-dimensional virtual environment technology, similar to that used by the film and computer games industry, can allow educational developers to rapidly create realistic online virtual environments. This technology has been used to generate a range of interactive Virtual Reality (VR) learning environments across a spectrum of…
Distributed virtual environment for emergency medical training
NASA Astrophysics Data System (ADS)
Stytz, Martin R.; Banks, Sheila B.; Garcia, Brian W.; Godsell-Stytz, Gayl M.
1997-07-01
In many professions where individuals must work in a team in a high-stress environment to accomplish a time-critical task, individual and team performance can benefit from joint training using distributed virtual environments (DVEs). One professional field that lacks but needs a high-fidelity team training environment is the field of emergency medicine. Currently, emergency department (ED) medical personnel train by using words to create a mental picture of a situation for the physician and staff, who then cooperate to solve the problems portrayed by the word picture. The need in emergency medicine for realistic virtual team training is critical because ED staff typically encounter rarely occurring but life-threatening situations only once in their careers and because ED teams currently have no realistic environment in which to practice their team skills. The resulting lack of experience and teamwork makes diagnosis and treatment more difficult. Virtual environment-based training has the potential to redress these shortfalls. The objective of our research is to develop a state-of-the-art virtual environment for emergency medicine team training. The virtual emergency room (VER) allows ED physicians and medical staff to realistically prepare for emergency medical situations by performing triage, diagnosis, and treatment on virtual patients within an environment that provides them with the tools they require and the team environment they need to realistically perform these three tasks. There are several issues that must be addressed before this vision is realized. The key issues deal with distribution of computations; the doctor and staff interface to the virtual patient and ED equipment; the accurate simulation of individual patient organs' response to injury, medication, and treatment; and an accurate modeling of the symptoms and appearance of the patient while maintaining a real-time interaction capability. Our ongoing work addresses all of these issues. In this paper we report on our prototype VER system and its distributed system architecture for an emergency department distributed virtual environment for emergency medical staff training. The virtual environment enables emergency department physicians and staff to develop their diagnostic and treatment skills using the virtual tools they need to perform diagnostic and treatment tasks. Virtual human imagery and real-time virtual human response are used to create the virtual patient and present a scenario. Patient vital signs are available to the emergency department team as they manage the virtual case. The work reported here consists of the system architectures we developed for the distributed components of the virtual emergency room. The architectures we describe consist of the network-level architecture as well as the software architecture for each actor within the virtual emergency room. We describe the role of distributed interactive simulation and other enabling technologies within the virtual emergency room project.
Virtual Reality Simulation Training for Ebola Deployment.
Ragazzoni, Luca; Ingrassia, Pier Luigi; Echeverri, Lina; Maccapani, Fabio; Berryman, Lizzy; Burkle, Frederick M; Della Corte, Francesco
2015-10-01
Both virtual and hybrid simulation training offer a realistic and effective educational framework and opportunity to provide virtual exposure to operational public health skills that are essential for infection control and Ebola treatment management. This training is designed to increase staff safety and create a safe and realistic environment where trainees can gain essential basic and advanced skills.
Using voice input and audio feedback to enhance the reality of a virtual experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miner, N.E.
1994-04-01
Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.
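The abstract above stays at the level of design discussion; purely as a toy illustration of the simplest possible voice-command loop (mapping recognized phrases to actions in the virtual world and returning audio-style feedback), here is a minimal Python sketch. The speech recognizer itself is abstracted to plain text, and the command names are invented; this is not Sandia's system.

```python
# Toy command dispatcher: maps recognized phrases to virtual-world actions
# and returns spoken-style feedback (printed here). The speech recognizer
# itself is abstracted away as plain text input.

COMMANDS = {
    "open door": "The door slides open.",
    "lights on": "The room brightens.",
    "move forward": "You drift forward through the scene.",
}

def handle_utterance(text):
    """Normalize an utterance and return the audio feedback to play."""
    key = text.lower().strip()
    return COMMANDS.get(key, "Sorry, I did not understand that.")

if __name__ == "__main__":
    for utterance in ["Lights on", "open door", "fly to the moon"]:
        print(f"user said: {utterance!r} -> feedback: {handle_utterance(utterance)}")
```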
ERIC Educational Resources Information Center
Barbalios, N.; Ioannidou, I.; Tzionas, P.; Paraskeuopoulos, S.
2013-01-01
This paper introduces a realistic 3D model supported virtual environment for environmental education, that highlights the importance of water resource sharing by focusing on the tragedy of the commons dilemma. The proposed virtual environment entails simulations that are controlled by a multi-agent simulation model of a real ecosystem consisting…
The Evolution of Constructivist Learning Environments: Immersion in Distributed, Virtual Worlds.
ERIC Educational Resources Information Center
Dede, Chris
1995-01-01
Discusses the evolution of constructivist learning environments and examines the collaboration of simulated software models, virtual environments, and evolving mental models via immersion in artificial realities. A sidebar gives a realistic example of a student navigating through cyberspace. (JMV)
Virtual Reality as Innovative Approach to the Interior Designing
NASA Astrophysics Data System (ADS)
Kaleja, Pavol; Kozlovská, Mária
2017-06-01
We can observe significant potential of information and communication technologies (ICT) in the interior designing field through the development of software and hardware virtual reality tools. Using ICT tools offers a realistic perception of a proposal from its initial idea (the study). A group of real-time visualization tools, supported by hardware such as the Oculus Rift and HTC Vive, provides free walkthrough and movement in a virtual interior with the possibility of virtual designing. By improving ICT software tools for designing in virtual reality we can achieve an ever more realistic virtual environment. This contribution presents an innovative approach to interior designing in virtual reality, using the latest software and hardware ICT virtual reality technologies.
Photorealistic virtual anatomy based on Chinese Visible Human data.
Heng, P A; Zhang, S X; Xie, Y M; Wong, T T; Chui, Y P; Cheng, C Y
2006-04-01
Virtual reality-based learning of human anatomy is feasible when a database of 3D organ models is available for the learner to explore, visualize, and dissect in virtual space interactively. In this article, we present our latest work on photorealistic virtual anatomy applications based on the Chinese Visible Human (CVH) data. We have focused on the development of state-of-the-art virtual environments that feature interactive photo-realistic visualization and dissection of virtual anatomical models constructed from ultra-high resolution CVH datasets. We also outline our latest progress in applying these highly accurate virtual and functional organ models to give a realistic look and feel to advanced surgical simulators. (c) 2006 Wiley-Liss, Inc.
LivePhantom: Retrieving Virtual World Light Data to Real Environments.
Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal
2016-01-01
To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
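As a toy illustration of the phantom idea described in this abstract (real geometry, captured as a depth map, occluding virtual content), the following Python/NumPy sketch compares a real-scene depth map against the depth of a virtual object per pixel. The arrays and values are synthetic stand-ins for Kinect output; the paper's actual reconstruction, tracking, and shadow-casting pipeline is far richer.

```python
import numpy as np

def occlusion_mask(real_depth, virtual_depth):
    """Return a boolean mask that is True where the real surface is closer
    to the camera than the virtual object, i.e. where the virtual pixel
    should be hidden behind its real 'phantom'."""
    return real_depth < virtual_depth

if __name__ == "__main__":
    # Synthetic 4x4 depth maps in metres (stand-ins for Kinect output).
    real = np.array([[1.2, 1.2, 3.0, 3.0],
                     [1.1, 1.1, 3.0, 3.0],
                     [2.5, 2.5, 2.5, 2.5],
                     [2.5, 2.5, 2.5, 2.5]])
    virtual = np.full((4, 4), 2.0)   # a virtual plane 2 m from the camera
    mask = occlusion_mask(real, virtual)
    print(mask)                      # True where real geometry hides the virtual plane
```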
Intelligent Motion and Interaction Within Virtual Environments
NASA Technical Reports Server (NTRS)
Ellis, Stephen R. (Editor); Slater, Mel (Editor); Alexander, Thomas (Editor)
2007-01-01
What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters will examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the London conference on Intelligent Motion and Interaction within Virtual Environments, which was held at University College London, U.K., 15-17 September 2003.
McCorkle, Doug
2017-12-27
Ames Laboratory scientist Doug McCorkle explains osgBullet, a 3-D virtual simulation software, and how it helps engineers design complex products and systems in a realistic, real-time virtual environment.
EXPLORING ENVIRONMENTAL DATA IN A HIGHLY IMMERSIVE VIRTUAL REALITY ENVIRONMENT
Geography inherently fills a 3D space and yet we struggle with displaying geography using, primarily, 2D display devices. Virtual environments offer a more realistically-dimensioned display space and this is being realized in the expanding area of research on 3D Geographic Infor...
Using a Virtual Store As a Research Tool to Investigate Consumer In-store Behavior.
Ploydanai, Kunalai; van den Puttelaar, Jos; van Herpen, Erica; van Trijp, Hans
2017-07-24
People's responses to products and/or choice environments are crucial to understanding in-store consumer behaviors. Currently, there are various approaches (e.g., surveys or laboratory settings) to study in-store behaviors, but the external validity of these is limited by their poor capability to resemble realistic choice environments. In addition, building a real store to meet experimental conditions while controlling for undesirable effects is costly and highly difficult. A virtual store developed by virtual reality techniques potentially transcends these limitations by offering the simulation of a 3D virtual store environment in a realistic, flexible, and cost-efficient way. In particular, a virtual store interactively allows consumers (participants) to experience and interact with objects in a tightly controlled yet realistic setting. This paper presents the key elements of using a desktop virtual store to study in-store consumer behavior. Descriptions of the protocol steps to: 1) build the experimental store, 2) prepare the data management program, 3) run the virtual store experiment, and 4) organize and export data from the data management program are presented. The virtual store enables participants to navigate through the store, choose a product from alternatives, and select or return products. Moreover, consumer-related shopping behaviors (e.g., shopping time, walking speed, and number and type of products examined and bought) can also be collected. The protocol is illustrated with an example of a store layout experiment showing that shelf length and shelf orientation influence shopping- and movement-related behaviors. This demonstrates that the use of a virtual store facilitates the study of consumer responses. The virtual store can be especially helpful when examining factors that are costly or difficult to change in real life (e.g., overall store layout), products that are not presently available in the market, and routinized behaviors in familiar environments.
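To make the kinds of measures listed above concrete (shopping time, and the number and type of products examined and bought), here is a minimal, hypothetical Python logger. The class and its methods are invented for illustration and are not part of the authors' data management program.

```python
import time

class ShoppingSession:
    """Toy log of in-store events (examine / buy) with timestamps, mirroring
    the measures listed in the abstract: shopping time, and number and type
    of products examined and bought."""

    def __init__(self):
        self.start = time.monotonic()
        self.examined, self.bought = [], []

    def examine(self, product):
        self.examined.append((time.monotonic() - self.start, product))

    def buy(self, product):
        self.bought.append((time.monotonic() - self.start, product))

    def summary(self):
        return {
            "shopping_time_s": time.monotonic() - self.start,
            "n_examined": len(self.examined),
            "n_bought": len(self.bought),
        }

if __name__ == "__main__":
    s = ShoppingSession()
    s.examine("cereal_A"); s.examine("cereal_B"); s.buy("cereal_B")
    print(s.summary())
```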
Environments for online maritime simulators with cloud computing capabilities
NASA Astrophysics Data System (ADS)
Raicu, Gabriel; Raicu, Alexandra
2016-12-01
This paper presents the cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve a good simulation quality in large networked environments using open source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts coupled with the latest achievements in virtual and augmented reality will enhance the overall experience, leading to new developments and innovations. We have to deal with a multiprocessing situation using advanced technologies and distributed applications involving remote ship scenarios and the automation of ship operations.
HVS: an image-based approach for constructing virtual environments
NASA Astrophysics Data System (ADS)
Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao
1998-09-01
Virtual Reality Systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) imagery and SPOT HRV imagery. They are digitally warped on the fly to simulate walking forward/backward, moving left/right, and looking around through 360 degrees. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.
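As a toy illustration of the image-based idea (warping recorded images on the fly to approximate viewpoint changes rather than rendering 3D geometry), the sketch below fakes a step forward by cropping the centre of a frame and resampling it back to full size with nearest-neighbour indexing. HVS's actual warps are considerably more sophisticated; this example only assumes NumPy and invented array data.

```python
import numpy as np

def zoom_view(image, factor):
    """Crudely simulate stepping forward by cropping the centre of a frame
    and resampling it back to full size (nearest-neighbour)."""
    h, w = image.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    rows = np.arange(h) * ch // h            # nearest-neighbour row indices
    cols = np.arange(w) * cw // w            # nearest-neighbour column indices
    return crop[rows][:, cols]

if __name__ == "__main__":
    frame = np.arange(64, dtype=np.uint8).reshape(8, 8)   # stand-in for a photo
    print(zoom_view(frame, factor=2.0))
```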
The Influences of the 2D Image-Based Augmented Reality and Virtual Reality on Student Learning
ERIC Educational Resources Information Center
Liou, Hsin-Hun; Yang, Stephen J. H.; Chen, Sherry Y.; Tarng, Wernhuar
2017-01-01
Virtual reality (VR) learning environments can provide students with concepts of the simulated phenomena, but users are not allowed to interact with real elements. Conversely, augmented reality (AR) learning environments blend real-world environments so AR could enhance the effects of computer simulation and promote students' realistic experience.…
Zibrek, Katja; Kokkinara, Elena; Mcdonnell, Rachel
2018-04-01
Virtual characters that appear almost photo-realistic have been shown to induce negative responses from viewers in traditional media, such as film and video games. This effect, described as the uncanny valley, is the reason why realism is often avoided when the aim is to create an appealing virtual character. In Virtual Reality, there have been few attempts to investigate this phenomenon and the implications of rendering virtual characters with high levels of realism on user enjoyment. In this paper, we conducted a large-scale experiment on over one thousand members of the public in order to gather information on how virtual characters are perceived in interactive virtual reality games. We were particularly interested in whether different render styles (realistic, cartoon, etc.) would directly influence appeal, or if a character's personality was the most important indicator of appeal. We used a number of perceptual metrics such as subjective ratings, proximity, and attribution bias in order to test our hypothesis. Our main result shows that affinity towards virtual characters is a complex interaction between the character's appearance and personality, and that realism is in fact a positive choice for virtual characters in virtual reality.
The benefits of virtual reality simulator training for laparoscopic surgery.
Hart, Roger; Karthigasu, Krishnan
2007-08-01
Virtual reality is a computer-generated system that provides a representation of an environment. This review will analyse the literature with regard to any benefit to be derived from training with virtual reality equipment and describe the current equipment available. Virtual reality systems are not yet realistic representations of the live operating environment because they lack tactile sensation and do not represent a complete operation. The literature suggests that virtual reality training is a valuable learning tool for gynaecologists in training, particularly those in the early stages of their careers. Furthermore, it may be of benefit for the ongoing audit of surgical skills and for the early identification of a surgeon's deficiencies before operative incidents occur. It is only a matter of time before realistic virtual reality models of most complete gynaecological operations are available, with improved haptics as a result of improved computer technology. It is inevitable that in the modern climate of litigation virtual reality training will become an essential part of clinical training, as evidence for its effectiveness as a training tool exists, and in many countries training by operating on live animals is not possible.
Using smartphone technology to deliver a virtual pedestrian environment: usability and validation.
Schwebel, David C; Severson, Joan; He, Yefei
2017-09-01
Various programs effectively teach children to cross streets more safely, but all are labor- and cost-intensive. Recent developments in mobile phone technology offer opportunity to deliver virtual reality pedestrian environments to mobile smartphone platforms. Such an environment may offer a cost- and labor-effective strategy to teach children to cross streets safely. This study evaluated usability, feasibility, and validity of a smartphone-based virtual pedestrian environment. A total of 68 adults completed 12 virtual crossings within each of two virtual pedestrian environments, one delivered by smartphone and the other a semi-immersive kiosk virtual environment. Participants completed self-report measures of perceived realism and simulator sickness experienced in each virtual environment, plus self-reported demographic and personality characteristics. All participants followed system instructions and used the smartphone-based virtual environment without difficulty. No significant simulator sickness was reported or observed. Users rated the smartphone virtual environment as highly realistic. Convergent validity was detected, with many aspects of pedestrian behavior in the smartphone-based virtual environment matching behavior in the kiosk virtual environment. Anticipated correlations between personality and kiosk virtual reality pedestrian behavior emerged for the smartphone-based system. A smartphone-based virtual environment can be usable and valid. Future research should develop and evaluate such a training system.
SIMPAVE : evaluation of virtual environments for pavement construction simulations
DOT National Transportation Integrated Search
2007-05-01
In the last couple of years, the authors have been developing virtual simulations for modeling the construction of asphalt pavements. The simulations are graphically rich, interactive, three-dimensional, with realistic physics, and allow multiple peo...
Development of a virtual speaking simulator using Image Based Rendering.
Lee, J M; Kim, H; Oh, M J; Ku, J H; Jang, D P; Kim, I Y; Kim, S I
2002-01-01
The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled the use of virtual reality (VR) for the treatment of the fear of public speaking. There are two techniques for building virtual environments for the treatment of this fear: a model-based and a movie-based method. Both methods have the weakness that they are unrealistic and not individually controllable. To overcome these disadvantages, this paper presents a virtual environment produced with Image Based Rendering (IBR) and a chroma-key simultaneously. IBR enables the creation of realistic virtual environments where the images are stitched panoramically from photos taken with a digital camera. And the use of chroma-keys puts virtual audience members under individual control in the environment. In addition, a real-time capture technique is used in constructing the virtual environments, enabling spoken interaction between the subject and a therapist or another subject.
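Since this abstract hinges on combining panoramic IBR footage with chroma-keyed audience video, here is a minimal NumPy sketch of the chroma-key compositing step alone: pixels close to the key colour in the audience footage are replaced by the panoramic background. The colours, tolerance, and array sizes are illustrative and are not taken from the paper.

```python
import numpy as np

def chroma_key(foreground, background, key=(0, 255, 0), tol=60):
    """Composite audience footage shot in front of a key colour onto a
    panoramic background: pixels near the key colour become transparent."""
    dist = np.linalg.norm(foreground.astype(float) - np.array(key, float), axis=-1)
    mask = (dist < tol)[..., None]            # True where the key colour shows
    return np.where(mask, background, foreground)

if __name__ == "__main__":
    h, w = 4, 4
    fg = np.zeros((h, w, 3), np.uint8); fg[:] = (0, 255, 0)   # all key colour
    fg[1:3, 1:3] = (200, 180, 160)                            # an "audience" patch
    bg = np.full((h, w, 3), 90, np.uint8)                     # panorama stand-in
    print(chroma_key(fg, bg)[..., 0])                         # red channel of result
```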
Avatars Go to Class: A Virtual Environment Soil Science Activity
ERIC Educational Resources Information Center
Mamo, M.; Namuth-Covert, D.; Guru, A.; Nugent, G.; Phillips, L.; Sandall, L.; Kettler, T.; McCallister, D.
2011-01-01
Web 2.0 technology is expanding rapidly from social and gaming uses into the educational applications. Specifically, the multi-user virtual environment (MUVE), such as SecondLife, allows educators to fill the gap of first-hand experience by creating simulated realistic evolving problems/games. In a pilot study, a team of educators at the…
The Optokinetic Cervical Reflex (OKCR) in Pilots of High-Performance Aircraft.
1997-04-01
Coupled System virtual reality - the attempt to create a realistic, three-dimensional environment or synthetic immersive environment in which the user ...factors interface between the pilot and the flight environment. The final section is a case study of head- and helmet-mounted displays (HMD) and the impact...themselves as actually moving (flying) through a virtual environment. However, in the studies of Held, et al. (1975) and Young, et al. (1975) the
The Potential for Scientific Collaboration in Virtual Ecosystems
ERIC Educational Resources Information Center
Magerko, Brian
2010-01-01
This article explores the potential benefits of creating "virtual ecosystems" from real-world data. These ecosystems are intended to be realistic virtual representations of environments that may be costly or difficult to access in person. They can be constructed as 3D worlds rendered from stereo video data, augmented with scientific data, and then…
Headphone and Head-Mounted Visual Displays for Virtual Environments
NASA Technical Reports Server (NTRS)
Begault, Duran R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.
ERIC Educational Resources Information Center
Thies, Anna-Lena; Weissenstein, Anne; Haulsen, Ivo; Marschall, Bernhard; Friederichs, Hendrik
2014-01-01
Simulation as a tool for medical education has gained considerable importance in the past years. Various studies have shown that the mastering of basic skills happens best if taught in a realistic and workplace-based context. It is necessary that simulation itself takes place in the realistic background of a genuine clinical or in an accordingly…
Nunnerley, Joanne; Gupta, Swati; Snell, Deborah; King, Marcus
2017-05-01
A user-centred design was used to develop and test the feasibility of an immersive 3D virtual reality wheelchair training tool for people with spinal cord injury (SCI). A Wheelchair Training System was designed and modelled using the Oculus Rift headset and a Dynamic Control wheelchair joystick. The system was tested by clinicians and expert wheelchair users with SCI. Data from focus groups and individual interviews were analysed using a general inductive approach to thematic analysis. Four themes emerged: Realistic System, which described the advantages of a realistic virtual environment; a Wheelchair Training System, which described participants' thoughts on the wheelchair training applications; Overcoming Resistance to Technology, the obstacles to introducing technology within the clinical setting; and Working outside the Rehabilitation Bubble which described the protective hospital environment. The Oculus Rift Wheelchair Training System has the potential to provide a virtual rehabilitation setting which could allow wheelchair users to learn valuable community wheelchair use in a safe environment. Nausea appears to be a side effect of the system, which will need to be resolved before this can be a viable clinical tool. Implications for Rehabilitation Immersive virtual reality shows promising benefit for wheelchair training in a rehabilitation setting. Early engagement with consumers can improve product development.
Height effects in real and virtual environments.
Simeonov, Peter I; Hsiao, Hongwei; Dotson, Brian W; Ammons, Douglas E
2005-01-01
The study compared human perceptions of height, danger, and anxiety, as well as skin conductance and heart rate responses and postural instability effects, in real and virtual height environments. The 24 participants (12 men, 12 women), whose average age was 23.6 years, performed "lean-over-the-railing" and standing tasks on real and comparable virtual balconies, using a surround-screen virtual reality (SSVR) system. The results indicate that the virtual display of elevation provided realistic perceptual experience and induced some physiological responses and postural instability effects comparable to those found in a real environment. It appears that a simulation of elevated work environment in a SSVR system, although with reduced visual fidelity, is a valid tool for safety research. Potential applications of this study include the design of virtual environments that will help in safe evaluation of human performance at elevation, identification of risk factors leading to fall incidents, and assessment of new fall prevention strategies.
Fire training in a virtual-reality environment
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Jurgen; Bucken, Arno
2005-03-01
Although fire is very common in our daily environment - as a source of energy at home or as a tool in industry - most people cannot estimate the danger of a conflagration. Therefore it is important to train people in combating fire. Besides training with propane simulators or real fires and real extinguishers, fire training can be performed in virtual reality, which offers a pollution-free and fast way of training. In this paper we describe how to enhance a virtual-reality environment with real-time fire simulation and visualisation in order to establish a realistic emergency-training system. The presented approach supports extinguishing of the virtual fire, including recordable performance data as needed in teletraining environments. We will show how to get realistic impressions of fire using advanced particle simulation and how to use the advantages of particles to trigger states in a modified cellular automaton used for the simulation of fire behaviour. Using particle systems that interact with cellular automata, it is possible to simulate a developing, spreading fire and its reaction to different extinguishing agents like water, CO2 or oxygen. The methods proposed in this paper have been implemented and successfully tested on Cosimir, a commercial robot- and VR-simulation system.
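As a rough illustration of the cellular-automaton side of a fire model like the one described above, the following Python sketch spreads fire from burning cells to fuel neighbours with a fixed probability; an extinguishing agent could be modelled by lowering that probability locally. The grid size, states, and probability are invented for illustration and do not reproduce the Cosimir implementation.

```python
import random

FUEL, BURNING, BURNT = 0, 1, 2

def step(grid, p_spread=0.6):
    """One update of a toy fire cellular automaton: each burning cell ignites
    each fuel neighbour with probability p_spread, then burns out."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == BURNING:
                new[r][c] = BURNT
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == FUEL:
                        if random.random() < p_spread:
                            new[rr][cc] = BURNING
    return new

if __name__ == "__main__":
    random.seed(1)
    grid = [[FUEL] * 7 for _ in range(7)]
    grid[3][3] = BURNING                      # ignition point
    for _ in range(4):                        # watch the fire develop
        grid = step(grid)
    for row in grid:
        print("".join(".*#"[cell] for cell in row))   # . fuel, * burning, # burnt
```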
Enhance Learning on Software Project Management through a Role-Play Game in a Virtual World
ERIC Educational Resources Information Center
Maratou, Vicky; Chatzidaki, Eleni; Xenos, Michalis
2016-01-01
This article presents a role-play game for software project management (SPM) in a three-dimensional online multiuser virtual world. The Opensimulator platform is used for the creation of an immersive virtual environment that facilitates students' collaboration and realistic interaction, in order to manage unexpected events occurring during the…
Collaboration and Dialogue in Virtual Reality
ERIC Educational Resources Information Center
Jensen, Camilla Gyldendahl
2017-01-01
"Virtual reality" adds a new dimension to problem-based learning (PBL) environments in the architecture and building construction educations, where a realistic and lifelike presence in a building enables students to assess and discuss how the various solutions interact with each other. Combined with "Building Information…
Poeschl, Sandra; Doering, Nicola
2012-01-01
Virtual Reality technology offers great possibilities for Cognitive Behavioral Therapy of fear of public speaking: Clients can be exposed to virtual fear-triggering stimuli (exposure) and are able to role-play in virtual environments, training social skills to overcome their fear. Usually, prototypical audience behavior (neutral, social and anti-social) serves as the stimulus in virtual training sessions, although there is a significant lack of theoretical basis regarding typical audience behavior. The study presented deals with the design of a realistic virtual presentation scenario. An audience (consisting of n=18 men and women) in an undergraduate seminar was observed during three frontal lecture sessions. The frequency of behavior on four nonverbal dimensions (eye contact, facial expression, gesture, and posture) was rated by means of a quantitative content analysis. Results show audience behavior patterns which seem to be typical in frontal lecture contexts, like friendly and neutral facial expressions. Additionally, combined and even synchronized behavioral patterns between participants who sit next to each other (like turning to the neighbor and starting to talk) were registered. The gathered data serve as an empirical design basis for a virtual audience to be used in virtual training applications that stimulate the experiences of the participants in a realistic manner, thereby improving the experienced presence in the training application.
The Effects of Virtual Weather on Presence
NASA Astrophysics Data System (ADS)
Wissmath, Bartholomäus; Weibel, David; Mast, Fred W.
In modern societies people tend to spend more time in front of computer screens than outdoors. Along with an increasing degree of realism displayed in digital environments, simulated weather appears more and more realistic and is more often implemented in digital environments. Research has found that the actual weather influences behavior and mood. In this paper we experimentally examine the effects of virtual weather on the sense of presence. We found that individuals (N=30) immerse more deeply in digital environments displaying fair weather conditions than in environments displaying bad weather. We also investigate whether virtual weather can influence behavior. The possible implications of these findings for presence theory as well as for digital environment designers will be discussed.
Lee, Jae M; Ku, Jeong H; Jang, Dong P; Kim, Dong H; Choi, Young H; Kim, In Y; Kim, Sun I
2002-06-01
The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology enabled us to use virtual reality (VR) for the treatment of the fear of public speaking. There have been two techniques used to construct a virtual environment for the treatment of the fear of public speaking: model-based and movie-based. Virtual audiences and virtual environments made by model-based technique are unrealistic and unnatural. The movie-based technique has a disadvantage in that each virtual audience cannot be controlled respectively, because all virtual audiences are included in one moving picture file. To address this disadvantage, this paper presents a virtual environment made by using image-based rendering (IBR) and chroma keying simultaneously. IBR enables us to make the virtual environment realistic because the images are stitched panoramically with the photos taken from a digital camera. And the use of chroma keying allows a virtual audience to be controlled individually. In addition, a real-time capture technique was applied in constructing the virtual environment to give the subjects more interaction, in that they can talk with a therapist or another subject.
Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments
Slater, Mel
2009-01-01
In this paper, I address the question as to why participants tend to respond realistically to situations and events portrayed within an immersive virtual reality system. The idea is put forward, based on the experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is ‘being there’, often called ‘presence’, the qualia of having a sensation of being in a real place. We call this place illusion (PI). Second, plausibility illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi the participant knows for sure that they are not ‘there’ and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, the overall credibility of the scenario being depicted in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality. PMID:19884149
Investigations of a Complex, Realistic Task: Intentional, Unsystematic, and Exhaustive Experimenters
ERIC Educational Resources Information Center
McElhaney, Kevin W.; Linn, Marcia C.
2011-01-01
This study examines how students' experimentation with a virtual environment contributes to their understanding of a complex, realistic inquiry problem. We designed a week-long, technology-enhanced inquiry unit on car collisions. The unit uses new technologies to log students' experimentation choices. Physics students (n = 148) in six diverse high…
Immersive Learning Technologies: Realism and Online Authentic Learning
ERIC Educational Resources Information Center
Herrington, Jan; Reeves, Thomas C.; Oliver, Ron
2007-01-01
The development of immersive learning technologies in the form of virtual reality and advanced computer applications has meant that realistic creations of simulated environments are now possible. Such simulations have been used to great effect in training in the military, air force, and in medical training. But how realistic do problems need to be…
Report to Congress on Sustainable Ranges, 2011
2011-07-01
Interoperation of live participants and their operational systems. • Realistic LVC representations of non-participant friendly warfighting capabilities ... across the full range of military operations (ROMO). • Realistic LVC representations of opposing forces (OPFOR), neutral, and factional entities that ... entities. • Suitable representations of the real-world environment where the warfighting capabilities exist. (Table 2-2: Live, Virtual, and ...)
Remote counseling using HyperMirror quasi space-sharing system
NASA Astrophysics Data System (ADS)
Hashimoto, Sayuri; Morikawa, Osamu; Hashimoto, Nobuyuki; Maesako, Takanori
2008-08-01
In the modern information society, networks are getting faster, costs are getting lower, and displays are getting clearer. Today, just about anyone can easily use precise, dynamic image distribution systems in their everyday life. Now, the question is how to give the benefits of network systems to the local community, as well as to each individual. This study was designed to use communication with realistic sensations to examine the effectiveness of remote individual counseling intervention in reducing depression, anxiety and stress in child-rearing mothers. Three child-rearing mothers residing in the city of Osaka each received one session of remote counseling intervention. The results showed an alleviation of stress related to child-rearing, i.e., a reduction in state anxiety, depression and subjective stress related to child-rearing. Moreover, the experimental demonstration employed a HyperMirror system capable of presenting visual and auditory images similar to reality, in order to provide the counselees with realistic sensations. While the voice communication environment was poor, the remote counseling allowed for the communication of sensory information, i.e., skinship that communicated information related to assurance/peace of mind, and auditory information, i.e., a whispering voice in which signals of affection were transmitted; the realistic sensation contributed to a reduction in stress levels. The positive effects of the intervention were confirmed through a pre- and post-intervention study. The results suggested the need to conduct future studies to confirm the mid- and long-term improvements caused by the intervention, as well as the need to improve the voice transmission environment.
Jang, Dong P; Ku, Jeong H; Choi, Young H; Wiederhold, Brenda K; Nam, San W; Kim, In Y; Kim, Sun I
2002-09-01
Virtual reality therapy (VRT), based on this sophisticated technology, has recently been used in the treatment of subjects diagnosed with acrophobia, a disorder that is characterized by marked anxiety upon exposure to heights and avoidance of heights. Conventional VR systems for the treatment of acrophobia have limitations: overly costly devices or somewhat unrealistic graphic scenes. The goal of this study was to develop an inexpensive and more realistic virtual environment (VE) in which to perform exposure therapy for acrophobia. It is based on a personal computer and a virtual scene of a bungee-jump tower in the middle of a large city. The virtual scenario includes an open lift surrounded by props beside a tower, which allows the patient to feel a sense of heights. The effectiveness of the VE was evaluated through the clinical treatment of a subject who was suffering from a fear of heights. As a result, this VR environment proved effective and realistic for overcoming acrophobia, according not only to the comparison of a variety of questionnaires before and after treatment but also to the subject's comments that the VE seemed to evoke more fearful feelings than the real situation.
InSight: An innovative multimedia training tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seidel, B.R.; Crites, D.C.; Forsmann, J.H.
1996-05-01
InSight is an innovative computer-based multimedia training tool that provides a navigable virtual environment and links to related information. It provides training and guidance for touring and observing operations at any facility or site in a realistic virtual environment. This presentation identifies unique attributes of InSight and describes the initial application at ANL-West. A brief description of the development of this tool, production steps, and an onscreen demonstration of its operation are also provided.
Physical environment virtualization for human activities recognition
NASA Astrophysics Data System (ADS)
Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen
2015-05-01
Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time-consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to serve as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.
A Low-cost System for Generating Near-realistic Virtual Actors
NASA Astrophysics Data System (ADS)
Afifi, Mahmoud; Hussain, Khaled F.; Ibrahim, Hosny M.; Omar, Nagwa M.
2015-06-01
Generating virtual actors is one of the most challenging fields in computer graphics. The reconstruction of realistic virtual actors has attracted attention from both academic researchers and the film industry seeking to generate human-like virtual actors. Many movies have featured human-like virtual actors, where the audience cannot distinguish between real and virtual actors. The synthesis of realistic virtual actors is considered a complex process. Many techniques are used to generate a realistic virtual actor; however, they usually require expensive hardware equipment. In this paper, a low-cost system that generates near-realistic virtual actors is presented. The facial features of the real actor are blended with a virtual head that is attached to the actor's body. Compared with other techniques for generating virtual actors, the proposed system is low cost, requiring only a single camera that records the scene without any expensive hardware equipment. The results show that the system generates good near-realistic virtual actors that can be used in many applications.
Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C.; Poizner, Howard; Liu, Thomas T.
2014-01-01
Hyperscanning, an emerging technique in which data from multiple interacting subjects’ brains are simultaneously recorded, has become an increasingly popular way to address complex topics, such as “theory of mind.” However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible Playstation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D printed plastic models. Control of both scanners’ operation is initiated by a VBS2 plugin to sync scanner time to the known time within the VR environment. Our modifications include: • modification of the game controller to be MRI compatible; • design of the VBS2 virtual environment for cooperative interactions; • syncing of the two MRI machines for simultaneous recording. PMID:26150964
Virtual operating room for team training in surgery.
Abelson, Jonathan S; Silverman, Elliott; Banfelder, Jason; Naides, Alexandra; Costa, Ricardo; Dakin, Gregory
2015-09-01
We proposed to develop a novel virtual reality (VR) team training system. The objective of this study was to determine the feasibility of creating a VR operating room to simulate a surgical crisis scenario and to evaluate the simulator for construct and face validity. We modified ICE STORM (Integrated Clinical Environment; Systems, Training, Operations, Research, Methods), a VR-based system capable of modeling a variety of health care personnel and environments. ICE STORM was used to simulate a standardized surgical crisis scenario, whereby participants needed to correct 4 elements responsible for loss of laparoscopic visualization. The construct and face validity of the environment were measured. Thirty-three participants completed the VR simulation. Attendings completed the simulation in less time than trainees (201 vs 271 seconds, P = .032). Participants felt the training environment was realistic and had a favorable impression of the simulation. All participants felt the workload of the simulation was low. Creation of a VR-based operating room for team training in surgery is feasible and can afford a realistic team training environment. Copyright © 2015 Elsevier Inc. All rights reserved.
Virtual humans and formative assessment to train diagnostic skills in bulimia nervosa.
Gutiérrez-Maldonado, José; Ferrer-Garcia, Marta; Pla, Joana; Andrés-Pueyo, Antonio
2014-01-01
Carrying out a diagnostic interview requires skills that need to be taught in a controlled environment. Virtual Reality (VR) environments are increasingly used in the training of professionals, as they offer the most realistic alternative while not requiring students to face situations for which they are not yet prepared. The results of training in diagnostic skills can also be generalized to any other situation in which effective communication skills play a major role. Our aim in this study has been to develop a formative assessment procedure in order to increase the effectiveness of virtual learning simulation systems, and then to assess their efficacy.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J
2014-01-01
Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed to improve the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad, or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures of task performance. Kinect motion tracking resulted in lower performance compared with classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity with established input methods.
NASA Astrophysics Data System (ADS)
Demir, I.
2015-12-01
Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting teaching and learning of concepts in the atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow accessing and visualizing large-scale data on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts. It provides a visually striking platform with realistic terrain and weather information and water simulation. The web-based simulation system provides an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users.
Virtual reality training improves students' knowledge structures of medical concepts.
Stevens, Susan M; Goldsmith, Timothy E; Summers, Kenneth L; Sherstyuk, Andrei; Kihmm, Kathleen; Holten, James R; Davis, Christopher; Speitel, Daniel; Maris, Christina; Stewart, Randall; Wilks, David; Saland, Linda; Wax, Diane; Panaiotis; Saiki, Stanley; Alverson, Dale; Caudell, Thomas P
2005-01-01
Virtual environments can provide training that is difficult to achieve under normal circumstances. Medical students can work on high-risk cases in a realistic, time-critical environment, where they practice skills in a cognitively demanding and emotionally compelling situation. Research from cognitive science has shown that as students acquire domain expertise, their semantic organization of core domain concepts becomes more similar to that of an expert. In the current study, we hypothesized that students' knowledge structures would become more expert-like as a result of their diagnosing and treating a patient experiencing a hematoma within a virtual environment. Forty-eight medical students diagnosed and treated a hematoma case within a fully immersive virtual environment. Students' semantic organization of 25 case-related concepts was assessed prior to and after training. Students' knowledge structures became more integrated and more similar to an expert knowledge structure of the concepts as a result of the learning experience. The methods used here for eliciting, representing, and evaluating knowledge structures offer a sensitive and objective means of evaluating student learning in virtual environments and medical simulations.
Role of virtual reality for cerebral palsy management.
Weiss, Patrice L Tamar; Tirosh, Emanuel; Fehlings, Darcy
2014-08-01
Virtual reality is the use of interactive simulations to present users with opportunities to perform in virtual environments that appear, sound, and less frequently, feel similar to real-world objects and events. Interactive computer play refers to the use of a game in which a child interacts and plays with virtual objects in a computer-generated environment. Because of their distinctive attributes, which provide ecologically realistic and motivating opportunities for active learning, these technologies have been used in pediatric rehabilitation over the past 15 years. The ability of virtual reality to create opportunities for active repetitive motor/sensory practice adds to its potential for neuroplasticity and learning in individuals with neurologic disorders. The objectives of this article are to provide an overview of how virtual reality and gaming are used clinically, to present the results of several example studies that demonstrate their use in research, and to briefly remark on future developments. © The Author(s) 2014.
The scientific research potential of virtual worlds.
Bainbridge, William Sims
2007-07-27
Online virtual worlds, electronic environments where people can work and interact in a somewhat realistic manner, have great potential as sites for research in the social, behavioral, and economic sciences, as well as in human-centered computer science. This article uses Second Life and World of Warcraft as two very different examples of current virtual worlds that foreshadow future developments, introducing a number of research methodologies that scientists are now exploring, including formal experimentation, observational ethnography, and quantitative analysis of economic markets or social networks.
Digital evaluation of sitting posture comfort in human-vehicle system under Industry 4.0 framework
NASA Astrophysics Data System (ADS)
Tao, Qing; Kang, Jinsheng; Sun, Wenlei; Li, Zhaobo; Huo, Xiao
2016-09-01
Most previous studies on the vibration ride comfort of the human-vehicle system have focused on only one or two aspects of the investigation. A hybrid approach is described that integrates various investigation methods in both a real environment and a virtual environment. The real experimental environment includes a WBV (whole-body vibration) test, questionnaires on human subjective sensation, and motion capture. The virtual experimental environment includes a theoretical calculation on a simplified 5-DOF human body vibration model, vibration simulation and analysis within the ADAMS/Vibration™ module, and digital human biomechanics and occupational health analysis in Jack software. While the real experimental environment provides realistic and accurate test results, it also serves as the core and validation for the virtual experimental environment. The virtual experimental environment takes full advantage of currently available vibration simulation and digital human modelling software, and makes it possible to evaluate sitting posture comfort in a human-vehicle system with various human anthropometric parameters. How this digital evaluation system for car seat comfort design fits into the Industry 4.0 framework is also proposed.
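For readers unfamiliar with lumped-parameter ride-comfort models, the following minimal sketch shows what a simplified 5-DOF human body vibration model of the kind mentioned above typically looks like: a chain of lumped masses and springs whose undamped natural frequencies follow from the generalized eigenproblem K v = ω² M v. All masses and stiffnesses are made-up illustrative values, not parameters from this study.

```python
import numpy as np

def chain_matrices(masses, stiffnesses):
    """Mass and stiffness matrices for a lumped mass-spring chain.

    masses      : [m1..mn] in kg (m1 sits on the seat/base)
    stiffnesses : [k1..kn] in N/m; k1 connects m1 to the base,
                  k_i (i > 1) connects m_i to m_(i-1)
    """
    n = len(masses)
    M = np.diag(np.asarray(masses, dtype=float))
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += stiffnesses[i]
        if i + 1 < n:
            K[i, i] += stiffnesses[i + 1]
            K[i, i + 1] = K[i + 1, i] = -stiffnesses[i + 1]
    return M, K

def natural_frequencies_hz(M, K):
    """Undamped natural frequencies (Hz) from K v = w^2 M v."""
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.abs(eigvals))) / (2 * np.pi)

# Hypothetical 5-DOF seated-occupant parameters (kg, N/m); illustrative only
masses = [12.0, 8.0, 28.0, 6.0, 5.0]
stiffnesses = [90e3, 160e3, 180e3, 120e3, 100e3]
M, K = chain_matrices(masses, stiffnesses)
print(natural_frequencies_hz(M, K))
```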
The HyperLeda project en route to the astronomical virtual observatory
NASA Astrophysics Data System (ADS)
Golev, V.; Georgiev, V.; Prugniel, Ph.
2002-07-01
HyperLeda (Hyper-Linked Extragalactic Databases and Archives) aims to study the evolution of galaxies, their kinematics and stellar populations, and the structure of the Local Universe. HyperLeda is involved in catalogue and software production, data mining, and massive data processing. The products are served to the community through web mirrors. The development of HyperLeda is distributed between different sites and is based on the background experience of the LEDA and Hypercat databases. The HyperLeda project is positioned both within the European iAstro collaboration and as a unique database for studies of the physics of extragalactic objects.
Business Japanese, a HyperCard Simulation.
ERIC Educational Resources Information Center
Saito-Abbott, Yoshiko; Abbott, Thomas
This paper describes Business Japanese (BJ), a HyperCard based tutorial designed as courseware for use in a third-year Japanese course at the University of Texas, Austin (UTA). A major objective was to develop good courseware based on proven language learning theory that would integrate theory, practice, and technology. BJ stresses a realistic and…
Parallel-distributed mobile robot simulator
NASA Astrophysics Data System (ADS)
Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo
1996-06-01
The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way, the system learns and grows. It is very important that such a simulation be time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot, and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.
ERIC Educational Resources Information Center
Franetovic, Marija
2012-01-01
Current educational initiatives encourage the use of authentic learning environments to realistically prepare students for jobs in a constantly changing world. Many students of the Millennial generation may be social media savvy. However, what can be said about learning conditions and student readiness for active, reflective and collaborative…
ERIC Educational Resources Information Center
Alvarez, Nahum; Sanchez-Ruiz, Antonio; Cavazza, Marc; Shigematsu, Mika; Prendinger, Helmut
2015-01-01
The use of three-dimensional virtual environments in training applications supports the simulation of complex scenarios and realistic object behaviour. While these environments have the potential to provide an advanced training experience to students, it is difficult to design and manage a training session in real time due to the number of…
Development of virtual environment for treating acrophobia.
Ku, J; Jang, D; Shin, M; Jo, H; Ahn, H; Lee, J; Cho, B; Kim, S I
2001-01-01
Virtual Reality (VR) is a new technology that enables humans to communicate with computers. It allows the user to see, hear, feel, and interact in a three-dimensional virtual world created graphically. Virtual Reality Therapy (VRT), based on this sophisticated technology, has recently been used in the treatment of subjects diagnosed with acrophobia, a disorder characterized by marked anxiety upon exposure to heights, avoidance of heights, and a resulting interference in functioning. Conventional virtual reality systems for the treatment of acrophobia are limited in that they are based on over-costly devices or somewhat unrealistic graphic scenes. The goal of this study was to develop an inexpensive and more realistic virtual environment for the exposure therapy of acrophobia. We constructed two types of virtual environment. One consists of a bungee-jump tower in the middle of a city; it includes an open lift, surrounded by props beside the tower, that allows the patient to feel a sense of height. The other is composed of diving boards of various heights; it provides a view of a lower diving board and of people swimming in the pool, serving as stimuli upon exposure to heights.
3D virtual environment of Taman Mini Indonesia Indah in a web
NASA Astrophysics Data System (ADS)
Wardijono, B. A.; Wardhani, I. P.; Chandra, Y. I.; Pamungkas, B. U. G.
2018-05-01
Taman Mini Indonesia Indah, known as TMII, is the largest culture-based recreational park in Indonesia. The park covers 250 acres and contains traditional houses from the various provinces of Indonesia. The official TMII website describes the traditional houses, but the information available to the public is limited. To provide more detailed information about TMII to the public, this research aims to create and develop virtual traditional houses as 3D graphics models and present them via a website. Virtual Reality (VR) technology was used to display the visualization of TMII and its surrounding environment. This research used Blender software to create the 3D models and Unity3D software to build virtual reality models that can be shown on the web. The research successfully created 33 virtual traditional houses representing the provinces of Indonesia. The textures of the traditional houses were taken from the originals to make the virtual houses realistic. The result of this research is the TMII website, including virtual traditional houses that can be displayed through a web browser. The website consists of virtual environment scenes, and internet users can walk through and navigate inside the scenes.
Gokeler, Alli; Bisschop, Marsha; Myer, Gregory D; Benjaminse, Anne; Dijkstra, Pieter U; van Keeken, Helco G; van Raay, Jos J A M; Burgerhof, Johannes G M; Otten, Egbert
2016-07-01
The purpose of this study was to evaluate the influence of immersion in a virtual reality environment on knee biomechanics in patients after ACL reconstruction (ACLR). It was hypothesized that virtual reality techniques aimed at changing attentional focus would influence altered knee flexion angle, knee extension moment, and peak vertical ground reaction force (vGRF) in patients following ACLR. Twenty athletes following ACLR and 20 healthy controls (CTRL) performed a step-down task in both a non-virtual reality environment and a virtual reality environment displaying a pedestrian traffic scene. A motion analysis system and force plates were used to measure kinematics and kinetics during the step-down task to analyse each single-leg landing. No significant main effect of environment was found for knee flexion excursion (n.s.). Significant interaction effects between environment and group were found for vGRF (P = 0.004), knee moment (P < 0.001), knee angle at peak vGRF (P = 0.01), and knee flexion excursion (P = 0.03). The virtual reality environment had a larger effect on knee biomechanics in patients after ACLR than in controls. Patients after ACLR immersed in the virtual reality environment demonstrated knee joint biomechanics that approximated those of CTRL. The results of this study indicate that a realistic virtual reality scenario may distract patients after ACLR from conscious motor control. Application of clinically available technology may aid current rehabilitation programmes in targeting altered movement patterns after ACLR. Diagnostic study, Level III.
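The outcome variables reported here (peak vGRF, knee angle at peak vGRF, knee flexion excursion) can be extracted from synchronized force-plate and motion-capture time series in a few lines. The sketch below is illustrative only and is not the authors' processing pipeline; the toy signals are hypothetical.

```python
import numpy as np

def landing_outcomes(vgrf, knee_flexion, body_weight_n):
    """Extract peak vGRF and knee angle at peak vGRF from synchronized signals.

    vgrf          : (T,) vertical ground reaction force in newtons
    knee_flexion  : (T,) knee flexion angle in degrees, same sampling as vgrf
    body_weight_n : subject body weight in newtons, for normalization
    """
    vgrf = np.asarray(vgrf, dtype=float)
    i_peak = int(np.argmax(vgrf))                  # sample of the impact peak
    return {
        "peak_vGRF_bw": vgrf[i_peak] / body_weight_n,          # in body weights
        "knee_angle_at_peak_deg": float(knee_flexion[i_peak]),
        "knee_flexion_excursion_deg": float(np.max(knee_flexion) - knee_flexion[0]),
    }

# Toy signals: a 1 s landing sampled at 1 kHz (hypothetical data)
t = np.linspace(0, 1, 1000)
vgrf = 700 + 1200 * np.exp(-((t - 0.15) / 0.04) ** 2)   # impact peak around 0.15 s
knee = 10 + 50 * np.clip(t / 0.4, 0, 1)                 # flexing from 10 to 60 deg
print(landing_outcomes(vgrf, knee, body_weight_n=700.0))
```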
Thundercloud: Domain specific information security training for the smart grid
NASA Astrophysics Data System (ADS)
Stites, Joseph
In this paper, we describe a cloud-based virtual smart grid test bed, ThunderCloud, which is intended to be used for domain-specific security training applicable to the smart grid environment. The test bed consists of virtual machines connected using a virtual internal network. ThunderCloud is remotely accessible, allowing students to undertake educational exercises online. We also describe a series of practical exercises that we have developed for providing domain-specific training using ThunderCloud. The training exercises and attacks are designed to be realistic and to reflect known vulnerabilities and attacks reported in the smart grid environment. We were able to use ThunderCloud to offer practical domain-specific security training for the smart grid environment to computer science students at little or no cost to the department and no risk to any real networks or systems.
LVC interaction within a mixed-reality training system
NASA Astrophysics Data System (ADS)
Pollock, Brice; Winer, Eliot; Gilbert, Stephen; de la Cruz, Julio
2012-03-01
The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed-reality LVC training systems can enable live and virtual trainees to interact as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to empower LVC interaction in a reconfigurable, mixed-reality environment. This system was developed and tested in an immersive, reconfigurable, mixed-reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, to overcome LVC interaction challenges and to serve as a test bed for cutting-edge technology to meet future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and custom-developed game engines. Evaluation involving military-trained personnel found this system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real time. This system achieves rare LVC interaction within multiple physical and virtual immersive environments for training in real time across many distributed systems.
2014-01-01
Virtual worlds (VWs), in which participants navigate as avatars through three-dimensional, computer-generated, realistic-looking environments, are emerging as important new technologies for distance health education. However, there is relatively little documented experience using VWs for international healthcare training. The Geneva Foundation for Medical Education and Research (GFMER) conducted a VW training for healthcare professionals enrolled in a GFMER training course. This paper describes the development, delivery, and results of a pilot project undertaken to explore the potential of VWs as an environment for distance healthcare education for an international audience that has generally limited access to conventionally delivered education. PMID:24555833
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated from panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, an elevator hall, a room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
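The automated network-data generation described above can be illustrated with a minimal sketch: given the capture positions of the panoramas (assumed here to be already estimated, since the paper's own estimation step is not reproduced), each scene is linked to its nearest neighbours to form the graph that drives scene-to-scene navigation. The neighbour count and distance threshold are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def build_scene_network(positions, k=2, max_dist=10.0):
    """Link each panorama to its k nearest neighbours within max_dist (metres).

    positions : (N, 3) array of panorama capture positions.
    Returns a sorted list of undirected edges (i, j) describing the VR scene graph.
    """
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    edges = set()
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)
        d[i] = np.inf                      # ignore the self-distance
        for j in np.argsort(d)[:k]:        # k closest panoramas
            if d[j] <= max_dist:
                edges.add(tuple(sorted((i, int(j)))))
    return sorted(edges)

# Example: four panoramas captured along a corridor (positions in metres)
corridor = [(0, 0, 0), (3, 0, 0), (6, 0, 0), (6, 4, 0)]
print(build_scene_network(corridor))
# -> [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)] with the default settings
```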
Designing 3 Dimensional Virtual Reality Using Panoramic Image
NASA Astrophysics Data System (ADS)
Wan Abd Arif, Wan Norazlinawati; Wan Ahmad, Wan Fatimah; Nordin, Shahrina Md.; Abdullah, Azrai; Sivapalan, Subarna
The demand to improve the quality of presentation in the knowledge-sharing field is high in order to keep pace with rapidly evolving technology. The need for technology-based learning and training led to the idea of developing an Oil and Gas Plant Virtual Environment (OGPVE) for the benefit of our future. A panoramic virtual-reality-based learning environment is essential to help educators overcome the limitations of traditional technical writing lessons. Virtual reality helps users understand better by providing simulations of real-world and hard-to-reach environments with a high degree of realism and interactivity. Thus, in order to create courseware that achieves this objective, accurate images of the intended scenarios must be acquired. The panorama shows the OGPVE and helps users form ideas about what they have learned. This paper discusses part of the development of the panoramic virtual reality. The important phases for developing a successful panoramic image are image acquisition and image stitching (mosaicing). The combination of wide field-of-view (FOV) and close-up images used in this panoramic development is also discussed.
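The stitching (mosaicing) phase discussed above can be prototyped with off-the-shelf tools. The sketch below uses OpenCV's high-level Stitcher purely as an illustration; the paper does not state which stitching software was used, and the input file names are hypothetical.

```python
import cv2

def stitch_panorama(image_paths, out_path="panorama.jpg"):
    """Mosaic a set of overlapping photographs into one panoramic image."""
    images = [cv2.imread(p) for p in image_paths]
    if any(img is None for img in images):
        raise FileNotFoundError("One or more input images could not be read.")

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)   # feature matching + blending
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"Stitching failed with status code {status}")

    cv2.imwrite(out_path, panorama)
    return panorama

# Hypothetical input: a sweep of overlapping plant photographs
# stitch_panorama(["plant_01.jpg", "plant_02.jpg", "plant_03.jpg"])
```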
Characteristic analysis and simulation for polysilicon comb micro-accelerometer
NASA Astrophysics Data System (ADS)
Liu, Fengli; Hao, Yongping
2008-10-01
A high force update rate is a key factor for achieving high-performance haptic rendering, which imposes a stringent real-time requirement upon the execution environment of the haptic system. This requirement confines the haptic system to simplified environments in order to reduce the computational cost of haptic rendering algorithms. In this paper, we present a novel "hyper-threading" architecture consisting of several threads for haptic rendering. The high force update rate is achieved with a relatively large computation time interval for each haptic loop. The proposed method was tested and proved to be effective in experiments on a virtual-wall prototype haptic system via the Delta Haptic Device.
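As a rough illustration of the idea of decoupling a fast force-update loop from a slower environment-update thread (the paper's actual architecture and the Delta Haptic Device API are not reproduced here), the following sketch runs a high-rate haptic loop that reads the latest wall position published by a low-rate simulation thread. All numbers are placeholders.

```python
import threading
import time

class SharedState:
    """Latest simulation output shared between the two threads."""
    def __init__(self):
        self.lock = threading.Lock()
        self.wall_x = 0.05        # virtual wall position (m), updated slowly
        self.stiffness = 2000.0   # contact stiffness (N/m)

def simulation_thread(state, stop):
    """Low-rate thread: updates the virtual environment (here, a moving wall)."""
    while not stop.is_set():
        with state.lock:
            state.wall_x = 0.05 + (0.01 * time.time()) % 0.02  # placeholder motion
        time.sleep(0.02)          # ~50 Hz environment update

def haptic_thread(state, stop, rate_hz=1000):
    """High-rate thread: computes a penalty force against the latest wall position."""
    period = 1.0 / rate_hz
    probe_x = 0.06                # would come from the haptic device encoders
    while not stop.is_set():
        with state.lock:
            wall_x, k = state.wall_x, state.stiffness
        penetration = max(0.0, probe_x - wall_x)
        force = -k * penetration  # a real system would send this to the device
        time.sleep(period)        # stand-in for the device's force-update tick

stop = threading.Event()
state = SharedState()
threads = [threading.Thread(target=simulation_thread, args=(state, stop)),
           threading.Thread(target=haptic_thread, args=(state, stop))]
for t in threads:
    t.start()
time.sleep(0.2)
stop.set()
for t in threads:
    t.join()
```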
Bombari, Dario; Schmid Mast, Marianne; Canadas, Elena; Bachmann, Manuel
2015-01-01
The goal of the present review is to explain how immersive virtual environment technology (IVET) can be used for the study of social interactions and how the use of virtual humans in immersive virtual environments can advance research and application in many different fields. Researchers studying individual differences in social interactions are typically interested in keeping the behavior and the appearance of the interaction partner constant across participants. With IVET researchers have full control over the interaction partners, can standardize them while still keeping the simulation realistic. Virtual simulations are valid: growing evidence shows that indeed studies conducted with IVET can replicate some well-known findings of social psychology. Moreover, IVET allows researchers to subtly manipulate characteristics of the environment (e.g., visual cues to prime participants) or of the social partner (e.g., his/her race) to investigate their influences on participants’ behavior and cognition. Furthermore, manipulations that would be difficult or impossible in real life (e.g., changing participants’ height) can be easily obtained with IVET. Beside the advantages for theoretical research, we explore the most recent training and clinical applications of IVET, its integration with other technologies (e.g., social sensing) and future challenges for researchers (e.g., making the communication between virtual humans and participants smoother). PMID:26157414
A high-fidelity virtual environment for the study of paranoia.
Broome, Matthew R; Zányi, Eva; Hamborg, Thomas; Selmanovic, Elmedin; Czanner, Silvester; Birchwood, Max; Chalmers, Alan; Singh, Swaran P
2013-01-01
Psychotic disorders carry social and economic costs for sufferers and society. Recent evidence highlights the risk posed by urban upbringing and social deprivation in the genesis of paranoia and psychosis. Evidence based psychological interventions are often not offered because of a lack of therapists. Virtual reality (VR) environments have been used to treat mental health problems. VR may be a way of understanding the aetiological processes in psychosis and increasing psychotherapeutic resources for its treatment. We developed a high-fidelity virtual reality scenario of an urban street scene to test the hypothesis that virtual urban exposure is able to generate paranoia to a comparable or greater extent than scenarios using indoor scenes. Participants (n = 32) entered the VR scenario for four minutes, after which time their degree of paranoid ideation was assessed. We demonstrated that the virtual reality scenario was able to elicit paranoia in a nonclinical, healthy group and that an urban scene was more likely to lead to higher levels of paranoia than a virtual indoor environment. We suggest that this study offers evidence to support the role of exposure to factors in the urban environment in the genesis and maintenance of psychotic experiences and symptoms. The realistic high-fidelity street scene scenario may offer a useful tool for therapists.
A usability assessment on a virtual reality system for panic disorder treatment
NASA Astrophysics Data System (ADS)
Lee, Jaelin; Kawai, Takashi; Yoshida, Nahoko; Izawa, Shuhei; Nomura, Shinobu; Eames, Douglas; Kaiya, Hisanobu
2008-02-01
The authors have developed a virtual reality exposure system that reflects the Japanese culture and environment. Specifically, the system focuses on the subway environment, which is the environment that most patients receiving treatment for panic disorder at hospitals in Tokyo, Japan tend to avoid. The system is PC based and features realistic video images and highly interactive functionality. In particular, the system enables instant transformation of the virtual space and allows situations to be freely customized according to the condition and symptoms expressed by each patient. Positive results achieved in therapy assessments with patients with panic disorder accompanied by agoraphobia indicate the possibility of indoor treatment. Full utilization of the available functionality requires that the interactive functions be easily operable. Accordingly, there appears to be a need for usability testing to determine whether a therapist can operate the system naturally while focusing fully on treatment. In this paper, the configuration of the virtual reality exposure system focusing on the subway environment is outlined. Further, the results of usability tests assessing how naturally it can be operated while focusing fully on treatment are described.
Shadow Mode Assessment Using Realistic Technologies for the National Airspace (SMART NAS)
NASA Technical Reports Server (NTRS)
Kopardekar, Parimal H.
2014-01-01
Develop a simulation and modeling capability that: (a) assesses multiple parallel universes, (b) accepts data feeds, (c) allows for a live, virtual, constructive distributed environment, and (d) enables integrated examinations of concepts, algorithms, technologies, and National Airspace System (NAS) architectures.
NASA Technical Reports Server (NTRS)
Lindsey, Patricia F.
1993-01-01
In its search for higher-level computer interfaces and more realistic electronic simulations for measurement and spatial analysis in human factors design, NASA at MSFC is evaluating the functionality of virtual reality (VR) technology. Virtual reality simulation generates a three-dimensional environment in which the participant appears to be enveloped. It is a type of interactive simulation in which humans are not only involved, but included. Virtual reality technology is still in the experimental phase, but it appears to be the next logical step after computer-aided three-dimensional animation in transferring the viewer from a passive to an active role in experiencing and evaluating an environment. There is great potential for using this new technology when designing environments for more successful interaction, both with the environment and with another participant in a remote location. At the University of North Carolina, a VR simulation of the planned Sitterson Hall revealed a flaw in the building's design that had not been observed during examination of the more traditional building plan simulations on paper and on computer-aided design (CAD) workstations. The virtual environment enables multiple participants in remote locations to come together and interact with one another and with the environment. Each participant is capable of seeing herself and the other participants and of interacting with them within the simulated environment.
Virtual Gaming Simulation in Nursing Education: A Focus Group Study.
Verkuyl, Margaret; Hughes, Michelle; Tsui, Joyce; Betts, Lorraine; St-Amant, Oona; Lapum, Jennifer L
2017-05-01
The use of serious gaming in a virtual world is a novel pedagogical approach in nursing education. A virtual gaming simulation was implemented in a health assessment class that focused on mental health and interpersonal violence. The study's purpose was to explore students' experiences of the virtual gaming simulation. Three focus groups were conducted with a convenience sample of 20 first-year nursing students after they completed the virtual gaming simulation. Analysis yielded five themes: (a) Experiential Learning, (b) The Learning Process, (c) Personal Versus Professional, (d) Self-Efficacy, and (e) Knowledge. Virtual gaming simulation can provide experiential learning opportunities that promote engagement and allow learners to acquire and apply new knowledge while practicing skills in a safe and realistic environment. [J Nurs Educ. 2017;56(5):274-280.]. Copyright 2017, SLACK Incorporated.
Lorenzo Álvarez, R; Pavía Molina, J; Sendra Portero, F
2018-03-20
Three-dimensional virtual environments enable very realistic ludic, social, cultural, and educational activities to be carried out online. Second Life® is one of the most well-known virtual environments, and numerous training activities have been developed in it for healthcare professionals, although none about radiology. The aim of this article is to present the technical resources and educational activities that Second Life® offers for training in radiology, based on our experience since 2011 with diverse training activities for undergraduate and postgraduate students. Second Life® is useful for carrying out radiology training activities online through remote access in an attractive scenario, especially for current generations of students and residents. More than 800 participants have reported in individual satisfaction surveys that their experiences with this approach have been interesting and useful for their training in radiology. Copyright © 2018 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
Blend Shape Interpolation and FACS for Realistic Avatar
NASA Astrophysics Data System (ADS)
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila
2015-03-01
The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans, and advanced 3D tools has imparted further impetus towards the rapid advancement of complex virtual human facial models. With face-to-face communication being the most natural form of human interaction, facial animation systems have become increasingly attractive in the information technology era for a wide range of applications. The production of computer-animated movies using synthetic actors remains a challenging issue. A facial expression carries the signature of happiness, sadness, anger, cheerfulness, and so on. The mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expression. Facial expressions, being a very complex and important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach that integrates blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face, while the FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions, namely anger, happiness, sadness, and fear, with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute towards the development of virtual reality and game environments in computer-aided graphics animation systems.
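A minimal numerical sketch of the blend-shape interpolation step: each expression target contributes a weighted vertex offset from the neutral mesh. The mapping from FACS action units to blend weights is assumed here for illustration and is not taken from the paper.

```python
import numpy as np

def blend_shapes(neutral, targets, weights):
    """Linear blend-shape interpolation.

    neutral : (V, 3) neutral face vertices
    targets : dict of name -> (V, 3) target expression vertices
    weights : dict of name -> blend weight in [0, 1]
    """
    face = neutral.astype(float).copy()
    for name, w in weights.items():
        face += w * (targets[name] - neutral)   # add the weighted displacement
    return face

# Toy example with a 3-vertex "face"
neutral = np.zeros((3, 3))
targets = {
    "happy": np.array([[0.0, 0.2, 0.0]] * 3),   # hypothetical target meshes
    "angry": np.array([[0.0, -0.1, 0.0]] * 3),
}
# Hypothetical FACS-derived weights (e.g. AU12 lip-corner puller driving "happy")
print(blend_shapes(neutral, targets, {"happy": 0.7, "angry": 0.0}))
```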
Tangible display systems: bringing virtual surfaces into the real world
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2012-03-01
We are developing tangible display systems that enable natural interaction with virtual surfaces. Tangible display systems are based on modern mobile devices that incorporate electronic image displays, graphics hardware, tracking systems, and digital cameras. Custom software allows the orientation of the device and the position of the observer to be tracked in real time. Using this information, realistic images of surfaces with complex textures and material properties, illuminated by environment-mapped lighting, can be rendered to the screen at interactive rates. Tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. In this way, tangible displays allow virtual surfaces to be observed and manipulated as naturally as real ones, with the added benefit that surface geometry and material properties can be modified in real time. We demonstrate the utility of tangible display systems in four application areas: material appearance research; computer-aided appearance design; enhanced access to digital library and museum collections; and new tools for digital artists.
The Fine Art of Teaching Functions
ERIC Educational Resources Information Center
Davis, Anna A.; Joswick, Candace
2018-01-01
The correct use of visual perspective is one of the main reasons that virtual reality environments and realistic works of art look lifelike. Geometric construction techniques used by artists to achieve an accurate perspective effect were developed during the Renaissance. With the rise of computer graphics, translating the geometric ideas of 600…
Using virtual reality to explore self-regulation in high-risk settings.
Kniffin, Tracey C; Carlson, Charles R; Ellzey, Antonio; Eisenlohr-Moul, Tory; Beck, Kelly Battle; McDonald, Renee; Jouriles, Ernest N
2014-10-01
Virtual reality (VR) models allow investigators to explore high-risk situations carefully in the laboratory using physiological assessment strategies and controlled conditions not available in field settings. This article introduces the use of a virtual experience to examine the influence of self-regulatory skills training on female participants' reactions to a high-risk encounter with an aggressive male. Sixty-three female participants were recruited for the study. Demographic data indicated that 54% of the participants were not currently in a relationship, 36.5% were in a committed relationship, and 9.5% were occasionally dating. After obtaining informed consent, participants were assigned randomly to either a diaphragmatic breathing training condition or an attention control condition. Results indicated that both groups rated the virtual environment as equally realistic; the aggressive advances of the male were also perceived as equally real across the two experimental groups. Physiological data indicated that there were no differences between the groups on respiration or cardiovascular measures during baseline or during the VR task. After the VR experience, however, the participants in the breathing training condition had lower respiration rates and higher heart rate variability measures than those in the control condition. The results suggest that VR platforms provide a realistic and challenging environment to examine how self-regulation procedures may influence behavioral outcomes. Real-time dynamic engagement in a virtual setting affords investigators with an opportunity to evaluate the utility of self-regulatory skills training for improving safety in situations where there are uncertain and risky outcomes. © The Author(s) 2014.
High Resolution Visualization Applied to Future Heavy Airlift Concept Development and Evaluation
NASA Technical Reports Server (NTRS)
FordCook, A. B.; King, T.
2012-01-01
This paper explores the use of high resolution 3D visualization tools for exploring the feasibility and advantages of future military cargo airlift concepts and evaluating compatibility with existing and future payload requirements. Realistic 3D graphic representations of future airlifters are immersed in rich, supporting environments to demonstrate concepts of operations to key personnel for evaluation, feedback, and development of critical joint support. Accurate concept visualizations are reviewed by commanders, platform developers, loadmasters, soldiers, scientists, engineers, and key principal decision makers at various stages of development. The insight gained through the review of these physically and operationally realistic visualizations is essential to refining design concepts to meet competing requirements in a fiscally conservative defense finance environment. In addition, highly accurate 3D geometric models of existing and evolving large military vehicles are loaded into existing and proposed aircraft cargo bays. In this virtual aircraft test-loading environment, materiel developers, engineers, managers, and soldiers can realistically evaluate the compatibility of current and next-generation airlifters with proposed cargo.
Decentralized real-time simulation of forest machines
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Adam, Frank; Hoffmann, Katharina; Rossmann, Juergen; Kraemer, Michael; Schluse, Michael
2000-10-01
To develop realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as a simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and a realistic simulation of the complex motion of the vehicle, even in rough terrain, in order to be able to use the simulator for operator training under close-to-reality conditions. The realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of education of the driver. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e. the real control devices such as the joysticks or the on-board computer system that controls the crane, the aggregate, etc. In addition, the fusion of the on-board computer system and the simulation system is realized by means of sensors, i.e. digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information from the control devices and the sensors. So, while the driver is practicing, the instructor can immerse into the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.
NASA Astrophysics Data System (ADS)
Demir, I.
2014-12-01
Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g. flooding and flood damage) and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system and demonstrates the capabilities of the system for various visualization and interaction modes.
Vermeeren, Günter; Joseph, Wout; Martens, Luc
2013-04-01
Assessing the whole-body absorption in a human in a realistic environment requires a statistical approach covering all possible exposure situations. This article describes the development of a statistical multi-path exposure method for heterogeneous realistic human body models. The method is applied for the 6-year-old Virtual Family boy (VFB) exposed to the GSM downlink at 950 MHz. It is shown that the whole-body SAR does not differ significantly over the different environments at an operating frequency of 950 MHz. Furthermore, the whole-body SAR in the VFB for multi-path exposure exceeds the whole-body SAR for worst-case single-incident plane wave exposure by 3.6%. Moreover, the ICNIRP reference levels are not conservative with the basic restrictions in 0.3% of the exposure samples for the VFB at the GSM downlink of 950 MHz. The homogeneous spheroid with the dielectric properties of the head suggested by the IEC underestimates the absorption compared to realistic human body models. Moreover, the variation in the whole-body SAR for realistic human body models is larger than for homogeneous spheroid models. This is mainly due to the heterogeneity of the tissues and the irregular shape of the realistic human body model compared to homogeneous spheroid human body models. Copyright © 2012 Wiley Periodicals, Inc.
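The statistical character of multi-path exposure can be conveyed with a schematic Monte Carlo sketch: each sample sums several plane-wave contributions with random amplitudes and phases, and the resulting received power is compared with the single-plane-wave case. This is only an illustration of why a statistical treatment over many exposure realizations is needed; it does not compute SAR and uses none of the paper's simulation machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def multipath_exposure_samples(n_samples=10_000, n_paths=12, e_single=1.0):
    """Schematic multi-path exposure statistic.

    Each sample sums n_paths plane-wave contributions with random phases and
    Rayleigh-distributed amplitudes (total mean power matched to a single
    plane wave), and returns the received power relative to that single wave.
    """
    amp = rng.rayleigh(scale=e_single / np.sqrt(2 * n_paths),
                       size=(n_samples, n_paths))
    phase = rng.uniform(0, 2 * np.pi, size=(n_samples, n_paths))
    e_total = np.abs((amp * np.exp(1j * phase)).sum(axis=1))
    return (e_total / e_single) ** 2          # relative received power

rel_power = multipath_exposure_samples()
print("median relative power:", np.median(rel_power))
print("95th percentile:", np.percentile(rel_power, 95))
print("fraction exceeding the single-wave case:", (rel_power > 1.0).mean())
```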
Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator.
Spillmann, Jonas; Tuchschmid, Stefan; Harders, Matthias
2013-04-01
Passive haptics, also known as tactile augmentation, denotes the use of a physical counterpart to a virtual environment to provide tactile feedback. Employing passive haptics can result in more realistic touch sensations than those from active force feedback, especially for rigid contacts. However, changes in the virtual environment would necessitate modifications of the physical counterparts. In recent work space warping has been proposed as one solution to overcome this limitation. In this technique virtual space is distorted such that a variety of virtual models can be mapped onto one single physical object. In this paper, we propose as an extension adaptive space warping; we show how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model. We developed methods to warp different organ geometries onto one physical mock-up, to handle different mechanical behaviors of the virtual patients, and to allow interactive modifications of the virtual structures, while the physical counterparts remain unchanged. Various practical examples underline the wide applicability of our approach. To the best of our knowledge this is the first practical usage of such a technique in the specific context of interactive medical training.
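The core idea of mapping different virtual geometries onto one physical mock-up can be sketched with a simple landmark-based warp. The simulator's actual warping function is not described in this abstract; inverse-distance weighting of paired landmark displacements is used below purely as an illustration, with made-up coordinates.

```python
import numpy as np

def warp_point(p_virtual, virtual_landmarks, physical_landmarks, power=2.0):
    """Map a point in the virtual patient onto the physical mock-up.

    Pairs each virtual landmark with its physical counterpart and applies a
    distance-weighted blend of the landmark displacements to p_virtual.
    """
    p = np.asarray(p_virtual, dtype=float)
    vl = np.asarray(virtual_landmarks, dtype=float)
    pl = np.asarray(physical_landmarks, dtype=float)
    d = np.linalg.norm(vl - p, axis=1)
    if np.any(d < 1e-9):                        # exactly on a landmark
        return pl[np.argmin(d)]
    w = 1.0 / d**power
    w /= w.sum()
    displacement = (w[:, None] * (pl - vl)).sum(axis=0)
    return p + displacement

# Toy example: the virtual organ is shifted and slightly larger than the mock-up
virtual_lm = [[0, 0, 0], [10, 0, 0], [0, 10, 0]]
physical_lm = [[1, 0, 0], [9, 0, 0], [1, 9, 0]]
print(warp_point([5, 5, 0], virtual_lm, physical_lm))
```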
Williamon, Aaron; Aufegger, Lisa; Eiholzer, Hubert
2014-01-01
Musicians typically rehearse far away from their audiences and in practice rooms that differ significantly from the concert venues in which they aspire to perform. Due to the high costs and inaccessibility of such venues, much current international music training lacks repeated exposure to realistic performance situations, with students learning all too late (or not at all) how to manage performance stress and the demands of their audiences. Virtual environments have been shown to be an effective training tool in the fields of medicine and sport, offering practitioners access to real-life performance scenarios but with lower risk of negative evaluation and outcomes. The aim of this research was to design and test the efficacy of simulated performance environments in which conditions of “real” performance could be recreated. Advanced violin students (n = 11) were recruited to perform in two simulations: a solo recital with a small virtual audience and an audition situation with three “expert” virtual judges. Each simulation contained back-stage and on-stage areas, life-sized interactive virtual observers, and pre- and post-performance protocols designed to match those found at leading international performance venues. Participants completed a questionnaire on their experiences of using the simulations. Results show that both simulated environments offered realistic experience of performance contexts and were rated particularly useful for developing performance skills. For a subset of 7 violinists, state anxiety and electrocardiographic data were collected during the simulated audition and an actual audition with real judges. Results display comparable levels of reported state anxiety and patterns of heart rate variability in both situations, suggesting that responses to the simulated audition closely approximate those of a real audition. The findings are discussed in relation to their implications, both generalizable and individual-specific, for performance training. PMID:24550856
Virtual reality for emergency training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altinkemer, K.
1995-12-31
Virtual reality is a sequence of scenes generated by a computer in response to the five different senses: sight, sound, taste, touch, and smell. Other senses that can be used in virtual reality include balance, pheromonal, and immunological senses. Application areas include leisure and entertainment, medicine, architecture, engineering, manufacturing, and training. Virtual reality is especially important when it is used for emergency training and management of natural disasters, including earthquakes, floods, tornadoes, and other situations that are hard to emulate. Classical training methods for these extraordinary environments lack the realistic surroundings that virtual reality can provide. For virtual reality to be a successful training tool, the design needs to address certain aspects, such as how realistic the virtual reality should be and how much fixed cost is entailed in setting up the virtual reality trainer. There are also pricing questions regarding the price per training session on the virtual reality trainer and the appropriate training time length(s).
Fang, Yibin; Yu, Ying; Cheng, Jiyong; Wang, Shengzhang; Wang, Kuizhong; Liu, Jian-Min; Huang, Qinghai
2013-01-01
Adjusting hemodynamics via flow diverter (FD) implantation is emerging as a novel method of treating cerebral aneurysms. However, most previous FD-related hemodynamic studies were based on virtual FD deployment, which may produce different hemodynamic outcomes than realistic (in vivo) FD deployment. We compared hemodynamics between virtual FD and realistic FD deployments in rabbit aneurysm models using computational fluid dynamics (CFD) simulations. FDs were implanted for aneurysms in 14 rabbits. Vascular models based on rabbit-specific angiograms were reconstructed for CFD studies. Real FD configurations were reconstructed based on micro-CT scans after sacrifice, while virtual FD configurations were constructed with SolidWorks software. Hemodynamic parameters before and after FD deployment were analyzed. According to the metal coverage (MC) of implanted FDs calculated based on micro-CT reconstruction, 14 rabbits were divided into two groups (A, MC >35%; B, MC <35%). Normalized mean wall shear stress (WSS), relative residence time (RRT), inflow velocity, and inflow volume in Group A were significantly different (P<0.05) from virtual FD deployment, but pressure was not (P>0.05). The normalized mean WSS in Group A after realistic FD implantation was significantly lower than that of Group B. All parameters in Group B exhibited no significant difference between realistic and virtual FDs. This study confirmed MC-correlated differences in hemodynamic parameters between realistic and virtual FD deployment. PMID:23823503
NASA's Hybrid Reality Lab: One Giant Leap for Full Dive
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2017-01-01
This presentation demonstrates how NASA is using consumer VR headsets, game engine technology, and NVIDIA's GPUs to create highly immersive future training systems augmented with extremely realistic haptic feedback, sound, and additional sensory information, and how these can be used to improve the engineering workflow. Included in this presentation are an environment simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR; integration of consumer VR headsets with the Active Response Gravity Offload System; and a space habitat architectural evaluation tool. Attendees will learn how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.
Interactive browsing of 3D environment over the Internet
NASA Astrophysics Data System (ADS)
Zhang, Cha; Li, Jin
2000-12-01
In this paper, we describe a system for wandering in a realistic environment over the Internet. The environment is captured by the concentric mosaic, compressed via the reference block coder (RBC), and accessed and delivered over the Internet through the virtual media (Vmedia) access protocol. Capturing the environment through the concentric mosaic is easy: we mount a camera at the end of a level beam and shoot images as the beam rotates. The huge dataset of the concentric mosaic is then compressed through the RBC, which is specifically designed for both high compression efficiency and just-in-time (JIT) rendering. Through the JIT rendering function, only a portion of the RBC bitstream is accessed, decoded and rendered for each virtual view. A multimedia communication protocol, the Vmedia protocol, is then proposed to deliver the compressed concentric mosaic data over the Internet. Only the bitstream segments corresponding to the current view are streamed over the Internet. Moreover, the delivered bitstream segments are managed by a local Vmedia cache so that frequently used bitstream segments need not be streamed over the Internet repeatedly, and Vmedia is able to handle an RBC bitstream larger than its memory capacity. A Vmedia concentric mosaic interactive browser is developed in which the user can freely wander in a realistic environment, e.g., rotate around, walk forward/backward and sidestep, even under a tight bandwidth of 33.6 kbps.
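As a rough illustration of the Vmedia caching idea described above (this is not the authors' implementation; the class name, byte-based capacity handling and the fetch_segment callback are assumptions), a minimal Python sketch of a least-recently-used cache for compressed bitstream segments:

    # Minimal sketch: an LRU cache for compressed bitstream segments, so that segments
    # needed repeatedly for just-in-time rendering are not re-streamed over the Internet.
    # fetch_segment() is a hypothetical stand-in for the real Vmedia network request.
    from collections import OrderedDict

    class SegmentCache:
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.segments = OrderedDict()  # segment_id -> bytes

        def get(self, segment_id, fetch_segment):
            if segment_id in self.segments:
                self.segments.move_to_end(segment_id)            # mark as recently used
                return self.segments[segment_id]
            data = fetch_segment(segment_id)                     # stream over the network
            while self.used + len(data) > self.capacity and self.segments:
                _, evicted = self.segments.popitem(last=False)   # evict least recently used
                self.used -= len(evicted)
            self.segments[segment_id] = data
            self.used += len(data)
            return data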
Learning English with "The Sims": Exploiting Authentic Computer Simulation Games for L2 Learning
ERIC Educational Resources Information Center
Ranalli, Jim
2008-01-01
With their realistic animation, complex scenarios and impressive interactivity, computer simulation games might be able to provide context-rich, cognitively engaging virtual environments for language learning. However, simulation games designed for L2 learners are in short supply. As an alternative, could games designed for the mass-market be…
Cyber Operations Virtual Environment
2010-09-01
automated system affects reliance on that system (e.g., Dzindolet, Peterson, Pomranky, Pierce, & Beck, 2003; Lee & Moray, 1994; Lee & See, 2004)... described a need for instruction to enable interactive, realistic training (Hershey, 2008): Network Warfare and Operations Distributed Training... knowledge or needs beyond this shallow level (Beck, Stern, & Haugsjaa, 1996). The immediate feedback model employed in behaviorist learning has...
Levy
1996-08-01
New interactive computer technologies are having a significant influence on medical education, training, and practice. The newest innovation in computer technology, virtual reality, allows an individual to be immersed in a dynamic computer-generated, three-dimensional environment and can provide realistic simulations of surgical procedures. A new virtual reality hysteroscope passes through a sensing device that synchronizes movements with a three-dimensional model of a uterus. Force feedback is incorporated into this model, so the user actually experiences the collision of an instrument against the uterine wall or the sensation of the resistance or drag of a resectoscope as it cuts through a myoma in a virtual environment. A variety of intrauterine pathologies and procedures are simulated, including hyperplasia, cancer, resection of a uterine septum, polyp, or myoma, and endometrial ablation. This technology will be incorporated into comprehensive training programs that will objectively assess hand-eye coordination and procedural skills. It is possible that by incorporating virtual reality into hysteroscopic training programs, a decrease in the learning curve and the number of complications presently associated with the procedures may be realized. Prospective studies are required to assess these potential benefits.
Intraoperative virtual brain counseling
NASA Astrophysics Data System (ADS)
Jiang, Zhaowei; Grosky, William I.; Zamorano, Lucia J.; Muzik, Otto; Diaz, Fernando
1997-06-01
Our objective is to offer online real-time intelligent guidance to the neurosurgeon. Different from traditional image-guidance technologies that offer intra-operative visualization of medical images or atlas images, virtual brain counseling goes one step further: it can distinguish related brain structures and provide information about them intra-operatively. Virtual brain counseling is the foundation for surgical planning optimization and on-line surgical reference. It can provide a warning system that alerts the neurosurgeon if the chosen trajectory will pass through eloquent brain areas. In order to fulfill this objective, tracking techniques are involved for intra-operativity. Most importantly, a 3D virtual brain environment, different from traditional 3D digitized atlases, is an object-oriented model of the brain that stores information about different brain structures together with their related information. An object-oriented hierarchical hyper-voxel space (HHVS) is introduced to integrate anatomical and functional structures. Spatial queries based on position of interest, line segment of interest, and volume of interest are introduced in this paper. The virtual brain environment is integrated with existing surgical pre-planning and intra-operative tracking systems to provide information for planning optimization and on-line surgical guidance. The neurosurgeon is alerted automatically if the planned treatment affects any critical structures. Architectures such as HHVS and algorithms such as spatial querying, normalizing, and warping are presented in the paper. A prototype has shown that the virtual brain is intuitive in its hierarchical 3D appearance. It also showed that HHVS, as the key structure for virtual brain counseling, efficiently integrates multi-scale brain structures based on their spatial relationships. This is a promising development for optimization of treatment plans and online surgical intelligent guidance.
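To make the trajectory-checking idea concrete, a simplified sketch (not the HHVS implementation; structures are reduced to hypothetical axis-aligned bounding boxes with made-up coordinates) of querying which labelled structures a planned entry-to-target segment passes through:

    # Illustrative only: check a planned surgical trajectory against labelled structures.
    # The real system uses a hierarchical hyper-voxel space; here each structure is a box.
    import numpy as np

    structures = {
        "motor_cortex": (np.array([10.0, 20.0, 30.0]), np.array([25.0, 35.0, 45.0])),  # (min, max) in mm
        "ventricle":    (np.array([30.0, 30.0, 20.0]), np.array([40.0, 45.0, 35.0])),
    }

    def structures_on_trajectory(entry, target, n_samples=200):
        """Return the structures intersected by the line segment entry->target."""
        hits = set()
        for t in np.linspace(0.0, 1.0, n_samples):
            p = entry + t * (target - entry)
            for name, (lo, hi) in structures.items():
                if np.all(p >= lo) and np.all(p <= hi):
                    hits.add(name)
        return hits

    # Example: warn if the planned path crosses an eloquent area.
    path_hits = structures_on_trajectory(np.array([0.0, 0.0, 0.0]), np.array([20.0, 30.0, 40.0]))
    if "motor_cortex" in path_hits:
        print("Warning: trajectory passes through motor cortex")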
Challenges and solutions for realistic room simulation
NASA Astrophysics Data System (ADS)
Begault, Durand R.
2002-05-01
Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.
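A toy sketch of the threshold-based culling this enables (the threshold curve, its slope and the example levels are assumptions, not the paper's measured data, which depend on azimuth, arrival time and source type):

    # Minimal sketch: keep only early reflections above an assumed audibility threshold,
    # so a real-time auralization engine skips reflections that are unlikely to be heard.
    def audible_reflections(reflections, threshold_at_0ms=-10.0, slope_db_per_ms=-0.4):
        """reflections: list of (arrival_ms, level_db re direct sound, azimuth_deg)."""
        kept = []
        for arrival_ms, level_db, azimuth in reflections:
            threshold = threshold_at_0ms + slope_db_per_ms * arrival_ms  # hypothetical curve
            if level_db > threshold:
                kept.append((arrival_ms, level_db, azimuth))
        return kept

    # The -35 dB reflection arriving 40 ms after the direct sound is culled here.
    print(audible_reflections([(5, -5, 30), (40, -35, 90), (15, -14, -45)]))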
Okrainec, A; Farcas, M; Henao, O; Choy, I; Green, J; Fotoohi, M; Leslie, R; Wight, D; Karam, P; Gonzalez, N; Apkarian, J
2009-01-01
The Veress needle is the most commonly used technique for creating the pneumoperitoneum at the start of a laparoscopic surgical procedure. Inserting the Veress needle correctly is crucial since errors can cause significant harm to patients. Unfortunately, this technique can be difficult to teach since surgeons rely heavily on tactile feedback while advancing the needle through the various layers of the abdominal wall. This critical step in laparoscopy, therefore, can be challenging for novice trainees to learn without adequate opportunities to practice in a safe environment with no risk of injury to patients. To address this issue, we have successfully developed a prototype of a virtual reality haptic needle insertion simulator using the tactile feedback of 22 surgeons to set realistic haptic parameters. A survey of these surgeons concluded that our device appeared and felt realistic, and could potentially be a useful tool for teaching the proper technique of Veress needle insertion.
Supporting the Outdoor Classroom: An Archaeo-Astronomy Project
ERIC Educational Resources Information Center
Brown, Daniel; Francis, Robert; Alder, Andy
2013-01-01
Field trips and the outdoor classroom are a vital part of many areas of education. Ideally, the content should be taught within a realistic environment rather than just by providing a single field trip at the end of a course. The archaeo-astronomy project located at Nottingham Trent University envisages the development of a virtual environment…
Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
NASA Astrophysics Data System (ADS)
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
2010-01-01
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interaction and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a see-through video head-mounted display (HMD) for visualization, whereas the user's movement navigation is achieved in the real world with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some complex software issues, which are discussed in the paper.
[What do virtual reality tools bring to child and adolescent psychiatry?]
Bioulac, S; de Sevin, E; Sagaspe, P; Claret, A; Philip, P; Micoulaud-Franchi, J A; Bouvard, M P
2018-06-01
Virtual reality is a relatively new technology that enables individuals to immerse themselves in a virtual world. It offers several advantages, including a more realistic, lifelike environment that may allow subjects to "forget" they are being assessed, better participation, and increased generalization of learning. Moreover, the virtual reality system can provide multimodal stimuli, such as visual and auditory stimuli, and can also be used to evaluate the patient's multimodal integration and to aid rehabilitation of cognitive abilities. The use of virtual reality to treat various psychiatric disorders in adults (phobic anxiety disorders, post-traumatic stress disorder, eating disorders, addictions…) is supported by numerous studies. Similar research for children and adolescents is lagging behind. Virtual reality may be particularly beneficial to children, who often show great interest and considerable success in computer, console or videogame tasks. This article presents the main studies that have used virtual reality with children and adolescents suffering from psychiatric disorders. The use of virtual reality to treat anxiety disorders in adults is gaining popularity and its efficacy is supported by various studies, most of which attest to the significant efficacy of virtual reality exposure therapy (or in virtuo exposure). In children, studies have covered arachnophobia, social anxiety and school refusal phobia. Despite the limited number of studies, results are very encouraging for the treatment of anxiety disorders. Several studies have reported the clinical use of virtual reality technology for children and adolescents with autistic spectrum disorders (ASD). Extensive research has demonstrated the efficiency of these technologies as support tools for therapy, with work focused on communication and on learning and social imitation skills. Virtual reality is also well accepted by subjects with ASD. The virtual environment offers the opportunity to administer controlled tasks, such as typical neuropsychological tools, but in an environment much more like a standard classroom. The virtual reality classroom offers several advantages compared with classical tools, such as a more realistic and lifelike environment, while also recording various measures in standardized conditions. Most of the studies using a virtual classroom have found that children with Attention Deficit/Hyperactivity Disorder make significantly fewer correct hits and more commission errors compared with controls. The virtual classroom has proven to be a good clinical tool for the evaluation of attention in ADHD. For eating disorders, a cognitive behavioural therapy (CBT) program enhanced by a body-image-specific component using virtual reality techniques was shown to be more efficient than CBT alone; the body-image-specific component boosts efficiency and accelerates the CBT change process. Virtual reality is a relatively new technology and its application in child and adolescent psychiatry is recent. This technique is still in its infancy, and much work, including controlled trials, is needed before it can be introduced into routine clinical use. Virtual reality interventions should also investigate how newly acquired skills are transferred to the real world. At present, virtual reality can be considered a useful tool in the evaluation and treatment of child and adolescent disorders. Copyright © 2017 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Virtual reality simulation: basic concepts and use in endoscopic neurosurgery training.
Cohen, Alan R; Lohani, Subash; Manjila, Sunil; Natsupakpong, Suriya; Brown, Nathan; Cavusoglu, M Cenk
2013-08-01
Virtual reality simulation is a promising alternative to training surgical residents outside the operating room. It is also a useful aide to anatomic study, residency training, surgical rehearsal, credentialing, and recertification. Surgical simulation is based on a virtual reality with varying degrees of immersion and realism. Simulators provide a no-risk environment for harmless and repeatable practice. Virtual reality has three main components of simulation: graphics/volume rendering, model behavior/tissue deformation, and haptic feedback. The challenge of accurately simulating the forces and tactile sensations experienced in neurosurgery limits the sophistication of a virtual simulator. The limited haptic feedback available in minimally invasive neurosurgery makes it a favorable subject for simulation. Virtual simulators with realistic graphics and force feedback have been developed for ventriculostomy, intraventricular surgery, and transsphenoidal pituitary surgery, thus allowing preoperative study of the individual anatomy and increasing the safety of the procedure. The authors also present experiences with their own virtual simulation of endoscopic third ventriculostomy.
NASA Astrophysics Data System (ADS)
Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko
2007-02-01
In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.
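A minimal sketch of the central/peripheral acuity idea (this is not the proposed pixel-shift pipeline; the gaze-centred radius and down-sampling factor are arbitrary assumptions):

    # Illustrative only: compose a frame whose centre keeps full resolution while the
    # periphery is crudely down-sampled, mimicking high central and low peripheral acuity.
    import numpy as np

    def foveated_compose(image, center, radius_px, downsample=4):
        """image: H x W x 3 array; center: (row, col) of the central visual field."""
        h, w = image.shape[:2]
        low = image[::downsample, ::downsample]
        low = np.repeat(np.repeat(low, downsample, axis=0), downsample, axis=1)[:h, :w]
        rows, cols = np.ogrid[:h, :w]
        dist = np.sqrt((rows - center[0]) ** 2 + (cols - center[1]) ** 2)
        mask = (dist <= radius_px)[..., None]            # True inside the central field
        return np.where(mask, image, low)

    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    out = foveated_compose(frame, center=(240, 320), radius_px=120)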
A haptic interface for virtual simulation of endoscopic surgery.
Rosenberg, L B; Stredney, D
1996-01-01
Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.
Split2 Protein-Ligation Generates Active IL-6-Type Hyper-Cytokines from Inactive Precursors.
Moll, Jens M; Wehmöller, Melanie; Frank, Nils C; Homey, Lisa; Baran, Paul; Garbers, Christoph; Lamertz, Larissa; Axelrod, Jonathan H; Galun, Eithan; Mootz, Henning D; Scheller, Jürgen
2017-12-15
Trans-signaling of the major pro- and anti-inflammatory cytokines Interleukin (IL)-6 and IL-11 has the unique feature of virtually activating all cells of the body and is critically involved in chronic inflammation and regeneration. Hyper-IL-6 and Hyper-IL-11 are single-chain designer trans-signaling cytokines, in which the cytokine and soluble receptor units are trapped in one complex via a flexible peptide linker. Although Hyper-cytokines are essential tools to study trans-signaling in vitro and in vivo, the superior potency of these designer cytokines is accompanied by undesirable stress responses. To enable tailor-made generation of Hyper-cytokines, we developed inactive split-cytokine precursors adapted for posttranslational reassembly by split-intein-mediated protein trans-splicing (PTS). We identified cutting sites within IL-6 (E134/S135) and IL-11 (G116/S117) and obtained inactive split-Hyper-IL-6 and split-Hyper-IL-11 cytokine precursors. After fusion with split-inteins, PTS resulted in reconstitution of active Hyper-cytokines, which were efficiently secreted from transfected cells. Our strategy comprises the development of a background-free cytokine signaling system from reversibly inactivated precursor cytokines.
Anesthesiology training using 3D imaging and virtual reality
NASA Astrophysics Data System (ADS)
Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.
1996-04-01
Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.
Virtual Planetary Analysis Environment for Remote Science
NASA Technical Reports Server (NTRS)
Keely, Leslie; Beyer, Ross; Edwards, Laurence; Lees, David
2009-01-01
All of the data for NASA's current planetary missions and most data for field experiments are collected via orbiting spacecraft, aircraft, and robotic explorers. Mission scientists are unable to employ traditional field methods when operating remotely. We have developed a virtual exploration tool for remote sites with data analysis capabilities that extend human perception quantitatively and qualitatively. Scientists and mission engineers can use it to explore a realistic representation of a remote site. It also provides software tools to "touch" and "measure" remote sites with an immediacy that boosts scientific productivity and is essential for mission operations.
Teleoperation with virtual force feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.J.
1993-08-01
In this paper we describe an algorithm for generating virtual forces in a bilateral teleoperator system. The virtual forces are generated from a world model and are used to provide real-time obstacle avoidance and guidance capabilities. The algorithm requires that the slave's tool and every object in the environment be decomposed into convex polyhedral primitives. Intrusion distance and extraction vectors are then derived at every time step by applying Gilbert's polyhedra distance algorithm, which has been adapted for the task. This information is then used to determine the compression and location of nonlinear virtual spring-dampers whose total force is summed and applied to the manipulator/teleoperator system. Experimental results validate the whole approach, showing that it is possible to compute the algorithm and generate realistic, useful pseudo forces for a bilateral teleoperator system using standard VME bus hardware.
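As an illustration of the spring-damper step (the stiffness, damping and quadratic spring law are illustrative choices, not the paper's values; the intrusion depth and extraction vector would come from the adapted polyhedron distance routine):

    # Sketch only: turn an intrusion depth and extraction direction (e.g. from a GJK-style
    # polyhedron distance routine) into a nonlinear virtual spring-damper force.
    import numpy as np

    def virtual_force(intrusion_depth, extraction_dir, approach_velocity,
                      k=500.0, damping=20.0):
        """intrusion_depth: metres of overlap (0 if no contact);
           extraction_dir: unit vector pointing out of the obstacle;
           approach_velocity: tool velocity along -extraction_dir (m/s)."""
        if intrusion_depth <= 0.0:
            return np.zeros(3)
        spring = k * intrusion_depth ** 2               # nonlinear (quadratic) spring
        damper = damping * max(approach_velocity, 0.0)  # only damp motion going further in
        return (spring + damper) * np.asarray(extraction_dir)

    # Sum contributions from every primitive the tool intrudes into:
    total = sum(virtual_force(d, n, v) for d, n, v in
                [(0.01, (0.0, 0.0, 1.0), 0.05), (0.0, (1.0, 0.0, 0.0), 0.0)])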
A methodological, task-based approach to Procedure-Specific Simulations training.
Setty, Yaki; Salzman, Oren
2016-12-01
Procedure-Specific Simulations (PSS) are realistic 3D simulations that provide a platform to practice complete surgical procedures in a virtual-reality environment. While PSS have the potential to improve surgeons' proficiency, there are no existing standards or guidelines for structured PSS development. We employ a unique platform inspired by game design to develop three-dimensional virtual reality simulations of urethrovesical anastomosis during radical prostatectomy. 3D visualization is supported by stereo vision, providing a fully realistic view of the simulation. The software can be executed on any robotic surgery platform; specifically, we tested the simulation in a Windows environment on the RobotiX Mentor. Using the urethrovesical anastomosis simulation as a representative example, we present a task-based methodological approach to PSS training. The methodology provides tasks at increasing levels of difficulty, from a novice level of basic anatomy identification to an expert level that permits testing new surgical approaches. The modular methodology presented here can be easily extended to support more complex tasks. We foresee this methodology as a tool used to integrate PSS as a complementary training process for surgical procedures.
NASA Technical Reports Server (NTRS)
Murphy, James R.; Otto, Neil M.
2017-01-01
NASA's Unmanned Aircraft Systems Integration in the National Airspace System Project is conducting human-in-the-loop simulations and flight testing intended to reduce barriers associated with enabling routine airspace access for unmanned aircraft. The primary focus of these tests is the interaction of the unmanned aircraft pilot with the display of detect-and-avoid alerting and guidance information. The project's integrated test and evaluation team was charged with developing the test infrastructure. As with any development effort, compromises in the underlying system architecture and design were made to allow for the rapid prototyping and open-ended nature of the research. In order to accommodate these design choices, a distributed test environment was developed incorporating Live, Virtual, Constructive (LVC) concepts. The LVC components form the core infrastructure supporting simulation of UAS operations by integrating live and virtual aircraft in a realistic air traffic environment. This LVC infrastructure enables efficient testing by leveraging existing assets distributed across multiple NASA Centers. Using standard LVC concepts enables future integration with existing simulation infrastructure.
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm.
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A; Przekwas, Andrzej; Francis, Joseph T; Lytton, William W
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscles lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics.
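A small sketch of the proprioceptive population coding mentioned above (Gaussian tuning curves with hypothetical ranges, widths and peak rates; this is not the authors' code):

    # Illustrative only: encode a normalised muscle length as firing rates of a
    # proprioceptive population, and decode it back with a population-vector average.
    import numpy as np

    def encode_length(length, n_cells=32, lo=0.8, hi=1.2, sigma=0.02, peak_hz=100.0):
        preferred = np.linspace(lo, hi, n_cells)            # each cell's preferred length
        return peak_hz * np.exp(-0.5 * ((length - preferred) / sigma) ** 2)

    def decode_length(rates, n_cells=32, lo=0.8, hi=1.2):
        preferred = np.linspace(lo, hi, n_cells)
        return float(np.sum(rates * preferred) / np.sum(rates))

    rates = encode_length(1.05)
    print(decode_length(rates))   # approximately 1.05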
Measurable realistic image-based 3D mapping
NASA Astrophysics Data System (ADS)
Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.
2011-12-01
Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides the virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for automatic implementation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is the limitation of detailed coverage, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information of the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive with users and also creates an interesting immersive circumstance. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in terms of photos; topographic and terrain attributes, such as shapes and heights, are omitted. This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable realistic image-based (MRI) system can produce. The major contribution here is the implementation of measurable images on 3D maps to obtain various measurements from real scenes.
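For the stereo-measurement idea, a minimal sketch assuming an ideal rectified stereo pair (the focal length, baseline and principal point are placeholders, not MMS calibration values):

    # Illustrative only: recover a 3D point, and hence distances between objects, from the
    # pixel disparity of a feature matched in both geo-referenced images.
    import numpy as np

    def triangulate(u_left, v, u_right, focal_px=1500.0, baseline_m=1.2, cx=960.0, cy=540.0):
        """Return the 3D point in the left-camera frame for a matched pixel pair."""
        disparity = u_left - u_right
        if disparity <= 0:
            raise ValueError("disparity must be positive for a point in front of the rig")
        z = focal_px * baseline_m / disparity          # depth
        x = (u_left - cx) * z / focal_px
        y = (v - cy) * z / focal_px
        return np.array([x, y, z])

    # Distance between two measured points in the scene:
    p1 = triangulate(1020, 560, 990)
    p2 = triangulate(700, 500, 660)
    print(np.linalg.norm(p1 - p2), "metres")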
Model-based sensorimotor integration for multi-joint control: development of a virtual arm model.
Song, D; Lan, N; Loeb, G E; Gordon, J
2008-06-01
An integrated, sensorimotor virtual arm (VA) model has been developed and validated for simulation studies of control of human arm movements. Realistic anatomical features of shoulder, elbow and forearm joints were captured with a graphic modeling environment, SIMM. The model included 15 musculotendon elements acting at the shoulder, elbow and forearm. Muscle actions on joints were evaluated by SIMM-generated moment arms that were matched to experimentally measured profiles. The Virtual Muscle (VM) model contained an appropriate admixture of slow and fast twitch fibers with realistic physiological properties for force production. A realistic spindle model was embedded in each VM, with inputs of fascicle length and static and dynamic gamma controls (gamma_stat, gamma_dyn), and outputs of primary (Ia) and secondary (II) afferents. A piecewise linear model of the Golgi tendon organ (GTO) represented the ensemble sampling (Ib) of the total muscle force at the tendon. All model components were integrated into a Simulink block using a special software tool. The complete VA model was validated with open-loop simulation at discrete hand positions within the full range of alpha and gamma drives to extrafusal and intrafusal muscle fibers. The model behaviors were consistent with a wide variety of physiological phenomena. Spindle afferents were effectively modulated by fusimotor drives and hand positions of the arm. These simulations validated the VA model as a computational tool for studying arm movement control. The VA model is available to researchers at http://pt.usc.edu/cel.
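An illustrative piecewise-linear GTO mapping from tendon force to an ensemble Ib rate (the threshold, breakpoint, gains and saturation rate are assumed values, not the Virtual Arm parameters):

    # Sketch only: a piecewise-linear Golgi tendon organ model in the spirit of the GTO
    # element described above.
    def gto_ib_rate(tendon_force_n,
                    threshold_n=0.5, low_gain=8.0, breakpoint_n=20.0, high_gain=2.0,
                    max_rate_hz=250.0):
        if tendon_force_n <= threshold_n:
            return 0.0
        if tendon_force_n <= breakpoint_n:
            rate = low_gain * (tendon_force_n - threshold_n)
        else:
            rate = (low_gain * (breakpoint_n - threshold_n)
                    + high_gain * (tendon_force_n - breakpoint_n))
        return min(rate, max_rate_hz)                   # saturate at a maximum firing rate

    print([gto_ib_rate(f) for f in (0.0, 5.0, 20.0, 80.0)])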
Haptic interfaces: Hardware, software and human performance
NASA Technical Reports Server (NTRS)
Srinivasan, Mandayam A.
1995-01-01
Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.
Realistic Reflections for Marine Environments in Augmented Reality Training Systems
2009-09-01
List-of-figures excerpt: static backgrounds (Agua, Blue); ship textures used to generate reflections. Like virtual simulations, augmented reality trainers can be configured to meet specific training needs and can be restarted and reused to train... Many of the same methods outlined in the Full Reflection shader were reused for the Physics shader, including wave distortion, blurring and shadow.
Virtual reality in surgical training.
Lange, T; Indelicato, D J; Rosen, J M
2000-01-01
Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.
Efficacy of virtual reality in pedestrian safety research.
Deb, Shuchisnigdha; Carruth, Daniel W; Sween, Richard; Strawderman, Lesley; Garrison, Teena M
2017-11-01
Advances in virtual reality technology present new opportunities for human factors research in areas that are dangerous, difficult, or expensive to study in the real world. The authors developed a new pedestrian simulator using the HTC Vive head mounted display and Unity software. Pedestrian head position and orientation were tracked as participants attempted to safely cross a virtual signalized intersection (5.5 m). In 10% of 60 trials, a vehicle violated the traffic signal and in 10.84% of these trials, a collision between the vehicle and the pedestrian was observed. Approximately 11% of the participants experienced simulator sickness and withdrew from the study. Objective measures, including the average walking speed, indicate that participant behavior in VR matches published real world norms. Subjective responses indicate that the virtual environment was realistic and engaging. Overall, the study results confirm the effectiveness of the new virtual reality technology for research on full motion tasks. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Candioti, Lorenzo; Bauville, Arthur; Picazo, Suzanne; Mohn, Geoffroy; Kaus, Boris
2016-04-01
Hyper-extended magma-poor margins are characterized by extremely thinned crust and exhumation of partially serpentinized mantle. As this can act as a zone of weakness during a subsequent compression event, a hyper-extended margin can thus potentially facilitate subduction initiation. Hyper-extended margins are also found today as passive margins fringing the Atlantic and North Atlantic ocean, e.g., the Iberia and Newfoundland margins [1] and the Porcupine, Rockall and Hatton basins. It has been proposed in the literature that hyper-extension in the Alpine Tethys does not exceed ~600 km in width [2]. The geodynamical evolution of the Alpine and Atlantic passive margins is distinct: no subduction has yet been initiated in the North Atlantic, whereas the Alpine Tethys basin has undergone subduction. Here, we investigate the control of the presence of a hyper-extended margin on subduction initiation. We perform high-resolution 2D simulations considering realistic rheologies and temperature profiles for these locations. We systematically vary the length and thickness of the hyper-extended crust and serpentinized mantle to better understand the conditions for subduction initiation. References: [1] G. Manatschal. New models for evolution of magma-poor rifted margins based on a review of data and concepts from West Iberia and the Alps. Int J Earth Sci (Geol Rundsch) (2004); 432-466. [2] G. Mohn, G. Manatschal, M. Beltrando, I. Haupert. The role of rift-inherited hyper-extension in Alpine-type orogens. Terra Nova (2014); 347-353.
de Boer, I R; Wesselink, P R; Vervoorn, J M
2013-11-01
To describe the development and opportunities for implementation of virtual teeth with and without pathology for use in a virtual learning environment in dental education. The creation of virtual teeth begins by scanning a tooth with a cone beam CT. The resulting scan consists of multiple two-dimensional grey-scale images. The specially designed software program ColorMapEditor connects these two-dimensional images to create a three-dimensional tooth. With this software, any aspect of the tooth can be modified, including its colour, volume, shape and density, resulting in the creation of virtual teeth of any type. This article provides examples of realistic virtual teeth with and without pathology that can be used for dental education. ColorMapEditor offers infinite possibilities to adjust and add options for the optimisation of virtual teeth. Virtual teeth have unlimited availability for dental students, allowing them to practise as often as required. Virtual teeth can be made and adjusted to any shape with any type of pathology. Further developments in software and hardware technology are necessary to refine the ability to colour and shape the interior of the pulp chamber and surface of the tooth to enable not only treatment but also diagnostics and thus create a greater degree of realism. The creation and use of virtual teeth in dental education appears to be feasible but is still in development; it offers many opportunities for the creation of teeth with various pathologies, although an evaluation of its use in dental education is still required. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A Cooperative IDS Approach Against MPTCP Attacks
2017-06-01
physical testbeds in order to present a methodology that allows distributed IDSs (DIDS) to cooperate in a manner that permits effective detection of... reconstruct MPTCP subflows and detect malicious content. Next, we build physical testbeds in order to present a methodology that allows distributed IDSs... hypotheses on a more realistic testbed environment. • Developing a methodology to incorporate multiple IDSs, real and virtual, to be able to detect cross...
Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model.
Liu, Fang; Velikina, Julia V; Block, Walter F; Kijowski, Richard; Samsonov, Alexey A
2017-02-01
We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed ∼ 200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure.
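A toy sketch of the kind of multi-pool exchange modeling involved, reduced to one Euler step of longitudinal relaxation with two exchanging pools (relaxation times, pool sizes, exchange rate and time step are illustrative values, not MRiLab defaults):

    # Illustrative only: longitudinal relaxation with exchange between a free-water pool
    # and a macromolecular pool, stepped forward with explicit Euler integration.
    import numpy as np

    def longitudinal_step(Mz, M0, T1, k_fm, dt):
        """Mz, M0, T1: length-2 arrays (free, macromolecular); k_fm: free->macro rate (1/s)."""
        k_mf = k_fm * M0[0] / M0[1]              # reverse rate from detailed balance
        dMz = np.empty(2)
        dMz[0] = (M0[0] - Mz[0]) / T1[0] - k_fm * Mz[0] + k_mf * Mz[1]
        dMz[1] = (M0[1] - Mz[1]) / T1[1] + k_fm * Mz[0] - k_mf * Mz[1]
        return Mz + dt * dMz

    Mz = np.array([0.0, 0.0])                    # start from full saturation
    for _ in range(5000):                        # simulate 5 s of recovery
        Mz = longitudinal_step(Mz, M0=np.array([0.9, 0.1]), T1=np.array([1.2, 1.0]),
                               k_fm=2.0, dt=0.001)
    print(Mz)                                    # recovers towards the equilibrium pool sizes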
ERIC Educational Resources Information Center
Woodfield, Brian F.; Andrus, Merritt B.; Waddoups, Gregory L.; Moore, Melissa S.; Swan, Richard; Allen, Rob; Bodily, Greg; Andersen, Tricia; Miller, Jordan; Simmons, Bryon; Stanger, Richard
2005-01-01
A set of sophisticated and realistic laboratory simulations is created for use in freshman- and sophomore-level chemistry classes and laboratories called 'Virtual ChemLab'. A detailed assessment of student responses is provided and the simulation's pedagogical utility is described using the organic simulation.
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection, on Monte Carlo sampling methods, and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
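A minimal sketch of a Monte Carlo direct-lighting estimator for a single diffuse surface point and one rectangular area light (visible() stands in for a shadow-ray query; all scene quantities and the sample count are hypothetical, and this is not the document's sampling method):

    # Illustrative only: estimate direct lighting by sampling points on an area emitter,
    # weighting each sample by the geometry term and dividing by the area-sampling pdf.
    import numpy as np
    rng = np.random.default_rng(0)

    def direct_lighting(x, n, light_corner, light_edge_u, light_edge_v,
                        emitted_radiance, albedo, visible, n_samples=64):
        light_n = np.cross(light_edge_u, light_edge_v)
        area = float(np.linalg.norm(light_n))
        light_n = light_n / area
        total = 0.0
        for _ in range(n_samples):
            u, v = rng.random(2)
            y = light_corner + u * light_edge_u + v * light_edge_v   # sample on the light
            wi = y - x
            dist2 = float(wi @ wi)
            wi = wi / np.sqrt(dist2)
            cos_x = max(float(n @ wi), 0.0)
            cos_y = max(float(-light_n @ wi), 0.0)
            if cos_x > 0 and cos_y > 0 and visible(x, y):
                # emitted radiance * diffuse BRDF * geometry term, divided by pdf = 1/area
                total += emitted_radiance * (albedo / np.pi) * cos_x * cos_y / dist2 * area
        return total / n_samples

    estimate = direct_lighting(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                               np.array([-0.5, -0.5, 2.0]),
                               np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                               emitted_radiance=10.0, albedo=0.7,
                               visible=lambda x, y: True)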
ERIC Educational Resources Information Center
Aufdenspring, Gary; Aufdenspring, Deborah
1992-01-01
Describes how HyperCard software can be used to direct students to databases, applications, and explanations in an online environment. The use of HyperCard with other software is discussed; using HyperCard to set up tutorials is explained; and limitations are addressed, including the amount of memory needed and the speed of the hardware. (LRW)
An Analysis of Hardware-Assisted Virtual Machine Based Rootkits
2014-06-01
certain aspects of TPM implementation, just to name a few. HyperWall is an architecture proposed by Szefer and Lee to protect guest VMs from... The use of virtual machine (VM) technology has expanded rapidly since AMD and Intel implemented... Intel VT-x implementations of Blue Pill to identify commonalities in the respective versions' attack methodologies from both a functional and technical...
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
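For orientation, a one-dimensional staggered-grid FDTD update of the kind these 3D seismic/acoustic propagation models scale up to massively parallel machines (the medium properties, grid and source are placeholders, not the paper's configurations):

    # Illustrative only: 1D acoustic FDTD with pressure on integer points and particle
    # velocity on half points, excited by a soft Gaussian-pulse source.
    import numpy as np

    nx, nt = 400, 800
    dx, c, rho = 1.0, 340.0, 1.2
    dt = 0.5 * dx / c                              # satisfies the 1D CFL stability condition
    p = np.zeros(nx)                               # pressure
    v = np.zeros(nx - 1)                           # particle velocity

    for it in range(nt):
        v -= dt / (rho * dx) * (p[1:] - p[:-1])                  # velocity from pressure gradient
        p[1:-1] -= dt * rho * c ** 2 / dx * (v[1:] - v[:-1])     # pressure from velocity divergence
        p[nx // 4] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)    # source term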
Lemole, G Michael; Banerjee, P Pat; Luciano, Cristian; Neckrysh, Sergey; Charbel, Fady T
2007-07-01
Mastery of the neurosurgical skill set involves many hours of supervised intraoperative training. Convergence of political, economic, and social forces has limited neurosurgical resident operative exposure. There is need to develop realistic neurosurgical simulations that reproduce the operative experience, unrestricted by time and patient safety constraints. Computer-based, virtual reality platforms offer just such a possibility. The combination of virtual reality with dynamic, three-dimensional stereoscopic visualization, and haptic feedback technologies makes realistic procedural simulation possible. Most neurosurgical procedures can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure. We use the ImmersiveTouch (ImmersiveTouch, Inc., Chicago, IL) virtual reality platform, developed at the University of Illinois at Chicago, to simulate the task of ventriculostomy catheter placement as a proof-of-concept. Computed tomographic data are used to create a virtual anatomic volume. Haptic feedback offers simulated resistance and relaxation with passage of a virtual three-dimensional ventriculostomy catheter through the brain parenchyma into the ventricle. A dynamic three-dimensional graphical interface renders changing visual perspective as the user's head moves. The simulation platform was found to have realistic visual, tactile, and handling characteristics, as assessed by neurosurgical faculty, residents, and medical students. We have developed a realistic, haptics-based virtual reality simulator for neurosurgical education. Our first module recreates a critical component of the ventriculostomy placement task. This approach to task simulation can be assembled in a modular manner to reproduce entire neurosurgical procedures.
A rapid algorithm for realistic human reaching and its use in a virtual reality system
NASA Technical Reports Server (NTRS)
Aldridge, Ann; Pandya, Abhilash; Goldsby, Michael; Maida, James
1994-01-01
The Graphics Analysis Facility (GRAF) at JSC has developed a rapid algorithm for computing realistic human reaching. The algorithm was applied to GRAF's anthropometrically correct human model and used in a 3D computer graphics system and a virtual reality system. The nature of the algorithm and its uses are discussed.
Real-time interactive virtual tour on the World Wide Web (WWW)
NASA Astrophysics Data System (ADS)
Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi
2003-12-01
Web-based Virtual Tour has become a desirable and in-demand application, yet it is challenging due to the nature of the web application's running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. Traditional approaches, such as VRML, require a labor-intensive 3D modeling process, high bandwidth, and high computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and the virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only a fixed-point look-around and zooming in and out rather than 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. The Web-based Virtual Tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space using several snapshots of conventional photos.
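To make the fixed-viewpoint 'look-around' idea concrete, the sketch below samples a perspective view from an equirectangular panorama for a chosen yaw, pitch and field of view. It is a generic image-based rendering illustration, not the Tour-into-the-Picture method the paper builds on, and the synthetic panorama stands in for a real photograph.

```python
# Sample a perspective view from an equirectangular panorama (generic image-based rendering sketch).
import numpy as np

def look_around(pano, yaw_deg, pitch_deg, fov_deg=60.0, out_w=640, out_h=480):
    """pano: H x W x 3 array covering 360 deg horizontally and 180 deg vertically."""
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)       # pinhole focal length in pixels
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    # Ray directions in camera space, then rotate by pitch (about x) and yaw (about y).
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    cp, sp = np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))
    cy, sy = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = dirs @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])                    # -pi .. pi
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))            # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return pano[v, u]                                         # nearest-neighbour lookup

# Usage with a synthetic panorama; a real application would load a photograph instead.
pano = np.random.randint(0, 255, (1024, 2048, 3), dtype=np.uint8)
view = look_around(pano, yaw_deg=30.0, pitch_deg=-10.0)
print(view.shape)   # (480, 640, 3)
```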
Research on hyperspectral dynamic scene and image sequence simulation
NASA Astrophysics Data System (ADS)
Sun, Dandan; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei
2016-10-01
This paper presents a simulation method for hyper-spectral dynamic scenes and image sequences, intended for hyper-spectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, anti-interference capability and other advantages, hyper-spectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyper-spectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyper-spectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a digital scene generation method. By building multiple sensor models for different bands and different bandwidths, hyper-spectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The final dynamic scenes run in real time with realistic appearance, at frame rates up to 100 Hz. By saving all the scene grayscale data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyper-spectral images are consistent with the theoretical analysis.
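A toy version of the sensor-modelling step might look like the following: a high-resolution per-pixel radiance spectrum is integrated against Gaussian spectral response functions placed at assumed band centres and bandwidths to produce one image plane per band. The radiance model, band set, and scene size are placeholders rather than the paper's radiation-physics model.

```python
# Toy band integration for a simulated hyper-spectral sensor (all numbers are assumptions).
import numpy as np

wl = np.linspace(0.4, 2.5, 2101)                 # wavelength grid [um], visible through SWIR (assumed)
dwl = wl[1] - wl[0]
nx, ny = 64, 64                                  # small scene purely for illustration

# Fake per-pixel radiance: a smooth continuum scaled by a per-pixel "material" factor.
material = np.random.default_rng(0).random((nx, ny, 1))
continuum = np.exp(-((wl - 1.0) / 0.8) ** 2)     # arbitrary spectral shape, not a physical model
radiance = material * continuum                  # shape (nx, ny, len(wl))

def band_image(center_um, fwhm_um):
    """Integrate the radiance against a Gaussian spectral response centred at center_um."""
    sigma = fwhm_um / 2.355                      # convert full width at half maximum to sigma
    srf = np.exp(-0.5 * ((wl - center_um) / sigma) ** 2)
    srf /= srf.sum() * dwl                       # normalise the response to unit area
    return (radiance * srf).sum(axis=-1) * dwl   # band-averaged radiance per pixel

# Assumed band set: 0.05 um wide bands across a short visible/NIR window.
centers = np.arange(0.45, 0.90, 0.05)
cube = np.stack([band_image(c, 0.05) for c in centers], axis=-1)
print(cube.shape)                                # (64, 64, number_of_bands)
```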
Virtual reality based surgery simulation for endoscopic gynaecology.
Székely, G; Bajka, M; Brechbühler, C; Dual, J; Enzler, R; Haller, U; Hug, J; Hutter, R; Ironmonger, N; Kauer, M; Meier, V; Niederer, P; Rhomberg, A; Schmid, P; Schweitzer, G; Thaler, M; Vuskovic, V; Tröster, G
1999-01-01
Virtual reality (VR) based surgical simulator systems offer very elegant possibilities to both enrich and enhance traditional education in endoscopic surgery. However, while a wide range of VR simulator systems have been proposed and realized in the past few years, most of these systems are far from able to provide a reasonably realistic surgical environment. We explore the basic approaches to the current limits of realism and ultimately seek to extend these based on our description and analysis of the most important components of a VR-based endoscopic simulator. The feasibility of the proposed techniques is demonstrated on a first modular prototype system implementing the basic algorithms for VR-training in gynaecologic laparoscopy.
a Low-Cost and Lightweight 3d Interactive Real Estate-Purposed Indoor Virtual Reality Application
NASA Astrophysics Data System (ADS)
Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.
2017-11-01
Interactive 3D architectural indoor design has become more popular as it has benefited from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments, so that users can directly modify them. This opportunity enables buyers to purchase a property off-the-plan more cheaply through virtual models. Instead of showing the property through 2D plans or renders, the visualized interior architecture of an unbuilt property on sale is demonstrated beforehand, so that investors get an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require a specialist to create such environments. In this study, we have created a low-cost, high-quality, fully interactive, real-estate-purposed VR application that provides a realistic interior architecture of the property by using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real-estate-purposed VR application and that it satisfied the expectations of property buyers.
A succinct overview of virtual reality technology use in Alzheimer's disease.
García-Betances, Rebeca I; Arredondo Waldmeyer, María Teresa; Fico, Giuseppe; Cabrera-Umpiérrez, María Fernanda
2015-01-01
We provide a brief review and appraisal of recent and current virtual reality (VR) technology for Alzheimer's disease (AD) applications. We categorize them according to their intended purpose (e.g., diagnosis, patient cognitive training, caregivers' education, etc.), focus feature (e.g., spatial impairment, memory deficit, etc.), methodology employed (e.g., tasks, games, etc.), immersion level, and passive or active interaction. Critical assessment indicates that most of them do not yet take full advantage of virtual environments with high levels of immersion and interaction. Many still rely on conventional 2D graphic displays to create non-immersive or semi-immersive VR scenarios. Important improvements are needed to make VR a better and more versatile assessment and training tool for AD. The use of the latest display technologies available, such as emerging head-mounted displays and 3D smart TV technologies, together with realistic multi-sensorial interaction devices, and neuro-physiological feedback capacity, are some of the most beneficial improvements this mini-review suggests. Additionally, it would be desirable that such VR applications for AD be easily and affordably transferable to in-home and nursing home environments.
2012-01-01
Scenario-based training exemplifies the learning-by-doing approach to human performance improvement. In this paper, we enumerate...through a narrative, mission, quest, or scenario. In this paper we argue for a combinatorial optimization search approach to selecting and ordering...the role of an expert for the purposes of practicing skills and knowledge in realistic situations in a learning-by-doing approach to performance
Abdi, Elahe; Bouri, Mohamed; Burdet, Etienne; Himidan, Sharifa; Bleuler, Hannes
2017-07-01
We have investigated how surgeons can use the foot to position a laparoscopic endoscope, a task that normally requires an extra assistant. Surgeons need to train in order to exploit the possibilities offered by this new technique and to safely manipulate the endoscope together with their hand movements. A realistic abdominal cavity has been developed as a training simulator to investigate this multi-arm manipulation. In this virtual environment, the surgeon's biological hands are modelled as laparoscopic graspers while the viewpoint is controlled by the dominant foot. Twenty-three surgeons and medical students performed single-handed and bimanual manipulation in this environment. The results show that residents had superior performance compared to both medical students and more experienced surgeons, suggesting that residency is an ideal period for this training. Performing the single-handed task improves performance in the bimanual task, whereas the converse was not true.
Hypermedia = hypercommunication
NASA Technical Reports Server (NTRS)
Laff, Mark R.
1990-01-01
New hardware and software technology has given application designers the freedom to use new realism in human-computer interaction. High-quality images, motion video, stereo sound and music, speech, touch, and gesture provide richer data channels between the person and the machine. Ultimately, this will lead to richer communication between people, with the computer as an intermediary. The whole point of hyper-books, hyper-newspapers, and virtual worlds is to transfer the concepts and relationships, the 'data structure', from the mind of the creator to that of the user. Some of the characteristics of this rich information channel are discussed, and some examples are presented.
Poeschl, Sandra; Doering, Nicola
2014-01-01
Realistic models in virtual reality training applications are considered to positively influence presence and performance. The experimental study presented here analyzed the effect of simulation fidelity (static vs. animated audience) on presence, as a prerequisite for performance, in a prototype virtual fear-of-public-speaking application with a sample of N = 40 non-phobic academic users. Contrary to the state of research, no influence on virtual presence or perceived realism was found, but an animated audience led to significantly higher anxiety while giving a talk. Although these findings could be explained by an application that might not have been realistic enough, they still question the role of presence as a mediating factor in virtual exposure applications.
Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert
2009-01-01
The inability to render realistic soft-tissue behavior in real time has remained a barrier to the face and content aspects of validity for many virtual reality surgical training systems. Biophysically based models are suitable not only for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. Among the existing approaches to modeling soft tissue for virtual reality surgical simulation, the computer-graphics-based approach lacks predictive capability; the mass-spring model (MSM) based approach lacks biophysically realistic soft-tissue dynamic behavior; and finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a space-discrete dynamic system it directly formulates the space-discrete but time-continuous governing equation with an embedded material constitutive relation, resulting in a discrete mechanics framework that possesses a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with focused attention towards a virtual laparoscopic nephrectomy application.
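For orientation only, the sketch below integrates a small space-discrete, time-continuous spring-damper strand with a semi-implicit Euler step, the kind of per-node real-time update such simulators rely on. It is a plain mass-spring stand-in, not the authors' thermodynamics-based discrete mechanics formulation, and every constant in it is an assumption.

```python
# Semi-implicit Euler update of a small spring-damper lattice (illustrative stand-in only).
import numpy as np

n = 10                                          # nodes along a 1D strand of "tissue"
mass, k, c_damp, dt = 0.01, 50.0, 0.05, 1e-3    # assumed material constants and time step
rest = 0.01                                     # rest length between neighbouring nodes [m]

x = np.arange(n) * rest                         # node positions [m]
v = np.zeros(n)                                 # node velocities [m/s]
x_tool = x[-1]                                  # the last node is grabbed by a virtual instrument

for step in range(2000):
    f = np.zeros(n)
    # Internal spring and damper forces between neighbouring nodes.
    stretch = np.diff(x) - rest
    rel_vel = np.diff(v)
    f_edge = k * stretch + c_damp * rel_vel
    f[:-1] += f_edge
    f[1:] -= f_edge
    # Boundary conditions: node 0 fixed, last node dragged slowly by the tool.
    x_tool += 1e-5
    f[-1] += 200.0 * (x_tool - x[-1])           # stiff coupling to the instrument (assumed gain)
    v += dt * f / mass                          # semi-implicit Euler: velocities first...
    v[0] = 0.0
    x += dt * v                                 # ...then positions from the updated velocities
    x[0] = 0.0

print(np.round(x - np.arange(n) * rest, 4))     # displacement from the undeformed state
```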
Design Virtual Reality Scene Roam for Tour Animations Base on VRML and Java
NASA Astrophysics Data System (ADS)
Cao, Zaihui; hu, Zhongyan
Virtual reality has been involved in a wide range of academic and commercial applications. It can give users a natural feeling of the environment by creating realistic virtual worlds. Implementing a virtual tour through a model of a tourist area on the web has become fashionable. In this paper, we present a web-based application that allows a user to walk through, see, and interact with a fully three-dimensional model of the tourist area. Issues regarding navigation and disorientation are addressed, and we suggest a combination of the metro map and an intuitive navigation system. Finally, we present a prototype which implements our ideas. The application of VR techniques integrates the visualization and animation of three-dimensional modelling into landscape analysis. The use of the VRML format makes it possible to obtain views of the 3D model and to explore it in real time. This is an important goal for the spatial information sciences.
Virtual geotechnical laboratory experiments using a simulator
NASA Astrophysics Data System (ADS)
Penumadu, Dayakar; Zhao, Rongda; Frost, David
2000-04-01
The details of a test simulator that provides a realistic environment for performing virtual laboratory experiments in soil mechanics are presented. A computer program, Geo-Sim, that can be used to perform virtual experiments and allows real-time observation of material response is described. The results of experiments, for a given set of input parameters, are obtained with the test simulator using well-trained artificial-neural-network-based soil models for different soil types and stress paths. Multimedia capabilities are integrated in Geo-Sim, using software that links and controls a laser disc player with real-time parallel processing ability. During the simulation of a virtual experiment, relevant portions of the video image of a previously recorded test on an actual soil specimen are displayed along with the graphical presentation of the response predicted by the feedforward ANN model. The pilot simulator developed to date includes all aspects of performing a triaxial test on cohesionless soil under undrained and drained conditions. The benefits of the test simulator are also presented.
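Conceptually, the ANN-based soil model acts as a fast surrogate that maps the current state and test conditions to a predicted response increment. The sketch below shows that pattern with a tiny feedforward network whose weights are untrained random placeholders; the input and output quantities are also assumptions, standing in for the trained, stress-path-specific models described in the paper.

```python
# Placeholder feedforward surrogate for a soil stress-strain response (weights are untrained stand-ins).
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    """Two-layer perceptron; in the real simulator the weights come from training on lab test data."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(scale=0.3, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.3, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def __call__(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

# Assumed inputs: axial strain, current deviatoric stress, confining pressure, drainage flag.
model = TinyMLP(n_in=4, n_hidden=16, n_out=2)   # assumed outputs: stress and pore-pressure increments

def run_virtual_triaxial(confining_kpa=100.0, drained=1.0, d_strain=1e-4, n_steps=200):
    q, u, strain = 0.0, 0.0, 0.0
    history = []
    for _ in range(n_steps):
        dq, du = model(np.array([strain, q, confining_kpa, drained]))
        q, u, strain = q + dq, u + du * (1.0 - drained), strain + d_strain
        history.append((strain, q, u))
    return history

curve = run_virtual_triaxial()
print(curve[-1])   # final (strain, deviatoric stress, excess pore pressure)
```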
Follow-the-Leader Control for the PIPS Prototype Hardware
NASA Technical Reports Server (NTRS)
Williams, Robert L. II; Lippitt, Thimas
1996-01-01
This report describes the payload inspection and processing system (PIPS), an automated system programmed off-line for inspection of space shuttle payloads after integration and prior to launch. PIPS features a hyper-redundant, 18-degree-of-freedom (DOF) serpentine truss manipulator capable of snake-like motions to avoid obstacles. During the summer of 1995, the author worked on the same project, developing a follow-the-leader (FTL) algorithm in graphical simulation which ensures whole-arm collision avoidance by forcing ensuing links to follow the same tip trajectory. The summer 1996 work was to control the prototype PIPS hardware in follow-the-leader mode. The project was successful in providing FTL control in hardware. The STS-82 payload mockup was used in the laboratory to demonstrate serpentine motions to avoid obstacles in a realistic environment.
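A skeletal version of the follow-the-leader idea is sketched below: the commanded tip positions are logged as a path, and each ensuing joint is placed on that recorded path at arc-length intervals of one link length, so the body only sweeps through space the tip has already cleared. The link count, link length, and tip trajectory are arbitrary assumptions, not parameters of the PIPS manipulator.

```python
# Follow-the-leader placement of serpentine links along the recorded tip path (illustrative sketch).
import numpy as np

LINK_LEN, N_LINKS = 0.3, 6                    # assumed link length [m] and link count
path = [np.array([-0.1, 0.0])]                # recorded tip waypoints, starting near the base

def arc_lengths(pts):
    d = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return np.concatenate([[0.0], np.cumsum(d)])

def joint_positions(path):
    """Place each joint on the recorded tip path, one link length of arc behind the previous joint."""
    pts = np.asarray(path)
    s = arc_lengths(pts)
    joints = []
    for i in range(N_LINKS + 1):
        target = max(s[-1] - i * LINK_LEN, 0.0)           # arc length measured back from the tip
        joints.append(np.array([np.interp(target, s, pts[:, 0]),
                                np.interp(target, s, pts[:, 1])]))
    return joints

# Drive the tip along a curved trajectory; the ensuing joints retrace the same curve.
for t in np.linspace(0.0, 1.0, 200):
    tip = np.array([2.0 * t, 0.8 * np.sin(3.0 * t)])
    path.append(tip)

for j, p in enumerate(joint_positions(path)):
    print(f"joint {j}: {np.round(p, 3)}")
```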
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok
2014-01-01
This paper presents the control performance of a new type of four-degree-of-freedom (4-DOF) haptic master that can be used for robot-assisted minimally invasive surgery (RMIS). By adopting a controllable electrorheological (ER) fluid, the proposed master provides haptic feedback as well as remote manipulation. In order to verify the efficacy of the proposed master and method, an experiment is conducted with deformable objects representing human organs. Since using real human organs for control experiments is difficult due to high cost and ethical concerns, a virtual reality environment is used instead in this work. In order to embody a human organ in the virtual space, the experiment adopts a volumetric deformable object represented by a shape-retaining chain linked (S-chain) model, which has salient properties such as fast and realistic deformation of elastic objects. In the haptic architecture for RMIS, the desired torque/force and the desired position, originating from the object of the virtual slave and the operator of the haptic master, are transferred to each other. In order to achieve the desired torque/force trajectories, a sliding mode controller (SMC), which is known to be robust to uncertainties, is designed and empirically implemented. Tracking control performances for various torque/force trajectories from the virtual slave are evaluated and presented in the time domain.
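In outline, sliding mode tracking could look like the snippet below: a sliding surface combines the tracking error and its derivative, and the control is an equivalent-model term plus a switching term smoothed by a boundary layer. The single-axis plant, gains, disturbance, and trajectory are illustrative assumptions; the controller in the paper acts on the 4-DOF ER-fluid master.

```python
# One-axis sliding mode tracking of a desired trajectory from a virtual slave (generic sketch).
import numpy as np

J, b = 0.02, 0.05               # assumed inertia [kg m^2] and viscous friction [N m s/rad]
lam, K, phi = 20.0, 2.0, 0.05   # sliding-surface slope, switching gain, boundary-layer width
dt, T = 1e-3, 3.0

t = np.arange(0.0, T, dt)
q_d = 0.5 * np.sin(2.0 * np.pi * 0.5 * t)      # desired trajectory (assumed stand-in for the slave)
qd_dot = np.gradient(q_d, dt)
qd_ddot = np.gradient(qd_dot, dt)

q, q_dot = 0.0, 0.0
err_log = []
for k in range(len(t)):
    e = q - q_d[k]
    e_dot = q_dot - qd_dot[k]
    s = e_dot + lam * e                        # sliding surface
    # Equivalent control for the nominal model J*q_ddot + b*q_dot = u, plus a smoothed switching term.
    u_eq = J * (qd_ddot[k] - lam * e_dot) + b * q_dot
    u = u_eq - K * np.clip(s / phi, -1.0, 1.0)
    # Plant with an unmodelled disturbance torque to show the robustness property.
    q_ddot = (u - b * q_dot + 0.2 * np.sin(5.0 * t[k])) / J
    q_dot += q_ddot * dt
    q += q_dot * dt
    err_log.append(abs(e))

print(f"max |e| over the last second: {max(err_log[-1000:]):.4f} rad")
```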
NASA Astrophysics Data System (ADS)
Palestini, C.; Basso, A.
2017-11-01
In recent years, increased international investment in hardware and software supporting photomodeling algorithms and laser scanner data management has significantly reduced the cost of operations in support of Augmented Reality and Virtual Reality, which are designed to generate real-time explorable digital environments integrated with virtual stereoscopic headsets. The research analyzes transversal methodologies for adopting current VR tools within a specific workflow, in light of issues related to the intensive use of such devices, and outlines a quick overview of a possible "virtual migration" phenomenon: integration with new hyper-speed internet systems could trigger a massive colonization of cyberspace that would paradoxically also affect everyday life and, more generally, human spatial perception. The contribution aims to analyze the application systems used for low-cost 3D photogrammetry by means of a precise pipeline, clarifying how a 3D model is generated, automatically retopologized, textured by colour painting or photo-cloning techniques, and optimized for parametric insertion into virtual exploration platforms. The workflow analysis follows case studies on photomodeling, digital retopology and "virtual 3D transfer" of small archaeological artifacts and of an architectural compartment corresponding to the pronaos of the Aurum, a building designed in the 1940s by Michelucci. All operations are conducted with cheap or freely licensed software that today offers almost the same performance as its paid counterparts, with progressively improving data processing speed and management.
NASA Astrophysics Data System (ADS)
Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati
2012-01-01
Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the virtualization paradigm: using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional costs, and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage the VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.
Robust measurement of supernova νe spectra with future neutrino detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikrant, Alex; Laha, Ranjan; Horiuchi, Shunsaku
2018-01-25
Measuring precise all-flavor neutrino information from a supernova is crucial for understanding the core-collapse process as well as neutrino properties. We apply a chi-squared analysis for different detector setups to explore determination of νe spectral parameters. Using a long-term two-dimensional core-collapse simulation with three time-varying spectral parameters, we generate mock data to examine the capabilities of the current Super-Kamiokande detector and compare the relative improvements that gadolinium, Hyper-Kamiokande, and DUNE would have. We show that in a realistic three spectral parameter framework, the addition of gadolinium to Super-Kamiokande allows for a qualitative improvement in νe determination. Efficient neutron tagging will allow Hyper-Kamiokande to constrain spectral information more strongly in both the accretion and cooling phases. Overall, significant improvements will be made by Hyper-Kamiokande and DUNE, allowing for much more precise determination of νe spectral parameters.
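As a schematic of the kind of fit involved, the sketch below draws mock events from an assumed pinched ("alpha-fit") spectrum scaled by a rough E^2 cross-section factor and recovers the mean energy and pinching parameter with a chi-squared grid scan. The spectral form, event statistics, and binning are textbook-style assumptions, not the simulation inputs or detector responses used in the paper.

```python
# Schematic chi-squared scan for supernova neutrino spectral parameters (all inputs are assumptions).
import numpy as np

rng = np.random.default_rng(1)
E = np.linspace(1.0, 60.0, 60)                       # detected energy bins [MeV] (assumed)

def expected_counts(e_mean, alpha, norm=3000.0):
    """Pinched 'alpha-fit' spectrum times an ~E^2 cross-section scaling, binned in energy."""
    spec = (E / e_mean) ** alpha * np.exp(-(alpha + 1.0) * E / e_mean)
    rate = spec * E ** 2
    return norm * rate / rate.sum()

# Mock data from assumed "true" parameters, with Poisson fluctuations.
true_mean, true_alpha = 14.0, 2.5
data = rng.poisson(expected_counts(true_mean, true_alpha))

def chi2(e_mean, alpha):
    mu = expected_counts(e_mean, alpha)
    mask = mu > 0
    return np.sum((data[mask] - mu[mask]) ** 2 / mu[mask])   # Pearson chi-squared

means = np.linspace(10.0, 18.0, 81)
alphas = np.linspace(1.0, 4.0, 61)
grid = np.array([[chi2(m, a) for a in alphas] for m in means])
i, j = np.unravel_index(np.argmin(grid), grid.shape)
print(f"best fit: <E> = {means[i]:.2f} MeV, alpha = {alphas[j]:.2f} "
      f"(true: {true_mean}, {true_alpha})")
```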
Chuah, Joon Hao; Lok, Benjamin; Black, Erik
2013-04-01
Health sciences students often practice and are evaluated on interview and exam skills by working with standardized patients (people that role play having a disease or condition). However, standardized patients do not exist for certain vulnerable populations such as children and the intellectually disabled. As a result, students receive little to no exposure to vulnerable populations before becoming working professionals. To address this problem and thereby increase exposure to vulnerable populations, we propose using virtual humans to simulate members of vulnerable populations. We created a mixed reality pediatric patient that allowed students to practice pediatric developmental exams. Practicing several exams is necessary for students to understand how to properly interact with and correctly assess a variety of children. Practice also increases a student's confidence in performing the exam. Effective practice requires students to treat the virtual child realistically. Treating the child realistically might be affected by how the student and virtual child physically interact, so we created two object interaction interfaces - a natural interface and a mouse-based interface. We tested the complete mixed reality exam and also compared the two object interaction interfaces in a within-subjects user study with 22 participants. Our results showed that the participants accepted the virtual child as a child and treated it realistically. Participants also preferred the natural interface, but the interface did not affect how realistically participants treated the virtual child.
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic for the last two decades, judging by the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a new technique to achieve realistic real-time outdoor rendering that takes into account the interaction between sky colours and objects in AR systems, with shadows for any specific location, date and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, sky colour is generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows, through their effects on virtual objects in the AR system, is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering, thus addressing the problem of realistic AR systems. PMID:25268480
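Because the shadows depend on the sun's position for a given location, date, and time, a low-accuracy solar-position approximation like the one below is a common starting point. The declination and hour-angle formulas are standard textbook approximations and are not taken from the paper.

```python
# Approximate solar elevation/azimuth from latitude, day of year and local solar time (textbook formulas).
import numpy as np

def sun_position(lat_deg, day_of_year, solar_hour):
    """Returns (elevation, azimuth) in degrees; roughly degree-level accuracy, ignores the equation of time."""
    lat = np.radians(lat_deg)
    # Solar declination (simple cosine approximation).
    decl = np.radians(-23.44) * np.cos(2.0 * np.pi * (day_of_year + 10) / 365.0)
    # Hour angle: 15 degrees per hour away from local solar noon.
    h = np.radians(15.0 * (solar_hour - 12.0))
    sin_el = np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(h)
    el = np.arcsin(np.clip(sin_el, -1.0, 1.0))
    cos_az = (np.sin(decl) - np.sin(el) * np.sin(lat)) / (np.cos(el) * np.cos(lat))
    az = np.arccos(np.clip(cos_az, -1.0, 1.0))        # azimuth measured from north
    if solar_hour > 12.0:                             # afternoon sun lies to the west
        az = 2.0 * np.pi - az
    return np.degrees(el), np.degrees(az)

# Example: mid-latitude site, mid-June, mid-afternoon.
print(sun_position(lat_deg=45.0, day_of_year=172, solar_hour=15.0))
```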
Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments
Rhodes, Paul A.; Anderson, Todd O.
2012-01-01
To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772
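The flavour of the evolutionary loop can be sketched as follows: candidate parameter vectors steer a toy gradient-plus-noise agent in a static Gaussian "plume", fitness is the number of steps needed to reach the source (plus a penalty on the final distance when the source is not found), and the best candidate is mutated to form the next generation. The plume, the agent policy, and the fitness function are deliberate simplifications of the fluid-dynamic and neural models described in the paper.

```python
# (1 + lambda)-style evolution of steering gains for a toy plume-tracking agent (simplified illustration).
import numpy as np

rng = np.random.default_rng(2)
SOURCE = np.array([8.0, 5.0])                    # assumed plume source location

def concentration(p):
    """Static Gaussian plume standing in for the CFD-simulated odour field."""
    return np.exp(-np.sum((p - SOURCE) ** 2) / 8.0)

def fitness(params, max_steps=400):
    """Steps needed to reach the source (lower is better), plus a distance penalty when it is never reached."""
    gain, noise = params
    p = np.array([0.0, 0.0])
    for step in range(max_steps):
        if np.linalg.norm(p - SOURCE) < 0.5:
            return float(step)
        # Finite-difference odour gradient plus exploratory noise: a crude olfactorimotor policy.
        eps = 0.1
        grad = np.array([concentration(p + [eps, 0.0]) - concentration(p - [eps, 0.0]),
                         concentration(p + [0.0, eps]) - concentration(p - [0.0, eps])]) / (2 * eps)
        grad /= np.linalg.norm(grad) + 1e-12
        heading = gain * grad + noise * rng.normal(size=2)
        p = p + 0.2 * heading / (np.linalg.norm(heading) + 1e-9)
    return float(max_steps) + np.linalg.norm(p - SOURCE)

parent = np.array([1.0, 1.0])                    # initial (gradient gain, exploration noise) guesses
for generation in range(30):
    offspring = [parent + rng.normal(scale=0.2, size=2) for _ in range(8)]
    parent = min(offspring + [parent], key=fitness)
print("evolved parameters:", np.round(parent, 2), "fitness:", round(fitness(parent), 1))
```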
Virtual Cerebral Aneurysm Clipping with Real-Time Haptic Force Feedback in Neurosurgical Education.
Gmeiner, Matthias; Dirnberger, Johannes; Fenz, Wolfgang; Gollwitzer, Maria; Wurm, Gabriele; Trenkler, Johannes; Gruber, Andreas
2018-04-01
Realistic, safe, and efficient modalities for simulation-based training are highly warranted to enhance the quality of surgical education, and they should be incorporated in resident training. The aim of this study was to develop a patient-specific virtual cerebral aneurysm-clipping simulator with haptic force feedback and real-time deformation of the aneurysm and vessels. A prototype simulator was developed from 2012 to 2016. Evaluation of virtual clipping by blood flow simulation was integrated in this software, and the prototype was evaluated by 18 neurosurgeons. In 4 patients with different medial cerebral artery aneurysms, virtual clipping was performed after real-life surgery, and surgical results were compared regarding clip application, surgical trajectory, and blood flow. After head positioning and craniotomy, bimanual virtual aneurysm clipping with an original forceps was performed. Blood flow simulation demonstrated residual aneurysm filling or branch stenosis. The simulator improved anatomic understanding for 89% of neurosurgeons. Simulation of head positioning and craniotomy was considered realistic by 89% and 94% of users, respectively. Most participants agreed that this simulator should be integrated into neurosurgical education (94%). Our illustrative cases demonstrated that virtual aneurysm surgery was possible using the same trajectory as in real-life cases. Both virtual clipping and blood flow simulation were realistic in broad-based but not calcified aneurysms. Virtual clipping of a calcified aneurysm could be performed using the same surgical trajectory, but not the same clip type. We have successfully developed a virtual aneurysm-clipping simulator. Next, we will prospectively evaluate this device for surgical procedure planning and education. Copyright © 2018 Elsevier Inc. All rights reserved.
Virtual Worlds for Virtual Organizing
NASA Astrophysics Data System (ADS)
Rhoten, Diana; Lutters, Wayne
The members and resources of a virtual organization are dispersed across time and space, yet they function as a coherent entity through the use of technologies, networks, and alliances. As virtual organizations proliferate and become increasingly important in society, many may exploit the technical architectures of virtual worlds, which are the confluence of computer-mediated communication, telepresence, and virtual reality originally created for gaming. A brief socio-technical history describes their early origins and the waves of progress followed by stasis that brought us to the current period of renewed enthusiasm. Examination of contemporary examples demonstrates how three genres of virtual worlds have enabled new arenas for virtual organizing: developer-defined closed worlds, user-modifiable quasi-open worlds, and user-generated open worlds. Among expected future trends are an increase in collaboration born virtually rather than imported from existing organizations, a tension between high-fidelity recreations of the physical world and hyper-stylized imaginations of fantasy worlds, and the growth of specialized worlds optimized for particular sectors, companies, or cultures.
Training software using virtual-reality technology and pre-calculated effective dose data.
Ding, Aiping; Zhang, Di; Xu, X George
2009-05-01
This paper describes the development of a software package, called VR Dose Simulator, which aims to provide interactive radiation safety and ALARA training to radiation workers using virtual-reality (VR) simulations. Combined with a pre-calculated effective dose equivalent (EDE) database, a virtual radiation environment was constructed in the VR authoring software EON Studio, using 3-D models of a real nuclear power plant building. Avatar models representing two workers were adopted, with the arms and legs of each avatar controlled in the software to simulate walking and other postures. Collision detection algorithms were developed for various parts of the 3-D power plant building and the avatars to confine the avatars to certain regions of the virtual environment. Ten different camera viewpoints were assigned to conveniently cover the entire virtual scenery from different viewing angles. A user can control the avatar to carry out radiological engineering tasks using two modes of avatar navigation. A user can also specify two types of radiation source: Cs and Co. The location of the avatar inside the virtual environment during the course of the avatar's movement is linked to the EDE database. The accumulated dose is calculated and displayed on the screen in real time. Based on the final accumulated dose and the completion status of all virtual tasks, a score is given to evaluate the performance of the user. The paper concludes that VR-based simulation technologies are interactive and engaging, and thus potentially useful in improving the quality of radiation safety training. The paper also summarizes several challenges: more streamlined data conversion, realistic avatar movement and posture, more intuitive implementation of the data communication between EON Studio and VB.NET, and more versatile utilization of EDE data (such as a source near the body), all of which need to be addressed in future efforts to develop this type of software.
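The dose-accounting loop described above might reduce to something like the sketch below: the avatar's position is snapped to the nearest cell of a pre-computed dose-rate grid and the rate is integrated over each frame time. The grid, source placement, walking path, and dose-rate values are invented placeholders, not values from the EDE database used by the software.

```python
# Accumulate dose from a pre-computed dose-rate grid as an avatar moves (placeholder numbers throughout).
import numpy as np

# Pre-calculated dose-rate lookup [uSv/h] on a 1 m grid for one source configuration (invented values).
nx, ny = 40, 40
xx, yy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
source = np.array([25.0, 10.0])
dose_rate = 500.0 / (1.0 + (xx - source[0]) ** 2 + (yy - source[1]) ** 2)   # crude inverse-square-like falloff

def rate_at(pos):
    """Nearest-cell lookup, standing in for interpolation into the pre-calculated EDE database."""
    i = int(np.clip(np.round(pos[0]), 0, nx - 1))
    j = int(np.clip(np.round(pos[1]), 0, ny - 1))
    return dose_rate[i, j]

# Walk the avatar along a straight path across the room at 1.4 m/s with a 10 Hz update rate.
dt = 0.1                                              # frame time [s]
pos, goal = np.array([2.0, 35.0]), np.array([35.0, 5.0])
velocity = 1.4 * (goal - pos) / np.linalg.norm(goal - pos)
accumulated_uSv = 0.0
while np.linalg.norm(goal - pos) > 0.2:
    accumulated_uSv += rate_at(pos) * dt / 3600.0     # convert uSv/h to uSv per frame
    pos = pos + velocity * dt

print(f"accumulated dose along the path: {accumulated_uSv:.2f} uSv")
```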
Experiencing Soil Science from your office through virtual experiences
NASA Astrophysics Data System (ADS)
Beato, M. Carmen; González-Merino, Ramón; Campillo, M. Carmen; Fernández-Ahumada, Elvira; Ortiz, Leovigilda; Taguas, Encarnación V.; Guerrero, José Emilio
2017-04-01
Currently, numerous tools based on the new information and communication technologies offer a wide range of possibilities for implementing interactive methodologies in education and science. In particular, virtual reality and immersive worlds - artificially generated computer environments where users interact through a figurative individual that represents them in that environment (their "avatar") - have been identified as a technology that will change the way we live, particularly in educational terms, product development and entertainment areas (Schmorrow, 2009). Gisbert-Cervera et al. (2011) consider that 3D worlds in education, among other benefits, provide a unique environment for training and knowledge exchange that allows goal-oriented reflection to support activities and achieve learning outcomes. In Soil Science, the experimental component is essential to acquire the knowledge necessary to understand the biogeochemical processes taking place and their interactions with time, climate, topography and the living organisms present. In this work, an immersive virtual environment which reproduces a series of soil pits has been developed to evaluate and differentiate soil characteristics such as texture, structure, consistency, colour and other physical-chemical and biological properties for educational purposes. Bibliographical material such as pictures, books and papers was collected in order to classify the information needed and to build the soil profiles in the virtual environment. The virtual recreation was built with Unreal Engine 4 (UE4; https://www.unrealengine.com/unreal-engine-4). This engine was chosen because it provides two toolsets for programmers that can be used in tandem to accelerate development workflows. In addition, Unreal Engine 4 powers hundreds of games as well as real-time 3D films, training simulations and visualizations, and it creates very realistic graphics. To evaluate its impact and usefulness in teaching, a series of surveys will be presented to undergraduate students and teachers. REFERENCES: Gisbert-Cervera M., Esteve-Gonzalez V., Camacho-Marti M.M. (2011). Delve into the Deep: Learning Potential in Metaverses and 3D Worlds. eLearning Papers (25). ISSN: 1887-1542. Schmorrow D.D. (2009). Why virtual? Theoretical Issues in Ergonomics Science 10(3): 279-282.
Hybrid Reality Lab Capabilities - Video 2
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2016-01-01
Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, during teleoperation of remote assets (arms, rovers, robots, etc.), and in other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), the information used to create the virtual scenes can be old (i.e., visualized long after physical objects have moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or a see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g., camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both representations (virtual and physical) simultaneously. After a short period of adjustment, individuals begin to interact with all the objects in the scene as if they were real-life objects. The ability to physically touch and interact with digitally created objects that have the same shape, size, and location as their physical counterparts in the virtual reality environment can be a game changer when it comes to training, planning, engineering analysis, science, entertainment, etc. Our project is developing such capabilities for various types of environments. The video outlined with this abstract is a representation of an ISS Hybrid Reality experience. In the video you can see various Hybrid Reality elements that provide immersion beyond standard Virtual Reality or Augmented Reality.
Hypermedia or Hyperchaos: Using HyperCard to Teach Medical Decision Making
Smith, W.R.; Hahn, J.S.
1989-01-01
HyperCard presents an unconventional instructional environment for educators and students, in that it is nonlinear and nonsequential, and it provides innumerable choices of learning paths to learners. The danger of this environment is that it may frustrate learners whose cognitive and learning styles do not match it. Learners who prefer guided learning rather than independent exploration may become distracted or disoriented by this environment, lost in “hyperspace.” In the context of medical education, these ill-matched styles may produce some physicians who have not mastered skills essential to the practice of medicine. The authors have sought to develop a HyperCard learning environment consisting of related programs that teach medical decision making. The environment allows total learner control until the learner demonstrates a need for guidance in order to achieve the essential objectives of the program. A discussion follows of the implications of hypermedia for instructional design and medical education.
Virtual alternative to the oral examination for emergency medicine residents.
McGrath, Jillian; Kman, Nicholas; Danforth, Douglas; Bahner, David P; Khandelwal, Sorabh; Martin, Daniel R; Nagel, Rollin; Verbeck, Nicole; Way, David P; Nelson, Richard
2015-03-01
The oral examination is a traditional method for assessing the developing physician's medical knowledge, clinical reasoning and interpersonal skills. The typical oral examination is a face-to-face encounter in which examiners quiz examinees on how they would confront a patient case. The advantage of the oral exam is that the examiner can adapt questions to the examinee's response. The disadvantage is the potential for examiner bias and intimidation. Computer-based virtual simulation technology has been widely used in the gaming industry. We wondered whether virtual simulation could serve as a practical format for delivery of an oral examination. For this project, we compared the attitudes and performance of emergency medicine (EM) residents who took our traditional oral exam to those who took the exam using virtual simulation. EM residents (n=35) were randomized to a traditional oral examination format (n=17) or a simulated virtual examination format (n=18) conducted within an immersive learning environment, Second Life (SL). Proctors scored residents using the American Board of Emergency Medicine oral examination assessment instruments, which included execution of critical actions and ratings on eight competency categories (1-8 scale). Study participants were also surveyed about their oral examination experience. We observed no differences between virtual and traditional groups on critical action scores or scores on eight competency categories. However, we noted moderate effect sizes favoring the Second Life group on the clinical competence score. Examinees from both groups thought that their assessment was realistic, fair, objective, and efficient. Examinees from the virtual group reported a preference for the virtual format and felt that the format was less intimidating. The virtual simulated oral examination was shown to be a feasible alternative to the traditional oral examination format for assessing EM residents. Virtual environments for oral examinations should continue to be explored, particularly since they offer an inexpensive, more comfortable, yet equally rigorous alternative.
NASA Technical Reports Server (NTRS)
McClinton, Charles R.; Rausch, Vincent L.; Sitz, Joel; Reukauf, Paul
2001-01-01
This paper provides an overview of the objectives and status of the Hyper-X program, which is tailored to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. The first Hyper-X research vehicle (HXRV), designated X-43, is being prepared at the Dryden Flight Research Center for flight at Mach 7. Extensive risk reduction activities for the first flight are completed, and non-recurring design activities for the Mach 10 X-43 (3rd flight) are nearing completion. The Mach 7 flight of the X-43, in the spring of 2001, will be the first flight of an airframe-integrated scramjet-powered vehicle. The Hyper-X program is continuing to plan follow-on activities to focus an orderly continuation of hypersonic technology development through flight research.
NASA Technical Reports Server (NTRS)
McClinton, Charles R.; Reubush, David E.; Sitz, Joel; Reukauf, Paul
2001-01-01
This paper provides an overview of the objectives and status of the Hyper-X program, which is tailored to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. The first Hyper-X research vehicle (HXRV), designated X-43, is being prepared at the Dryden Flight Research Center for flight at Mach 7. Extensive risk reduction activities for the first flight are completed, and non-recurring design activities for the Mach 10 X-43 (third flight) are nearing completion. The Mach 7 flight of the X-43, in the spring of 2001, will be the first flight of an airframe-integrated scramjet-powered vehicle. The Hyper-X program is continuing to plan follow-on activities to focus an orderly continuation of hypersonic technology development through flight research.
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Büyüksalih, G.; Tschirschwitz, F.; Kan, T.; Deggim, S.; Kaya, Y.; Baskaraca, A. P.
2017-05-01
Recent advances in contemporary Virtual Reality (VR) technologies are going to have a significant impact on everyday life. Through VR it is possible to virtually explore a computer-generated environment as a different reality, and to immerse oneself in the past or in a virtual museum without leaving the current real-life situation. For the ultimate VR experience, the user should see only the virtual world. Currently, the user must wear a VR headset which fits around the head and over the eyes to visually separate themselves from the physical world. Via the headset, images are fed to the eyes through two small lenses. Cultural heritage monuments are ideally suited both for thorough multi-dimensional geometric documentation and for realistic interactive visualisation in immersive VR applications. Additionally, the game industry offers tools for interactive visualisation of objects to motivate users to virtually visit objects and places. In this paper, the generation of a virtual 3D model of the Selimiye mosque in the city of Edirne, Turkey, and its processing for data integration into the game engine Unity are presented. The project has been carried out as a co-operation between BİMTAŞ, a company of the Greater Municipality of Istanbul, Turkey, and the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg, Germany, to demonstrate an immersive and interactive visualisation using the new VR system HTC Vive. The workflow from data acquisition to VR visualisation, including the necessary programming for navigation, is described. Furthermore, the possible use (including simultaneous multi-user environments) of such a VR visualisation for a cultural heritage monument is discussed in this contribution.
On the Role of Hyper-arid Regions within the Virtual Water Trade Network
NASA Astrophysics Data System (ADS)
Aggrey, James; Alshamsi, Aamena; Molini, Annalisa
2016-04-01
Climate change, economic development, and population growth are bound to increasingly impact global water resources, posing a significant threat to the sustainable development of arid regions, where water consumption greatly exceeds the natural carrying capacity, the population growth rate is high, and climate variability will affect both water consumption and availability. Virtual Water Trade (VWT) - i.e. the international trade network of water-intensive products - has been proposed as a possible solution to optimize the allocation of water resources on the global scale. By increasing food availability and lowering food prices, it may in fact help the rapid development of water-scarce regions. The structure of the VWT network has been analyzed by a number of authors in connection with trade policies, socioeconomic constraints, and agricultural efficiency. However, a systematic analysis of the structure and dynamics of the VWT network conditional on aridity, climatic forcing, and energy availability is still missing. Our goal is hence to analyze the role of arid and hyper-arid regions within the VWT network under diverse climatic, demographic, and energy constraints, with an aim to contribute to the ongoing Energy-Water-Food nexus discussion. In particular, we focus on the hyper-arid lands of the Arabian Peninsula, the role they play in the global network, and the assessment of their specific criticalities, as reflected in the resilience of the VWT network.
Pereira, Michael; Argelaguet, Ferran; Millán, José Del R; Lécuyer, Anatole
2018-01-01
Competition changes the environment for athletes. The difficulty of training for such stressful events can lead to the well-known effect of "choking" under pressure, which prevents athletes from performing at their best level. To study the effect of competition on the human brain, we recorded pilot electroencephalography (EEG) data while novice shooters were immersed in a realistic virtual environment representing a shooting range. We found a differential between-subject effect of competition on mu (8-12 Hz) oscillatory activity during aiming; compared to training, the more the subject was able to desynchronize his mu rhythm during competition, the better was his shooting performance. Because this differential effect could not be explained by differences in simple measures of the kinematics and muscular activity, nor by the effect of competition or shooting performance per se, we interpret our results as evidence that mu desynchronization has a positive effect on performance during competition.
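The mu-desynchronization measure could be computed roughly as in the sketch below: the channel is band-pass filtered to 8-12 Hz, band power is estimated in a baseline window and in the aiming window, and event-related desynchronization is the relative power drop. The synthetic signal, window boundaries, and single-channel treatment are assumptions; the study's actual pipeline and statistics are more involved.

```python
# Mu-band (8-12 Hz) event-related desynchronization from one EEG channel (simplified sketch).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                             # sampling rate [Hz] (assumed)
t = np.arange(0, 8.0, 1.0 / fs)

# Synthetic channel: a 10 Hz rhythm whose amplitude drops during the 4-8 s "aiming" window, plus noise.
amp = np.where(t < 4.0, 1.0, 0.5)
eeg = amp * np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.default_rng(3).normal(size=t.size)

b, a = butter(4, [8.0, 12.0], btype="band", fs=fs)     # 4th-order Butterworth band-pass filter
mu = filtfilt(b, a, eeg)                               # zero-phase filtering

def band_power(x):
    return np.mean(x ** 2)

baseline = band_power(mu[(t >= 1.0) & (t < 3.0)])      # rest/preparation window (assumed)
aiming = band_power(mu[(t >= 5.0) & (t < 7.0)])        # aiming window (assumed)
erd_percent = 100.0 * (baseline - aiming) / baseline   # positive values indicate desynchronization
print(f"mu ERD during aiming: {erd_percent:.1f}%")
```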
Motivating Calculus-Based Kinematics Instruction with Super Mario Bros
NASA Astrophysics Data System (ADS)
Nordine, Jeffrey C.
2011-09-01
High-quality physics instruction is contextualized, motivates students to learn, and represents the discipline as a way of investigating the world rather than as a collection of facts and equations. Inquiry-oriented pedagogy, such as problem-based instruction, holds great promise for both teaching physics content and representing the process of doing real science.2 A challenge for physics teachers is to find instructional contexts that are meaningful, accessible, and motivating for students. Today's students are spending a growing fraction of their lives interacting with virtual environments, and these environments—physically realistic or not—can provide valuable contexts for physics explorations3-5 and lead to thoughtful discussions about decisions that programmers make when designing virtual environments. In this article, I describe a problem-based approach to calculus-based kinematics instruction that contextualizes students' learning within the Super Mario Bros. video game—a game that is more than 20 years old, but still remarkably popular with today's high school and college students.
A Full Body Steerable Wind Display for a Locomotion Interface.
Kulkarni, Sandip D; Fisher, Charles J; Lefler, Price; Desai, Aditya; Chakravarthy, Shanthanu; Pardyjak, Eric R; Minor, Mark A; Hollerbach, John M
2015-10-01
This paper presents the Treadport Active Wind Tunnel (TPAWT)-a full-body immersive virtual environment for the Treadport locomotion interface designed for generating wind on a user from any frontal direction at speeds up to 20 kph. The goal is to simulate the experience of realistic wind while walking in an outdoor virtual environment. A recirculating-type wind tunnel was created around the pre-existing Treadport installation by adding a large fan, ducting, and enclosure walls. Two sheets of air in a non-intrusive design flow along the side screens of the back-projection CAVE-like visual display, where they impinge and mix at the front screen to redirect towards the user in a full-body cross-section. By varying the flow conditions of the air sheets, the direction and speed of wind at the user are controlled. Design challenges to fit the wind tunnel in the pre-existing facility, and to manage turbulence to achieve stable and steerable flow, were overcome. The controller performance for wind speed and direction is demonstrated experimentally.
Virtual reality welder training
NASA Astrophysics Data System (ADS)
White, Steven A.; Reiners, Dirk; Prachyabrued, Mores; Borst, Christoph W.; Chambers, Terrence L.
2010-01-01
This document describes the Virtual Reality Simulated MIG Lab (sMIG), a system for Virtual Reality welder training. It is designed to reproduce the experience of metal inert gas (MIG) welding faithfully enough to be used as a teaching tool for beginning welding students. To make the experience as realistic as possible, it employs physically accurate and tracked input devices, a real-time welding simulation, real-time sound generation and a 3D display for output. Because it is a fully digital system, it can go beyond providing a realistic welding experience by giving the student interactive and immediate feedback, helping to avoid learning wrong movements from day one.
Virtual gaming simulation of a mental health assessment: A usability study.
Verkuyl, Margaret; Romaniuk, Daria; Mastrilli, Paula
2018-05-18
Providing safe and realistic virtual simulations could be an effective way to facilitate the transition from the classroom to clinical practice. As nursing programs begin to include virtual simulations as a learning strategy, it is critical to first assess the technology for ease of use and usefulness. A virtual gaming simulation was developed, and a usability study was conducted to assess its ease of use and usefulness for students and faculty. The Technology Acceptance Model provided the framework for the study, which included expert review and testing by nursing faculty and nursing students. This study highlighted the importance of assessing ease of use and usefulness in a virtual gaming simulation and provided feedback for the development of an effective virtual gaming simulation. The study participants said the virtual gaming simulation was engaging, realistic and similar to a clinical experience. Participants found the game easy to use and useful. Testing provided the development team with ideas to improve the user interface. The usability methodology provided is a replicable approach to testing virtual experiences before a research study or before implementing virtual experiences into a curriculum. Copyright © 2018 Elsevier Ltd. All rights reserved.
Taglieri, Catherine A; Crosby, Steven J; Zimmerman, Kristin; Schneider, Tulip; Patel, Dhiren K
2017-06-01
Objective. To assess the effect of incorporating virtual patient activities in a pharmacy skills lab on student competence and confidence when conducting real-time comprehensive clinic visits with mock patients. Methods. Students were randomly assigned to a control or intervention group. The control group completed the clinic visit prior to completing virtual patient activities. The intervention group completed the virtual patient activities prior to the clinic visit. Student proficiency was evaluated in the mock lab. All students completed additional exercises with the virtual patient and were subsequently assessed. Student impressions were assessed via a pre- and post-experience survey. Results. Student performance conducting clinic visits was higher in the intervention group compared to the control group. Overall student performance continued to improve in the subsequent module. There was no change in student confidence from pre- to post-experience. Student rating of the ease of use and realistic simulation of the virtual patient increased; however, student rating of the helpfulness of the virtual patient decreased. Despite the decline in students' rating of the virtual patient program's helpfulness, student performance improved. Conclusion. Virtual patient activities enhanced student performance during mock clinic visits. Students felt the virtual patient realistically simulated a real patient. Virtual patients may provide additional learning opportunities for students.
Virtual endoscopy in neurosurgery: a review.
Neubauer, André; Wolfsberger, Stefan
2013-01-01
Virtual endoscopy is the computerized creation of images depicting the inside of patient anatomy reconstructed in a virtual reality environment. It permits interactive, noninvasive, 3-dimensional visual inspection of anatomical cavities or vessels. This can aid in diagnostics, potentially replacing an actual endoscopic procedure, and help in the preparation of a surgical intervention by bridging the gap between plain 2-dimensional radiologic images and the 3-dimensional depiction of anatomy during actual endoscopy. If not only the endoscopic vision but also endoscopic handling, including realistic haptic feedback, is simulated, virtual endoscopy can be an effective training tool for novice surgeons. In neurosurgery, the main fields of the application of virtual endoscopy are third ventriculostomy, endonasal surgery, and the evaluation of pathologies in cerebral blood vessels. Progress in this very active field of research is achieved through cooperation between the technical and the medical communities. While the technology advances and new methods for modeling, reconstruction, and simulation are being developed, clinicians evaluate existing simulators, steer the development of new ones, and explore new fields of application. This review introduces some of the most interesting virtual reality systems for endoscopic neurosurgery developed in recent years and presents clinical studies conducted either on areas of application or specific systems. In addition, benefits and limitations of single products and simulated neuroendoscopy in general are pointed out.
NASA Astrophysics Data System (ADS)
Krum, David M.; Sadek, Ramy; Kohli, Luv; Olson, Logan; Bolas, Mark
2010-01-01
As part of the Institute for Creative Technologies and the School of Cinematic Arts at the University of Southern California, the Mixed Reality lab develops technologies and techniques for presenting realistic immersive training experiences. Such experiences typically place users within a complex ecology of social actors, physical objects, and collections of intents, motivations, relationships, and other psychological constructs. Currently, it remains infeasible to completely synthesize the interactivity and sensory signatures of such ecologies. For this reason, the lab advocates mixed reality methods for training and conducts experiments exploring such methods. Currently, the lab focuses on understanding and exploiting the elasticity of human perception with respect to representational differences between real and virtual environments. This paper presents an overview of three projects: techniques for redirected walking, displays for the representation of virtual humans, and audio processing to increase stress.
2016-08-01
In the United States, exposure to media violence is becoming an inescapable component of children's lives. With the rise in new technologies, such as tablets and new gaming platforms, children and adolescents increasingly are exposed to what is known as "virtual violence." This form of violence is not experienced physically; rather, it is experienced in realistic ways via new technology and ever more intense and realistic games. The American Academy of Pediatrics continues to be concerned about children's exposure to virtual violence and the effect it has on their overall health and well-being. This policy statement aims to summarize the current state of scientific knowledge regarding the effects of virtual violence on children's attitudes and behaviors and to make specific recommendations for pediatricians, parents, industry, and policy makers. Copyright © 2016 by the American Academy of Pediatrics.
ERIC Educational Resources Information Center
Honebein, Peter C.; Goldsworthy, Richard
2012-01-01
Virtual classrooms and virtual activities have waxed and waned, with most focusing on fostering learning in the cognitive domain and, realistically, most becoming rapidly discontinued. But social virtual realities (SVR) are uniquely "social," so what about interpersonal skills? This article describes the authors' experiences exploring SVR as a…
Plancher, Gaën; Gyselinck, Valérie; Piolino, Pascale
2018-01-01
Memory is one of the most important cognitive functions in a person's life as it is essential for recalling personal memories and performing many everyday tasks. Although a huge number of studies have been conducted in the field, only a few of them have investigated memory in realistic situations, due to methodological issues. The various tools that have been developed using virtual environments (VEs) have gained popularity in cognitive psychology and neuropsychology because they make it possible to create naturalistic yet controlled situations, and are thus particularly well suited to the study of episodic memory (EM), for which an ecological evaluation is of prime importance. EM is the conscious recollection of personal events combined with their phenomenological and spatiotemporal encoding contexts. Using an original paradigm in a VE, the objective of the present study was to characterize the construction of episodic memories. While the concept of working memory has become central to the understanding of a wide range of cognitive functions, its role in the integration of episodic memories has seldom been assessed in an ecological context. This experiment aimed to fill this gap by studying how EM is affected by concurrent tasks requiring working memory resources in a realistic situation. Participants navigated in a virtual town and had to memorize as many elements in their spatiotemporal context as they could. During learning, participants performed either a concurrent task meant to prevent maintenance through the phonological loop, a task aimed at preventing maintenance through the visuospatial sketchpad, or no concurrent task. EM was assessed in a recall test performed after learning through various scores measuring the what, where and when of the memories. Results showed that, compared to the control condition with no concurrent task, the prevention of maintenance through the phonological loop had a deleterious impact only on the encoding of central elements. By contrast, the prevention of visuospatial maintenance interfered both with the encoding of the temporal context and with binding. These results suggest that the integration of realistic episodic memories relies on different working memory processes that depend on the nature of the traces.
Lopez Maïté, C; Gaétane, D; Axel, C
2016-01-01
The ability to perform two tasks simultaneously has become increasingly important as attention-demanding technologies have become more common in daily life. This type of attentional resource allocation is commonly called "divided attention". Because of the importance of divided attention in natural world settings, substantial efforts have been made recently to promote an integrated, realistic assessment of functional abilities in dual-task paradigms. In this context, virtual reality methods appear to be a good solution. However, to date there has been little discussion of the validity of such methods. Here, we offer a comparative review of conventional tools used to assess divided attention and of the first virtual reality studies (mostly from the field of road and pedestrian safety). The ecological character of virtual environments leads to a better understanding of the influence of dual-task settings and also makes it possible to clarify issues such as the utility of hands-free phones. After discussing the theoretical and clinical contributions of these studies, we discuss the limits of virtual reality assessment, focusing in particular: (i) on the challenges associated with lack of familiarity with new technological devices; (ii) on the validity of the ecological character of virtual environments; and (iii) on the question of whether the results obtained in a specific context can be generalized to all dual-task situations typical of daily life. To overcome the limitations associated with virtual reality, we propose: (i) to include a standardized familiarization phase in assessment protocols so as to limit the interference caused by the use of new technologies; (ii) to systematically compare virtual reality performance with conventional tests or real-life tests; and (iii) to design dual-task scenarios that are independent of the patient's expertise in one of the two tasks. We conclude that virtual reality appears to constitute a useful tool when used in combination with more conventional tests. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Simple force feedback for small virtual environments
NASA Astrophysics Data System (ADS)
Schiefele, Jens; Albert, Oliver; van Lier, Volker; Huschka, Carsten
1998-08-01
In today's civil flight training simulators only the cockpit and all its interaction devices exist as physical mockups. All other elements such as flight behavior, motion, sound, and the visual system are virtual. As an extension to this approach, 'Virtual Flight Simulation' tries to substitute the cockpit mockup with a 3D computer-generated image. The complete cockpit including the exterior view is displayed on a Head Mounted Display (HMD), a BOOM, or a Cave Animated Virtual Environment. In most applications a dataglove or virtual pointers are used as input devices. A basic problem of such a Virtual Cockpit (VC) simulation is missing force feedback. A pilot cannot touch and feel the buttons, knobs, dials, etc. that he tries to manipulate. As a result, it is very difficult to generate realistic inputs into VC systems. 'Seating Bucks' are used in the automotive industry to overcome the problem of missing force feedback. Only a seat, steering wheel, pedals, stick shift, and radio panel are physically available. All other geometry is virtual and therefore untouchable, but visible in the output device. Extending this concept, a 'Seating Buck' for commercial transport aircraft cockpits was developed. Pilot seat, side stick, pedals, thrust levers, and flaps lever are physically available. All other panels are simulated by simple flat plastic panels. They are located at the same location as their real counterparts, only lacking the real input devices. A pilot sees the entire photorealistic cockpit in an HMD as 3D geometry but can only touch the physical parts and plastic panels. In order to determine task performance with the developed Seating Buck, a test series was conducted. Users press buttons, adjust dials, and turn knobs. In a first test, a complete virtual environment was used. The second setting had a plastic panel replacing all input devices. Finally, as a cross-reference, the participants had to repeat the test with a complete physical mockup of the input devices. All panels and physical devices can be easily relocated to simulate a different type of cockpit. At most 30 minutes are needed for a complete adaptation. So far, an Airbus A340 and a generic cockpit are supported.
Kinematic evaluation of virtual walking trajectories.
Cirio, Gabriel; Olivier, Anne-Hélène; Marchal, Maud; Pettré, Julien
2013-04-01
Virtual walking, a fundamental task in Virtual Reality (VR), is greatly influenced by the locomotion interface being used, by the specificities of input and output devices, and by the way the virtual environment is represented. No matter how virtual walking is controlled, the generation of realistic virtual trajectories is absolutely required for some applications, especially those dedicated to the study of walking behaviors in VR, navigation through virtual places for architecture, rehabilitation and training. Previous studies focused on evaluating the realism of locomotion trajectories have mostly considered the result of the locomotion task (efficiency, accuracy) and its subjective perception (presence, cybersickness). Few have focused on the locomotion trajectory itself, and then only in situations involving geometrically constrained tasks. In this paper, we study the realism of unconstrained trajectories produced during virtual walking by addressing the following question: did the user reach his destination by virtually walking along a trajectory he would have followed in similar real conditions? To this end, we propose a comprehensive evaluation framework consisting of a set of trajectographical criteria and a locomotion model to generate reference trajectories. We consider a simple locomotion task where users walk between two oriented points in space. The travel path is analyzed both geometrically and temporally in comparison to simulated reference trajectories. In addition, we demonstrate the framework through a user study that considered an initial set of common and frequent virtual walking conditions, namely different input devices, output display devices, control laws, and visualization modalities. The study provides insight into the relative contributions of each condition to the overall realism of the resulting virtual trajectories.
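A minimal illustration of the kind of trajectographical comparison described here is to resample the recorded virtual trajectory and a reference trajectory to a common set of points along their lengths and report the mean geometric deviation between them. The sketch below (pure NumPy) shows one such metric; the arc-length resampling scheme and the example trajectories are illustrative assumptions, not the authors' criteria.

```python
import numpy as np

def resample_by_arclength(path, n_samples=100):
    """Resample a 2D path (N x 2 array) at n_samples points equally spaced along its length."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    target = np.linspace(0.0, s[-1], n_samples)
    x = np.interp(target, s, path[:, 0])
    y = np.interp(target, s, path[:, 1])
    return np.column_stack([x, y])

def mean_geometric_deviation(traj, ref, n_samples=100):
    """Mean point-wise distance between two paths after arc-length resampling."""
    a = resample_by_arclength(traj, n_samples)
    b = resample_by_arclength(ref, n_samples)
    return np.linalg.norm(a - b, axis=1).mean()

if __name__ == "__main__":
    t = np.linspace(0, 1, 50)
    reference = np.column_stack([3.0 * t, 1.5 * np.sin(np.pi * t)])   # simulated reference path
    recorded = reference + np.random.default_rng(1).normal(0, 0.05, reference.shape)
    print(f"mean deviation: {mean_geometric_deviation(recorded, reference):.3f} m")
```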
Eyeblink Classical Conditioning and Post-Traumatic Stress Disorder – A Model Systems Approach
Schreurs, Bernard G.; Burhans, Lauren B.
2015-01-01
Not everyone exposed to trauma suffers flashbacks, bad dreams, numbing, fear, anxiety, sleeplessness, hyper-vigilance, hyperarousal, or an inability to cope, but those who do may suffer from post-traumatic stress disorder (PTSD). PTSD is a major physical and mental health problem for military personnel and civilians exposed to trauma. There is still debate about the incidence and prevalence of PTSD especially among the military, but for those who are diagnosed, behavioral therapy and drug treatment strategies have proven to be less than effective. A number of these treatment strategies are based on rodent fear conditioning research and are capable of treating only some of the symptoms because the extinction of fear does not deal with the various forms of hyper-vigilance and hyperarousal experienced by people with PTSD. To help address this problem, we have developed a preclinical eyeblink classical conditioning model of PTSD in which conditioning and hyperarousal can both be extinguished. We review this model and discuss findings showing that unpaired stimulus presentations can be effective in reducing levels of conditioning and hyperarousal even when unconditioned stimulus intensity is reduced to the point where it is barely capable of eliciting a response. These procedures have direct implications for the treatment of PTSD and could be implemented in a virtual reality environment. PMID:25904874
Eyeblink classical conditioning and post-traumatic stress disorder - a model systems approach.
Schreurs, Bernard G; Burhans, Lauren B
2015-01-01
Not everyone exposed to trauma suffers flashbacks, bad dreams, numbing, fear, anxiety, sleeplessness, hyper-vigilance, hyperarousal, or an inability to cope, but those who do may suffer from post-traumatic stress disorder (PTSD). PTSD is a major physical and mental health problem for military personnel and civilians exposed to trauma. There is still debate about the incidence and prevalence of PTSD especially among the military, but for those who are diagnosed, behavioral therapy and drug treatment strategies have proven to be less than effective. A number of these treatment strategies are based on rodent fear conditioning research and are capable of treating only some of the symptoms because the extinction of fear does not deal with the various forms of hyper-vigilance and hyperarousal experienced by people with PTSD. To help address this problem, we have developed a preclinical eyeblink classical conditioning model of PTSD in which conditioning and hyperarousal can both be extinguished. We review this model and discuss findings showing that unpaired stimulus presentations can be effective in reducing levels of conditioning and hyperarousal even when unconditioned stimulus intensity is reduced to the point where it is barely capable of eliciting a response. These procedures have direct implications for the treatment of PTSD and could be implemented in a virtual reality environment.
Biomechanical Analysis of Locust Jumping in a Physically Realistic Virtual Environment
NASA Astrophysics Data System (ADS)
Cofer, David; Cymbalyuk, Gennady; Heitler, William; Edwards, Donald
2008-03-01
The biomechanical and neural components that underlie locust jumping have been extensively studied. Previous research suggested that jump energy is stored primarily in the extensor apodeme, and in a band of cuticle called the semi-lunar process (SLP). As it has thus far proven impossible to experimentally alter the SLP without rendering a locust unable to jump, it has not been possible to test whether the energy stored in the SLP has a significant impact on the jump. To address problems such as this we have developed a software toolkit, AnimatLab, which allows researchers to build and test virtual organisms. We used this software to build a virtual locust, and then asked how the SLP is utilized during jumping. The results show that without the SLP the jump distance was reduced by almost half. Further, the simulations were also able to show that loss of the SLP had a significant impact on the final phase of the jump. We are currently working on postural control mechanisms for targeted jumping in locust.
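The role of stored elastic energy in jump distance can be illustrated with a back-of-the-envelope ballistic calculation: the energy released at take-off sets the take-off speed, and range scales linearly with that energy for a fixed launch angle. The sketch below is a simplified worked example (the mass, energy values, and 45° launch angle are illustrative assumptions, not parameters from the AnimatLab model) showing why removing roughly half of the stored energy roughly halves the distance.

```python
import math

def jump_range(energy_j, mass_kg, launch_angle_deg=45.0, g=9.81):
    """Ideal ballistic range for a point mass launched with the given kinetic energy."""
    v2 = 2.0 * energy_j / mass_kg                      # v^2 from E = 1/2 m v^2
    return v2 * math.sin(math.radians(2 * launch_angle_deg)) / g

mass = 0.002                 # assumed locust mass: 2 g
e_total = 0.009              # assumed stored energy with SLP intact (J)
e_no_slp = 0.005             # assumed stored energy without the SLP contribution (J)

print(f"with SLP:    {jump_range(e_total, mass):.2f} m")
print(f"without SLP: {jump_range(e_no_slp, mass):.2f} m")
# Because range is proportional to released energy, removing the SLP's share
# of the stored energy reduces the jump distance by the same fraction.
```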
Coupled auralization and virtual video for immersive multimedia displays
NASA Astrophysics Data System (ADS)
Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian
2003-04-01
The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
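At its simplest, the direct sound and a handful of early reflections for a momentarily static source position can be rendered by delaying and attenuating the dry signal according to each propagation path length. The sketch below (NumPy only) illustrates the idea; the integer-sample delays, 1/r attenuation, and the listed path lengths are simplifying assumptions rather than the system's actual image-source and reverberation processing.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def render_paths(dry, fs, paths):
    """Sum delayed and attenuated copies of `dry` for each propagation path.

    `paths` is a list of (path_length_m, reflection_gain) tuples; the first
    entry is typically the direct path with gain 1.0.
    """
    max_delay = int(round(max(p[0] for p in paths) / C * fs))
    out = np.zeros(len(dry) + max_delay)
    for length_m, gain in paths:
        delay = int(round(length_m / C * fs))                    # integer-sample delay
        out[delay:delay + len(dry)] += gain * dry / max(length_m, 1e-3)   # 1/r attenuation
    return out

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.05, 1 / fs)
    click = np.sin(2 * np.pi * 1000 * t) * np.hanning(t.size)    # short test burst
    # Direct path plus three assumed early reflections (longer paths, reduced gain).
    wet = render_paths(click, fs, [(2.0, 1.0), (5.5, 0.6), (7.3, 0.5), (9.1, 0.4)])
    print(wet.shape, float(np.abs(wet).max()))
```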
The development of a collaborative virtual environment for finite element simulation
NASA Astrophysics Data System (ADS)
Abdul-Jalil, Mohamad Kasim
Communication between geographically distributed designers has been a major hurdle in traditional engineering design. Conventional methods of communication, such as video conferencing, telephone, and email, are less efficient, especially when dealing with complex design models. Complex shapes, intricate features and hidden parts are often difficult to describe verbally or even using traditional 2-D or 3-D visual representations. Virtual Reality (VR) and Internet technologies offer substantial potential to bridge this communication barrier. VR technology allows designers to immerse themselves in a virtual environment and to view and manipulate a model just as in real life. Fast Internet connectivity has enabled rapid data transfer between remote locations. Although various collaborative virtual environment (CVE) systems have been developed in the past decade, they rely on high-end technology that is not accessible to typical designers. The objective of this dissertation is to discover and develop a new approach to increase the efficiency of the design process, particularly for large-scale applications wherein participants are geographically distributed. A multi-platform and easily accessible collaborative virtual environment (CVRoom) is developed to accomplish the stated research objective. Geographically dispersed designers can meet in a single shared virtual environment to discuss issues pertaining to the engineering design process and to make trade-off decisions more quickly than before, thereby speeding the entire process. This 'faster' design process is achieved through the development of capabilities that better enable the multidisciplinary modeling and trade-off decisions that are so critical before launching into a formal detailed design. The features of the environment developed as a result of this research include the ability to view design models, use voice interaction, and link engineering analysis modules (such as the Finite Element Analysis module demonstrated in this work). One of the major issues in developing a CVE system for engineering design is obtaining pertinent simulation results in real time, so that designers can make decisions based on these results quickly. For example, in a finite element analysis, if a design model is changed or perturbed, the analysis results must be obtained in real time or near real time to make the virtual meeting environment realistic. In this research, a finite difference-based Design Sensitivity Analysis (DSA) approach is employed to approximate structural responses (e.g., stress, displacement) and to demonstrate the applicability of CVRoom for engineering design trade-offs. This DSA approach provides fast approximations and is well suited to the virtual meeting environment, where fast response time is required. The DSA-based approach is tested on several example problems to show its applicability and limitations. This dissertation demonstrates that an increase in efficiency and a reduction in the time required for a complex design process can be accomplished using the approach developed in this research. Several implementations of CVRoom by students working on common design tasks were investigated. All participants confirmed their preference for the collaborative virtual environment developed in this dissertation work (CVRoom) over other modes of interaction.
It is proposed here that CVRoom is representative of the type of collaborative virtual environment that will be used by most designers in the future to reduce the time required in a design cycle and thereby reduce the associated cost.
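The finite difference-based design sensitivity idea can be sketched in a few lines: perturb a design variable, re-evaluate the response once, and use the resulting first-order derivative to predict responses for nearby designs without re-running the full analysis. The example below (plain Python) demonstrates the approximation; the closed-form cantilever tip-deflection formula stands in for a real finite element solve, and all dimensions and loads are illustrative assumptions.

```python
def tip_deflection(thickness_m, width_m=0.1, length_m=0.5, load_n=1000.0, e_pa=2.1e11):
    """Cantilever tip deflection delta = P L^3 / (3 E I); a stand-in for an FE analysis."""
    inertia = width_m * thickness_m ** 3 / 12.0
    return load_n * length_m ** 3 / (3.0 * e_pa * inertia)

def forward_difference_sensitivity(func, x, rel_step=1e-3):
    """d(response)/d(design variable) by a forward finite difference."""
    h = rel_step * x
    return (func(x + h) - func(x)) / h

# Baseline design and its sensitivity with respect to plate thickness.
t0 = 0.020                                    # assumed baseline thickness, 20 mm
d0 = tip_deflection(t0)
ddelta_dt = forward_difference_sensitivity(tip_deflection, t0)

# First-order prediction for a perturbed design, as a CVE could return in near real time.
t_new = 0.022
predicted = d0 + ddelta_dt * (t_new - t0)
print(f"baseline:  {d0 * 1000:.2f} mm")
print(f"predicted: {predicted * 1000:.2f} mm   exact: {tip_deflection(t_new) * 1000:.2f} mm")
```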
Stereoscopic augmented reality with pseudo-realistic global illumination effects
NASA Astrophysics Data System (ADS)
de Sorbier, Francois; Saito, Hideo
2014-03-01
Recently, augmented reality has become very popular and has appeared in our daily life in gaming, guidance systems and mobile phone applications. However, inserting objects in such a way that their appearance seems natural is still an issue, especially in an unknown environment. This paper presents a framework that demonstrates the capabilities of Kinect for convincing augmented reality in an unknown environment. Rather than pre-computing a reconstruction of the scene, as proposed by most previous methods, we propose a dynamic capture of the scene that allows the system to adapt to live changes in the environment. Our approach, based on the update of an environment map, can also detect the position of the light sources. Combining information from the environment map, the light sources and the camera tracking, we can display virtual objects on stereoscopic devices with global illumination effects such as diffuse and mirror reflections, refractions and shadows in real time.
Exploiting Textured 3D Models for Developing Serious Games
NASA Astrophysics Data System (ADS)
Kontogianni, G.; Georgopoulos, A.
2015-08-01
Digital technologies have significantly affected many fields of computer graphics, such as games and especially serious games. These games are usually used for educational purposes in many fields such as health care, military applications, education, government, etc. Digital Cultural Heritage in particular is a scientific area in which serious games are applied, and many applications have appeared in the related literature lately. Realistic 3D textured models produced using different photogrammetric methods can be a useful tool for creating serious game applications, making the final result more realistic and closer to reality. The basic goal of this paper is to show how 3D textured models produced by photogrammetric methods can be useful for developing a more realistic environment for a serious game. This project aims at the creation of an educational game for the Ancient Agora of Athens. The 3D models used vary not only in their production methods (i.e. time-of-flight laser scanning, Structure from Motion, virtual historical reconstruction, etc.) but also in their era, as some of them are illustrated according to their existing situation and others according to how these monuments looked in the past. The Unity 3D® game development environment was used to create this application, in which all the models were inserted in the same file format. For the application, two diachronic virtual tours of the Athenian Agora were produced: the first illustrates the Agora as it is today and the second as it was in the 2nd century A.D. The future perspective for the evolution of this game is also presented, which includes the addition of questions that the user will be able to answer. Finally, an evaluation is scheduled to be performed at the end of the project.
ERIC Educational Resources Information Center
Santoro, Marina; Mazzotti, Marco
2006-01-01
Hyper-TVT is a computer-aided education system that has been developed at the Institute of Process Engineering at the ETH Zurich. The aim was to create an interactive learning environment for chemical and process engineering students. The topics covered are the most important multistage separation processes, i.e. fundamentals of separation…
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen
2002-02-01
In 2004, the European COLUMBUS Module is to be attached to the International Space Station. On the way to the successful planning, deployment and operation of the module, computer-generated and animated models are being used to optimize performance. Under contract to the German Space Agency DLR, it has become IRF's task to provide a Projective Virtual Reality system: a virtual world built after the planned layout of the COLUMBUS module that lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise the possibility of distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Because the virtual world can be shared, cooperative operations can be practiced easily, and trainers and trainees can work together more effectively in the same virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online in order to interact realistically with the science-reference model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs of the science reference model hardware; the measured data is then in turn fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data through the use of metaphors and augmentations of the virtual world may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Currently, finishing touches are being put to the system. In November 2001 the virtual world shall be operational, so that besides the design and the key ideas, first experimental results can be presented.
Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.
Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang
2007-01-01
Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.
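A very cheap way to make a sound event traceable over ordinary stereo headphones, as an alternative to full HRTF-based rendering, is to pan the signal between the ears from the source azimuth relative to the tracked head orientation, optionally adding an interaural time difference. The sketch below (NumPy) shows the idea; the constant-power panning law and the 0.09 m head-radius ITD approximation are generic textbook simplifications, not the system evaluated in the paper.

```python
import numpy as np

def pan_stereo(mono, fs, azimuth_deg, head_radius_m=0.09, c=343.0):
    """Pan a mono signal to stereo using constant-power gains plus a crude ITD.

    azimuth_deg: source direction relative to the listener's nose
                 (0 = front, +90 = right, -90 = left).
    """
    az = np.radians(np.clip(azimuth_deg, -90.0, 90.0))
    # Constant-power panning: map azimuth to a 0..pi/2 panning angle.
    theta = (az + np.pi / 2) / 2.0
    gain_l, gain_r = np.cos(theta), np.sin(theta)
    # Interaural time difference (Woodworth-style approximation).
    itd = head_radius_m / c * (az + np.sin(az))
    shift = int(round(abs(itd) * fs))
    if itd > 0:   # source on the right: delay the left ear
        left = np.concatenate([np.zeros(shift), mono])
        right = np.concatenate([mono, np.zeros(shift)])
    else:         # source on the left (or centred): delay the right ear
        left = np.concatenate([mono, np.zeros(shift)])
        right = np.concatenate([np.zeros(shift), mono])
    return gain_l * left, gain_r * right

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.2, 1 / fs)
    beep = np.sin(2 * np.pi * 660 * t)
    left, right = pan_stereo(beep, fs, azimuth_deg=45.0)   # event to the listener's right
    print(f"L/R RMS ratio: {np.sqrt((left**2).mean() / (right**2).mean()):.2f}")
```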
NASA Astrophysics Data System (ADS)
Chambelland, Jean-Christophe; Gesquière, Gilles
2012-03-01
Due to advances in computer graphics and network speed, it is now possible to navigate a 3D virtual world in real time. This technology, used for example in computer games, has been adapted for training systems. In this context, a collaborative serious game for urban crisis management called SIMFOR has been developed in France. The project has been designed for intensive, realistic training and consequently must allow the players to create new urban operational theatres. To this end, importing, structuring, processing and exchanging 3D urban data remains an important underlying problem. This communication will focus on the design of the 3D Environment Editor (EE) and the related data processes needed to prepare the data flow to be exploitable by the runtime environment of SIMFOR. We will use solutions proposed by the Open Geospatial Consortium (OGC) to aggregate and share data. A presentation of the proposed architecture will be given. The overall design of the EE and some strategies for efficiently analyzing, displaying and exporting large amounts of urban CityGML information will be presented. An example illustrating the potential of the EE and the reliability of the proposed data processing will be proposed.
Transforming the Classroom for Collaborative Learning in the 21st Century
ERIC Educational Resources Information Center
Christen, Amy
2009-01-01
Today's hyper-connected students live in a world of instant interpersonal communications and virtually infinite access to information and educational resources. But this networked world, and the powerful learning tools it offers, has yet to penetrate the typical classroom. In many ways educational institutions are spinning their curricular wheels,…
Evidence of Virtual Patients as a Facilitative Learning Tool on an Anesthesia Course
ERIC Educational Resources Information Center
Leung, Joseph Y. C.; Critchley, Lester A. H.; Yung, Alex L. K.; Kumta, Shekhar M.
2015-01-01
Virtual patients are computerised representations of realistic clinical cases. They were developed to teach clinical reasoning skills through delivery of multiple standardized patient cases. The anesthesia course at The Chinese University of Hong Kong developed two novel types of virtual patients, formative assessment cases studies and storyline,…
Resting Energy Expenditure of Rats Acclimated to Hyper-Gravity
NASA Technical Reports Server (NTRS)
Wade, Charles E.; Moran, Megan M.; Oyama, Jiro; Schwenke, David; Dalton, Bonnie P. (Technical Monitor)
2000-01-01
To determine the influence of body mass and age on resting energy expenditure (EE) following acclimation to hyper-gravity, oxygen consumption (VO2) and carbon dioxide production (VCO2) were measured to calculate resting EE in male rats, ages 40 to 400 days, acclimated to 2.3 or 4.1 G for a minimum of two weeks. Animals were maintained on a centrifuge to produce the hyper-gravity environment. Measurements were made over three hours in hyper-gravity during the period when the lights were on, the inactive period for rats. In rats matched for body mass (approximately 400 g), hyper-gravity increased VO2 by 18% and VCO2 by 27% compared to controls, resulting in an increase in the respiratory exchange ratio (RER) from 0.80 to 0.87. Resting EE increased with an increase in gravity. This increase was greater when the mass of the rat was larger. Resting EE for 400 g animals increased from 47 +/- 1 kcal/kg/day at 1 G to 57 +/- 1.5 and 58 +/- 2.2 kcal/kg/day at 2.3 and 4.1 G, respectively. There was no difference between the two hyper-gravity environments. When differences in the age of the animals were accounted for, resting EE adjusted for body mass was increased by over 36% in older animals due to exposure to hyper-gravity. Acclimation to hyper-gravity increases the resting EE of rats, dependent upon body mass and age, and appears to alter substrate metabolism. Increasing the level of hyper-gravity from 2.3 to 4.1 G produced no further changes, raising questions as to a dose effect of gravity level on resting metabolism.
Zary, Nabil; Johnson, Gunilla; Boberg, Jonas; Fors, Uno GH
2006-01-01
Background The Web-based Simulation of Patients (Web-SP) project was initiated in order to facilitate the use of realistic and interactive virtual patients (VP) in medicine and healthcare education. Web-SP focuses on moving beyond the technology-savvy teachers, when integrating simulation-based education into health sciences curricula, by making the creation and use of virtual patients easier. The project strives to provide a common generic platform for design/creation, management, evaluation and sharing of web-based virtual patients. The aim of this study was to evaluate whether it was possible to develop a web-based virtual patient case simulation environment where the entire case authoring process might be handled by teachers and which would be flexible enough to be used in different healthcare disciplines. Results The Web-SP system was constructed to support easy authoring, management and presentation of virtual patient cases. The case authoring environment was found to make it possible for teachers to create full-fledged patient cases without the assistance of computer specialists. Web-SP was successfully implemented at several universities by taking into account key factors such as cost, access, security, scalability and flexibility. Pilot evaluations in medical, dentistry and pharmacy courses show that students regarded Web-SP as easy to use, engaging and of educational value. Cases adapted for all three disciplines were judged to be of significant educational value by the course leaders. Conclusion The Web-SP system seems to fulfil the aim of providing a common generic platform for creation, management and evaluation of web-based virtual patient cases. The responses regarding the authoring environment indicated that the system might be user-friendly enough to appeal to a majority of the academic staff. In terms of implementation strengths, Web-SP seems to fulfil most needs of course directors and teachers from various educational institutions and disciplines. The system is currently in use or under implementation in several healthcare disciplines at more than ten universities worldwide. Future aims include structuring the exchange of cases between teachers and academic institutions by building a VP library function. We intend to follow up the positive results presented in this paper with other studies looking at learning outcomes, critical thinking and patient management. The potential of Web-SP as an assessment tool will also be studied. More information about Web-SP: http://websp.lime.ki.se. PMID:16504041
Zary, Nabil; Johnson, Gunilla; Boberg, Jonas; Fors, Uno G H
2006-02-21
The Web-based Simulation of Patients (Web-SP) project was initiated in order to facilitate the use of realistic and interactive virtual patients (VP) in medicine and healthcare education. Web-SP focuses on moving beyond the technology-savvy teachers, when integrating simulation-based education into health sciences curricula, by making the creation and use of virtual patients easier. The project strives to provide a common generic platform for design/creation, management, evaluation and sharing of web-based virtual patients. The aim of this study was to evaluate whether it was possible to develop a web-based virtual patient case simulation environment where the entire case authoring process might be handled by teachers and which would be flexible enough to be used in different healthcare disciplines. The Web-SP system was constructed to support easy authoring, management and presentation of virtual patient cases. The case authoring environment was found to make it possible for teachers to create full-fledged patient cases without the assistance of computer specialists. Web-SP was successfully implemented at several universities by taking into account key factors such as cost, access, security, scalability and flexibility. Pilot evaluations in medical, dentistry and pharmacy courses show that students regarded Web-SP as easy to use, engaging and of educational value. Cases adapted for all three disciplines were judged to be of significant educational value by the course leaders. The Web-SP system seems to fulfil the aim of providing a common generic platform for creation, management and evaluation of web-based virtual patient cases. The responses regarding the authoring environment indicated that the system might be user-friendly enough to appeal to a majority of the academic staff. In terms of implementation strengths, Web-SP seems to fulfil most needs of course directors and teachers from various educational institutions and disciplines. The system is currently in use or under implementation in several healthcare disciplines at more than ten universities worldwide. Future aims include structuring the exchange of cases between teachers and academic institutions by building a VP library function. We intend to follow up the positive results presented in this paper with other studies looking at learning outcomes, critical thinking and patient management. The potential of Web-SP as an assessment tool will also be studied. More information about Web-SP: http://websp.lime.ki.se.
[Application of hyper-spectral remote sensing technology in environmental protection].
Zhao, Shao-Hua; Zhang, Feng; Wang, Qiao; Yao, Yun-Jun; Wang, Zhong-Ting; You, Dai-An
2013-12-01
Hyper-spectral remote sensing (RS) technology has been widely used in environmental protection. The present work introduces its recent applications in the RS monitoring of polluting gases, greenhouse gases, algal blooms, the water quality of catchment water environments, the safety of drinking water sources, biodiversity, vegetation classification, soil pollution, and so on. Finally, issues such as the scarcity of hyper-spectral satellites and the limits of data processing and information extraction are discussed. Some proposals are also presented, including developing follow-on satellites to the HJ-1 satellite carrying differential optical absorption spectroscopy, greenhouse gas spectroscopy and hyper-spectral imagers, strengthening the study of hyper-spectral data processing and information extraction, and promoting the construction of an environmental application system.
Virtual reality hardware and graphic display options for brain-machine interfaces
Marathe, Amar R.; Carey, Holle L.; Taylor, Dawn M.
2009-01-01
Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing. PMID:18006069
Recognition profile of emotions in natural and virtual faces.
Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Gur, Ruben C; Mathiak, Klaus
2008-01-01
Computer-generated virtual faces are becoming increasingly realistic, including the simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for the different emotions and different age groups. Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may lead to even better results as compared to trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications.
Recognition Profile of Emotions in Natural and Virtual Faces
Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Gur, Ruben C.; Mathiak, Klaus
2008-01-01
Background Computer-generated virtual faces are becoming increasingly realistic, including the simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for the different emotions and different age groups. Methodology/Principal Findings Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Conclusions/Significance Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may lead to even better results as compared to trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications. PMID:18985152
Virtual-reality-based educational laboratories in fiber optic engineering
NASA Astrophysics Data System (ADS)
Hayes, Dana; Turczynski, Craig; Rice, Jonny; Kozhevnikov, Michael
2014-07-01
Researchers and educators have observed great potential in virtual reality (VR) technology as an educational tool due to its ability to engage and spark interest in students, thus providing them with a deeper form of knowledge about a subject. The focus of this project is to develop an interactive VR educational module, Laser Diode Characteristics and Coupling to Fibers, to integrate into a fiber optics laboratory course. The developed module features a virtual laboratory populated with realistic models of optical devices in which students can set up and perform an optical experiment dealing with laser diode characteristics and fiber coupling. The module contains three increasingly complex levels for students to navigate through, with a short built-in quiz after each level to measure the student's understanding of the subject. Seventeen undergraduate students learned fiber coupling concepts using the designed computer simulation in a non-immersive desktop virtual environment (VE) condition. The analysis of students' responses on the updated pre- and post-tests shows a statistically significant improvement in scores on the post-test compared to the pre-test. In addition, the students' survey responses suggest that they found the module very useful and engaging. The study clearly demonstrated the feasibility of the proposed instructional technology for engineering education, in which both the model of instruction and the enabling technology are equally important, for providing a better learning environment and improving students' conceptual understanding compared to other instructional approaches.
Virtual Reality in Neurointervention.
Ong, Chin Siang; Deib, Gerard; Yesantharao, Pooja; Qiao, Ye; Pakpoor, Jina; Hibino, Narutoshi; Hui, Ferdinand; Garcia, Juan R
2018-06-01
Virtual reality (VR) allows users to experience realistic, immersive 3D virtual environments with the depth perception and binocular field of view of real 3D settings. Newer VR technology has now allowed for interaction with 3D objects within these virtual environments through the use of VR controllers. This technical note describes our preliminary experience with VR as an adjunct tool to traditional angiographic imaging in the preprocedural workup of a patient with a complex pseudoaneurysm. Angiographic MRI data was imported and segmented to create 3D meshes of bilateral carotid vasculature. The 3D meshes were then projected into VR space, allowing the operator to inspect the carotid vasculature using a 3D VR headset as well as interact with the pseudoaneurysm (handling, rotation, magnification, and sectioning) using two VR controllers. 3D segmentation of a complex pseudoaneurysm in the distal cervical segment of the right internal carotid artery was successfully performed and projected into VR. Conventional and VR visualization modes were equally effective in identifying and classifying the pathology. VR visualization allowed the operators to manipulate the dataset to achieve a greater understanding of the anatomy of the parent vessel, the angioarchitecture of the pseudoaneurysm, and the surface contours of all visualized structures. This preliminary study demonstrates the feasibility of utilizing VR for preprocedural evaluation in patients with anatomically complex neurovascular disorders. This novel visualization approach may serve as a valuable adjunct tool in deciding patient-specific treatment plans and selection of devices prior to intervention.
Virtual Laparoscopic Training System Based on VCH Model.
Tang, Jiangzhou; Xu, Lang; He, Longjun; Guan, Songluan; Ming, Xing; Liu, Qian
2017-04-01
Laparoscopy has been widely used to perform abdominal surgeries, as it is advantageous in that patients experience lower post-surgical trauma, shorter convalescence, and less pain compared to traditional surgery. Laparoscopic surgeries require precision; therefore, it is imperative to train surgeons to reduce the risks of the operation. Laparoscopic simulators offer a highly realistic surgical environment by using virtual reality technology and can improve the training efficiency of laparoscopic surgery. This paper presents a virtual laparoscopic surgery system. The proposed system utilizes the Visible Chinese Human (VCH) to construct the virtual models and simulates real-time deformation with both an improved special mass-spring model and morph target animation. Meanwhile, an external device that integrates two five-degrees-of-freedom (5-DOF) manipulators was designed and built to interact with the virtual system. In addition, the proposed system provides a modular tool based on Unity3D to define the functions and features of instruments and organs, which helps users build surgical training scenarios quickly. The proposed virtual laparoscopic training system offers two training modes: skills training and surgery training. In the skills training mode, surgeons are mainly trained in basic operations, such as laparoscopic camera handling, needle handling, grasping, electric coagulation, and suturing. In the surgery training mode, surgeons can practice cholecystectomy and removal of hepatic cysts through guided or non-guided teaching.
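A mass-spring model like the one mentioned here advances each node by accumulating Hooke spring forces from its neighbours plus damping and gravity, then integrating with a small explicit time step. The sketch below (NumPy) shows the basic update loop for a single chain of nodes; the uniform stiffness, symplectic Euler integration, and all constants are illustrative assumptions rather than the simulator's improved formulation.

```python
import numpy as np

def step_mass_spring(pos, vel, rest_len, fixed, k=200.0, damping=0.5, mass=0.01,
                     gravity=(0.0, -9.81, 0.0), dt=1e-3):
    """One explicit (symplectic Euler) step for a chain of point masses joined by springs."""
    forces = np.tile(np.asarray(gravity) * mass, (len(pos), 1))
    for i in range(len(pos) - 1):                           # spring between node i and i+1
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest_len) * d / max(length, 1e-9) # Hooke's law along the spring
        forces[i] += f
        forces[i + 1] -= f
    forces -= damping * vel                                  # simple velocity damping
    vel = vel + dt * forces / mass
    vel[fixed] = 0.0                                         # pinned nodes do not move
    pos = pos + dt * vel
    return pos, vel

if __name__ == "__main__":
    n = 10
    pos = np.column_stack([np.linspace(0, 0.09, n), np.zeros(n), np.zeros(n)])
    vel = np.zeros_like(pos)
    fixed = np.array([0])                                    # anchor the first node
    for _ in range(2000):                                    # simulate 2 s of sagging under gravity
        pos, vel = step_mass_spring(pos, vel, rest_len=0.01, fixed=fixed)
    print("free-end position:", np.round(pos[-1], 3))
```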
Visualized modeling platform for virtual plant growth and monitoring on the internet
NASA Astrophysics Data System (ADS)
Zhou, De-fu; Tian, Feng-qui; Ren, Ping
2009-07-01
Virtual plant growth is a key research topic in Agricultural Information Technology and Computer Graphics. It has been applied in botany, agronomy, environmental sciences, computer sciences and applied mathematics. Modeling leaf color dynamics in plants is of significant importance for realizing virtual plant growth. Using a systematic analysis method and dynamic modeling technology, a SPAD-based leaf color dynamic model was developed to simulate the time-course change characteristics of leaf SPAD on the plant. In addition, the process of plant growth can be computer-simulated using the Virtual Reality Modeling Language (VRML) to establish a vivid and visible model, including shooting, rooting, blooming, as well as the growth of stems and leaves. Under stress conditions, e.g., lack of water, air or nutrient substances, high salt or alkalinity, freezing injury, high temperature, or damage from diseases and insect pests, the changes from the level of the whole plant down to organs, tissues and cells can be computer-simulated, and the physiological and biochemical changes can also be described. When a series of indexes is input by the user, both the overall visual appearance and microscopic changes can be shown. Thus, the model performs well in predicting the growth condition of the plant, laying a foundation for further construction of a virtual plant growth system. The results revealed that realistic physiological and pathological processes of 3D virtual plants can be demonstrated by proper design and effectively realized on the internet.
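As a purely illustrative sketch of what a leaf-color (SPAD) time-course model can look like, the snippet below uses a rise-and-decline curve over thermal time and maps the predicted SPAD value to an RGB green intensity for rendering. The functional form and every coefficient are assumptions for demonstration only, not the model developed in this work.

```python
import numpy as np

def spad_time_course(gdd, spad_max=45.0, rise_rate=0.015, onset_gdd=900.0, decline_rate=0.004):
    """Illustrative SPAD curve: logistic rise after emergence, exponential decline after senescence onset."""
    rise = spad_max / (1.0 + np.exp(-rise_rate * (gdd - 250.0)))
    decline = np.where(gdd > onset_gdd, np.exp(-decline_rate * (gdd - onset_gdd)), 1.0)
    return rise * decline

def spad_to_rgb(spad, spad_max=45.0):
    """Map SPAD (a chlorophyll proxy) to a simple leaf colour: greener when SPAD is high."""
    g = np.clip(spad / spad_max, 0.0, 1.0)
    return (0.2 + 0.6 * (1 - g), 0.35 + 0.5 * g, 0.1)   # (R, G, B) in 0..1

for gdd in (100, 400, 800, 1200, 1600):                 # growing degree days after sowing
    s = spad_time_course(gdd)
    rgb = tuple(round(float(c), 2) for c in spad_to_rgb(s))
    print(f"GDD {gdd:4d}: SPAD {s:5.1f}  RGB {rgb}")
```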
WeaVR: a self-contained and wearable immersive virtual environment simulation system.
Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James
2015-03-01
We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus.
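Redirected walking of the kind used to keep WeaVR users inside a field typically works by injecting a small rotation gain, so that the virtual heading turns slightly more (or less) than the physical head, and by adding a slow curvature during walking, both kept below detection thresholds. The sketch below (plain Python) shows the core heading update; the gain value and curvature radius are generic illustrative numbers, not WeaVR's controller.

```python
import math

def redirect_step(virtual_heading_deg, physical_yaw_delta_deg, step_length_m,
                  rotation_gain=1.1, curvature_radius_m=22.0):
    """Update the virtual heading for one tracker update.

    rotation_gain > 1 amplifies physical turns; the curvature term bends straight
    physical walking onto an arc of the given radius in the virtual world.
    """
    # Amplify (or attenuate) the user's physical rotation.
    virtual_heading_deg += rotation_gain * physical_yaw_delta_deg
    # Inject curvature proportional to the distance walked this update.
    virtual_heading_deg += math.degrees(step_length_m / curvature_radius_m)
    return virtual_heading_deg % 360.0

if __name__ == "__main__":
    heading = 0.0
    # User walks straight ahead (no physical turning) in 0.7 m steps.
    for _ in range(10):
        heading = redirect_step(heading, physical_yaw_delta_deg=0.0, step_length_m=0.7)
    print(f"virtual heading after 7 m of straight walking: {heading:.1f} deg")
```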
Affective Interaction with a Virtual Character Through an fNIRS Brain-Computer Interface.
Aranyi, Gabor; Pecune, Florian; Charles, Fred; Pelachaud, Catherine; Cavazza, Marc
2016-01-01
Affective brain-computer interfaces (BCI) harness neuroscience knowledge to develop affective interaction from first principles. In this article, we explore affective engagement with a virtual agent through neurofeedback (NF). We report an experiment in which subjects engage with a virtual agent by expressing positive attitudes towards her under an NF paradigm. As affective input we use asymmetric activity in the dorsolateral prefrontal cortex (DL-PFC), which has previously been found to be related to the high-level affective-motivational dimension of approach/avoidance. The magnitude of left-asymmetric DL-PFC activity, measured using functional near-infrared spectroscopy (fNIRS) and treated as a proxy for approach, is mapped onto a control mechanism for the virtual agent's facial expressions, in which action units (AUs) are activated through a neural network. We carried out an experiment with 18 subjects, which demonstrated that subjects are able to successfully engage with the virtual agent by controlling their mental disposition through NF, and that they perceived the agent's responses as realistic and consistent with their projected mental disposition. This interaction paradigm is particularly relevant in the case of affective BCI, as it facilitates the volitional activation of specific areas normally not under conscious control. Overall, our contribution reconciles a model of affect derived from brain metabolic data with an ecologically valid, yet computationally controllable, virtual affective communication environment.
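As a rough illustration of the mapping described above (left-asymmetric DL-PFC activity treated as a proxy for approach and driving the agent's action units), the sketch below computes a laterality index from left/right fNIRS channels and passes it through a tiny fixed-weight readout to AU intensities. The channel handling, weights and choice of AUs are assumptions for illustration, not the authors' trained network.

    import numpy as np

    def asymmetry_index(oxy_left, oxy_right, eps=1e-9):
        """Laterality index of oxy-Hb over left vs right DL-PFC channels (illustrative)."""
        left, right = np.mean(oxy_left), np.mean(oxy_right)
        return (left - right) / (abs(left) + abs(right) + eps)

    def au_activation(asym, weights=np.array([[1.2], [0.8], [0.5]]),
                      bias=np.array([-0.2, -0.1, 0.0])):
        """Map the scalar asymmetry to intensities of a few smile-related AUs
        (e.g. AU6, AU12, AU25) with a one-layer sigmoid readout (illustrative)."""
        z = weights @ np.array([asym]) + bias
        return 1.0 / (1.0 + np.exp(-z))            # AU intensities in [0, 1]

    aus = au_activation(asymmetry_index(np.random.rand(50), np.random.rand(50)))

In a closed-loop NF session this mapping would be recomputed on every fNIRS update so that stronger approach-related asymmetry produces a visibly warmer expression.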
Real-time mandibular angle reduction surgical simulation with haptic rendering.
Wang, Qiong; Chen, Hui; Wu, Wen; Jin, Hai-Yang; Heng, Pheng-Ann
2012-11-01
Mandibular angle reduction is a popular and efficient procedure widely used to alter the facial contour. The primary surgical instruments employed in the surgery, the reciprocating saw and the round burr, share a common feature: they operate at high speed. Generally, inexperienced surgeons need long practice to learn how to minimize the risks caused by uncontrolled contacts and cutting motions when manipulating instruments with high-speed reciprocation or rotation. A virtual reality-based surgical simulator for mandibular angle reduction was designed and implemented on a CUDA-based platform in this paper. High-fidelity visual and haptic feedback is provided to enhance perception in a realistic virtual surgical environment. Impulse-based haptic models were employed to simulate the contact forces and torques on the instruments; they provide a convincing haptic sensation for surgeons controlling the instruments under different reciprocation or rotation velocities. Real-time methods for bone removal and reconstruction during surgical procedures are proposed to support realistic visual feedback. The simulated contact forces were verified by comparison against actual force data measured on a purpose-built mechanical platform. An empirical study based on patient-specific data was conducted to evaluate the ability of the proposed system to train surgeons with varying levels of experience. The results confirm the validity of our simulator.
The development and evaluation of a medical imaging training immersive environment
Bridge, Pete; Gunn, Therese; Kastanis, Lazaros; Pack, Darren; Rowntree, Pamela; Starkey, Debbie; Mahoney, Gaynor; Berry, Clare; Braithwaite, Vicki; Wilson-Stewart, Kelly
2014-01-01
Introduction A novel realistic 3D virtual reality (VR) application has been developed to allow medical imaging students at Queensland University of Technology to practice radiographic techniques independently outside the usual radiography laboratory. Methods A flexible agile development methodology was used to create the software rapidly and effectively. A 3D gaming environment and realistic models were used to engender presence in the software while tutor-determined gold standards enabled students to compare their performance and learn in a problem-based learning pedagogy. Results Students reported high levels of satisfaction and perceived value and the software enabled up to 40 concurrent users to prepare for clinical practice. Student feedback also indicated that they found 3D to be of limited value in the desktop version compared to the usual 2D approach. A randomised comparison between groups receiving software-based and traditional practice measured performance in a formative role play with real equipment. The results of this work indicated superior performance with the equipment for the VR trained students (P = 0.0366) and confirmed the value of VR for enhancing 3D equipment-based problem-solving skills. Conclusions Students practising projection techniques virtually performed better at role play assessments than students practising in a traditional radiography laboratory only. The application particularly helped with 3D equipment configuration, suggesting that teaching 3D problem solving is an ideal use of such medical equipment simulators. Ongoing development work aims to establish the role of VR software in preparing students for clinical practice with a range of medical imaging equipment. PMID:26229652
ERIC Educational Resources Information Center
Muhlberger, Andreas; Bulthoff, Heinrich H.; Wiedemann, Georg; Pauli, Paul
2007-01-01
An overall assessment of phobic fear requires not only a verbal self-report of fear but also an assessment of behavioral and physiological responses. Virtual reality can be used to simulate realistic (phobic) situations and therefore should be useful for inducing emotions in a controlled, standardized way. Verbal and physiological fear reactions…
The Virtual Genetics Lab II: Improvements to a Freely Available Software Simulation of Genetics
ERIC Educational Resources Information Center
White, Brian T.
2012-01-01
The Virtual Genetics Lab II (VGLII) is an improved version of the highly successful genetics simulation software, the Virtual Genetics Lab (VGL). The software allows students to use the techniques of genetic analysis to design crosses and interpret data to solve realistic genetics problems involving a hypothetical diploid insect. This is a brief…
de Tommaso, Marina; Ricci, Katia; Delussi, Marianna; Montemurno, Anna; Vecchio, Eleonora; Brunetti, Antonio; Bevilacqua, Vitoantonio
2016-01-01
We propose a virtual reality (VR) model reproducing a house environment, in which color modification of target places, achievable by home automation in a real setting, was tested by means of a P3b paradigm. The target place (a bathroom door) was designed to be recognized during virtual wayfinding in a realistic reproduction of a house environment. Different color and luminance conditions, easily obtained in the real setting from a remote home automation control, were applied to the target and standard places: all doors were illuminated in white (W), and only target doors were colored with a green (G) or red (R) spotlight. Three different virtual environments (VE) were depicted, with the bathroom located in the aisle (A), living room (L) and bedroom (B). EEG was recorded from 57 scalp electrodes in 10 healthy subjects in the 60-80 year age range (O, old group) and 12 normal cases in the 20-30 year age range (Y, young group). In the young group, all target stimuli determined a significant increase in P3b amplitude on the parietal, occipital and central electrodes compared to the frequent-stimulus condition, regardless of the color of the target door. In the elderly group, the P3b evoked by the green and red targets was significantly different from that of the frequent stimulus on the parietal, occipital and central derivations, whereas the white stimulus did not evoke a significantly larger P3b than the frequent stimulus. The modulation of P3b amplitude obtained by changing the color and luminance of the target place suggests that cortical resources able to compensate for the age-related progressive loss of cognitive performance need to be facilitated even in normal elderly people. Event-related responses obtained in virtual reality may be a reliable method to test how well environments accommodate age-related cognitive changes.
Sonic intelligence as a virtual therapeutic environment.
Tarnanas, Ioannis; Adam, Dimitrios
2003-06-01
This paper reports the results of a research project comparing one virtual collaborative environment with first-person visual immersion (first-perspective interaction) and a second one in which the user interacts through a sound-kinetic virtual representation of himself (avatar), as a stress-coping environment for real-life situations. Recent developments in coping research propose a shift from a trait-oriented approach to coping towards a more situation-specific treatment. We defined a real-life situation as a target-oriented situation that demands a complex coping-skills inventory of high self-efficacy and internal or external "locus of control" strategies. The participants were 90 normal adults with healthy or impaired coping skills, 25-40 years of age, randomly spread across the groups; groups had equal numbers of participants and gender balance within groups. All groups went through two phases. In Phase I (Solo), each participant was assessed using a three-stage assessment inspired by the transactional stress theory of Lazarus and the stress inoculation theory of Meichenbaum: each participant was given a coping-skills measurement over the time course of various hypothetical stressful encounters performed under two conditions plus a control condition. In Condition A, the participant was given a virtual stress assessment scenario presented from a first-person perspective (VRFP). In Condition B, the participant was given a virtual stress assessment scenario presented through a behaviorally realistic, motion-controlled avatar with sonic feedback (VRSA). In Condition C, the no-treatment condition (NTC), the participant received only an interview. In Phase II, all three groups were mixed and performed the same tasks, but with two participants working in pairs. The results showed that the VRSA group performed notably better in terms of cognitive appraisals, emotions and attributions than the other two groups in Phase I (VRSA, 92%; VRFP, 85%; NTC, 34%). In Phase II, the difference again favored the VRSA group over the other two. These results indicate that a virtual collaborative environment appears to be a consistent coping environment, tapping two classes of stress: (a) aversive or ambiguous situations, and (b) loss or failure situations, in relation to stress inoculation theory. In terms of coping behaviors, a distinction is made between self-directed and environment-directed strategies. A great advantage of the virtual collaborative environment with the behaviorally enhanced sound-kinetic avatar is the consideration of team coping intentions at different stages. Even if the aim is to tap transactional processes in real-life situations, it may be better to conduct research using a sound-kinetic, avatar-based collaborative environment than a virtual first-person perspective scenario alone. The VE consisted of two dual-processor PC systems, a video splitter, a digital camera and two stereoscopic CRT displays. The system was programmed in C++ with VRScape Immersive Cluster from VRCO, creating an artificial environment that encodes the user's motion from a video camera targeted at the user's face and from physiological sensors attached to the body.
[Initial results with the Munich knee simulator].
Frey, M; Riener, R; Burgkart, R; Pröll, T
2002-01-01
In orthopaedics, more than 50 different clinical knee joint evaluation tests exist that must be practised during orthopaedic education. Often it is not possible to obtain sufficient practical training in a clinical environment. Training can be improved with virtual reality technology. Within the Munich Knee Joint Simulation project, an artificial leg with anatomical properties is attached to an industrial robot via a force-torque sensor. The recorded forces and torques are the input for a simple biomechanical model of the human knee joint. The robot is controlled in such a way that the user has the feeling of moving a real leg. The leg is embedded in a realistic environment with a couch and a patient lying on it.
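The abstract describes a robot controlled so that the user feels as if moving a real leg: measured forces and torques feed a biomechanical knee model whose motion the robot then renders. A generic admittance-control sketch of that loop is shown below; the actual Munich simulator's biomechanical model and controller are not given in the abstract, so the 1-DOF dynamics and gains here are illustrative assumptions.

    def admittance_step(force, vel, pos, mass=2.0, damping=15.0, stiffness=40.0,
                        rest_pos=0.0, dt=0.002):
        """One step of a 1-DOF admittance law: measured force -> commanded motion.

        The virtual dynamics  m*a + b*v + k*(x - x0) = F  stand in for a
        (much richer) biomechanical knee model.
        """
        acc = (force - damping * vel - stiffness * (pos - rest_pos)) / mass
        vel = vel + dt * acc
        pos = pos + dt * vel
        return vel, pos          # commanded joint velocity/position for the robot

    # usage: feed each force-torque sensor reading once per control cycle
    v, x = 0.0, 0.1
    for f in (5.0, 5.0, 2.0, 0.0):
        v, x = admittance_step(f, v, x)

The stiffer the virtual model, the more the leg resists the examiner's hand, which is exactly the kind of parameter a knee-laxity test would probe.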
ERIC Educational Resources Information Center
Hamilton, Chuck; Langlois, Kristen; Watson, Henry
2010-01-01
Informal learning is the biggest undiscovered treasure in today's workplace. Marcia Conner, author and often-cited voice for workplace learning, suggests that "Informal learning accounts for over 75% of the learning taking place in organizations today" (1997). IBM understands the value of the hyper-connected informal workplace and…
Engineering as a Social Activity: Preparing Engineers to Thrive in the Changing World of Work
ERIC Educational Resources Information Center
Joyner, Fredricka F.; Mann, Derek T. Y.; Harris, Todd
2012-01-01
Key macro-trends are combining to create a new work context for the practice of engineering. Telecommuting and virtual teams create myriad possibilities and challenges related to managing work and workers. Social network technology tools allow for unprecedented global, 24/7 collaboration. Globalization has created hyper-diverse organizations,…
Intelligent tutoring using HyperCLIPS
NASA Technical Reports Server (NTRS)
Hill, Randall W., Jr.; Pickering, Brad
1990-01-01
HyperCard is a popular hypertext-like system used for building user interfaces to databases and other applications, and CLIPS is a highly portable government-owned expert system shell. We developed HyperCLIPS in order to fill a gap in the U.S. Army's computer-based instruction tool set; it was conceived as a development environment for building adaptive practical exercises for subject-matter problem-solving, though it is not limited to this approach to tutoring. Once HyperCLIPS was developed, we set out to implement a practical exercise prototype using HyperCLIPS in order to demonstrate the following concepts: learning can be facilitated by doing; student performance evaluation can be done in real-time; and the problems in a practical exercise can be adapted to the individual student's knowledge.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lin, Yuh-Lang
2004-01-01
During the research project, sounding datasets were generated for the regions surrounding 9 major airports: Dallas, TX; Boston, MA; New York, NY; Chicago, IL; St. Louis, MO; Atlanta, GA; Miami, FL; San Francisco, CA; and Los Angeles, CA. The numerical simulation of winter and summer environments during which no instrument-flight-rule impact was occurring at these 9 terminals was performed using the most contemporary version of the Terminal Area PBL Prediction System (TAPPS) model, nested from 36 km to 6 km to 1 km horizontal resolution with very detailed vertical resolution in the planetary boundary layer. The soundings from the 1 km model were archived at 30 minute intervals over a 24 hour period, and the vertical dependent variables as well as derived quantities, i.e., 3-dimensional wind components, temperatures, pressures, mixing ratios, turbulence kinetic energy and eddy dissipation rates, were then interpolated to 5 m vertical resolution up to 1000 m above ground level. After partial validation against field experiment datasets for Dallas, as well as against larger-scale and much coarser resolution observations at the other 8 airports, these sounding datasets were sent to NASA for use in the Virtual Air Space and Modeling program. The datasets are intended to define representative airport weather environments for diagnosing the response of simulated wake vortices to realistic atmospheric conditions. These virtual datasets are based on large-scale observed atmospheric initial conditions that are dynamically interpolated in space and time. The 1 km nested-grid simulated datasets provide a relatively coarse and highly smoothed representation of airport meteorological conditions; details concerning the airport surface forcing are virtually absent. Nevertheless, the simulated fields were compared with the observed background atmospheric processes and found to accurately replicate the flows surrounding the airports, both where coarse verification data were available and where airport-scale datasets were available.
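The abstract notes that the archived soundings were interpolated to 5 m vertical resolution up to 1000 m above ground level. A minimal sketch of that regridding step for a single profile is shown below using simple linear interpolation; variable names and the handling of the model's vertical coordinate are assumptions, not the TAPPS post-processing code.

    import numpy as np

    def regrid_profile(z_agl, values, z_top=1000.0, dz=5.0):
        """Linearly interpolate one sounding variable onto a 5 m vertical grid (AGL)."""
        z_new = np.arange(0.0, z_top + dz, dz)
        order = np.argsort(z_agl)                      # np.interp needs ascending heights
        return z_new, np.interp(z_new, np.asarray(z_agl)[order],
                                np.asarray(values)[order])

    # usage with a toy model-level temperature profile (illustrative values)
    z_model = [10.0, 80.0, 250.0, 600.0, 1200.0]
    t_model = [288.0, 287.2, 285.9, 283.1, 278.5]
    z5, t5 = regrid_profile(z_model, t_model)

The same call would be repeated for winds, mixing ratio, turbulence kinetic energy and eddy dissipation rate at each 30 minute archive time.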
HyperCLIPS: A HyperCard interface to CLIPS
NASA Technical Reports Server (NTRS)
Pickering, Brad; Hill, Randall W., Jr.
1990-01-01
HyperCLIPS combines the intuitive, interactive user interface of the Apple Macintosh(TM) with the powerful symbolic computation of an expert system interpreter. HyperCard(TM) is an excellent environment for quickly developing the front end of an application with buttons, dialogs, and pictures, while the CLIPS interpreter provides a powerful inference engine for complex problem solving and analysis. By integrating HyperCard and CLIPS, the advantages of both packages are made available for a wide range of applications: rapid prototyping of knowledge-based expert systems, interactive simulations of physical systems, and intelligent control of hypertext processes, to name a few. Interfacing HyperCard and CLIPS is natural. HyperCard was designed to be extended through the use of external commands (XCMDs), and CLIPS was designed to be embedded through the use of the I/O router facilities and callable interface routines. With the exception of some technical difficulties which will be discussed later, HyperCLIPS implements this interface in a straightforward manner, using the facilities provided. An XCMD called 'ClipsX' was added to HyperCard to give access to the CLIPS routines: clear, load, reset, and run. An I/O router was added to CLIPS to handle the communication of data between CLIPS and HyperCard.
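The XCMD exposes the CLIPS routines clear, load, reset, and run to HyperTalk scripts. For readers without a classic Macintosh toolchain, the same embedding pattern can be sketched with the third-party clipspy Python bindings; this is only an analogy to the call sequence the XCMD wraps, it assumes clipspy's Environment API, and the rule file name is hypothetical.

    # Rough analogue of the ClipsX call sequence (clear/load/reset/run),
    # sketched with the clipspy bindings rather than the HyperCard XCMD.
    import clips

    env = clips.Environment()
    env.clear()                      # like ClipsX "clear"
    env.load("tutor_rules.clp")      # hypothetical rule file for a practical exercise
    env.reset()                      # assert the initial facts
    fired = env.run()                # run the inference engine
    print(f"{fired} rules fired")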
From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles
NASA Technical Reports Server (NTRS)
Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric
1994-01-01
In the fall of 1993, NASA Ames deployed a modified Phantom S2 remotely operated underwater vehicle (ROV) into an ice-covered sea environment near McMurdo Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant to space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual reality technology in the operator interface to a remote vehicle while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted, head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 Kbps satellite channel. Another channel was used to provide a bi-directional Internet link to the vehicle control computer, through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors. The virtual environment contained an animated graphic model of the vehicle which reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle was driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.
Learning and Retention Using Virtual Reality in a Decontamination Simulation.
Smith, Sherrill J; Farra, Sharon; Ulrich, Deborah L; Hodgson, Eric; Nicely, Stephanie; Matcham, William
The purpose of this study was to examine the longitudinal effects of virtual reality simulation (VRS) on learning outcomes and retention. Disaster preparation for health care professionals is seriously inadequate. VRS offers an opportunity to practice within a realistic and safe environment, but little is known about learning and retention using this pedagogy. A quasiexperimental design was used to examine the use of VRS with baccalaureate nursing students in two different nursing programs in terms of the skill of decontamination. Results indicate that VRS is at least as good as traditional methods and is superior in some cases for retention of knowledge and performance of skills. VRS may provide a valuable option for promoting skill development and retention. More research is needed to determine how to prepare nurses for skills that may not be required until months or even years after initial introduction.
NASA Technical Reports Server (NTRS)
Blackmon, Theodore
1998-01-01
Virtual reality (VR) technology has played an integral role in Mars Pathfinder mission operations. Using an automated machine vision algorithm, the 3D topography of the Martian surface was rapidly recovered from the stereo images captured by the lander camera to produce photo-realistic 3D models. An advanced interface was developed for visualization and interaction with the virtual environment of the Pathfinder landing site for mission scientists at the Space Flight Operations Facility of the Jet Propulsion Laboratory. The VR aspect of the display allowed mission scientists to navigate on Mars in 3D while remaining here on Earth, thus improving their spatial awareness of the rock field that surrounds the lander. Measurements of positions, distances and angles could be easily extracted from the topographic models, providing valuable information for science analysis and mission planning. Moreover, the VR map of Mars has also been used to assist with the archiving and planning of activities for the Sojourner rover.
Enhancing a Multi-body Mechanism with Learning-Aided Cues in an Augmented Reality Environment
NASA Astrophysics Data System (ADS)
Singh Sidhu, Manjit
2013-06-01
Augmented Reality (AR) is a promising area of research for education, covering issues such as tracking and calibration and the realistic rendering of virtual objects. The ability to augment the real world with virtual information has opened the possibility of using AR technology in areas such as education and training. In the domain of Computer Aided Learning (CAL), researchers have long looked into enhancing the effectiveness of the teaching and learning process by providing cues that help learners better comprehend the materials presented. Although a number of studies have examined the effectiveness of learning-aided cues, none has addressed this issue for AR-based learning solutions. This paper discusses the design and model of AR-based software that uses visual cues to enhance the learning process, together with the perception results obtained for those cues.
Virtual Physical Therapy Clinician: Development, Validation and Testing
ERIC Educational Resources Information Center
Huhn, Karen
2011-01-01
Introduction: Clinical reasoning skills develop through repeated practice in realistic patient scenarios. Time constraints, declining availability of clinical education sites and patient safety are some of the factors that limit physical therapy educators' ability to expose students to realistic patient scenarios. Computerized simulations may be…
GPU-based efficient realistic techniques for bleeding and smoke generation in surgical simulators.
Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu
2010-12-01
In actual surgery, smoke and bleeding due to cauterization processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated the effects of bleeding and smoke generation, they are not realistic due to the requirement of real-time performance. To be interactive, visual update must be performed at at least 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or simulated using highly simplified techniques, since other computationally intensive processes compete for the available Central Processing Unit (CPU) resources. In this study we developed a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators, which outsources the computations to the graphical processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed using 20 subjects to determine the visual quality of the simulations compared to real surgical videos. The smoke and bleeding simulation were implemented as part of a laparoscopic adjustable gastric banding (LAGB) simulator. For the bleeding simulation, the original implementation using the shader did not incur noticeable overhead. However, for smoke generation, an input/output (I/O) bottleneck was observed and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations were as good as in real surgery (median rating of 4 on a 5-point Likert scale). Based on the performance results and subject study, both bleeding and smoke simulations were concluded to be efficient, highly realistic and well suited to VR-based surgical simulators. Copyright © 2010 John Wiley & Sons, Ltd.
GPU-based Efficient Realistic Techniques for Bleeding and Smoke Generation in Surgical Simulators
Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu
2010-01-01
Background In actual surgery, smoke and bleeding due to cautery processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated effects of bleeding and smoke generation, they are not realistic due to the requirement of real-time performance. To be interactive, the visual update must be performed at at least 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or simulated using highly simplified techniques, since other computationally intensive processes compete for the available CPU resources. Methods In this work, we develop a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators which outsources the computations to the graphical processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed using 20 subjects to determine the visual quality of the simulations compared to real surgical videos. Results The smoke and bleeding simulation were implemented as part of a Laparoscopic Adjustable Gastric Banding (LAGB) simulator. For the bleeding simulation, the original implementation using the shader did not incur noticeable overhead. However, for smoke generation, an I/O (input/output) bottleneck was observed and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale). Conclusions Based on the performance results and the subject study, both the bleeding and smoke simulations were concluded to be efficient, highly realistic and well suited to VR-based surgical simulators. PMID:20878651
Beavis, A; Saunderson, J; Ward, J
2012-06-01
Recently there has been great interest in the use of simulation training, with a view to enhancing safety within radiotherapy practice. We have developed a Virtual Environment for Radiotherapy Training (VERT) which facilitates this, including the simulation of a number of 'physics practices'. One such process is the calibration of an ionisation chamber for use in Linac photon beams. The VERT system was used to provide a life-sized 3D virtual environment within which we were able to simulate the calibration of a departmental chamber for 6 MV and 15 MV beams following the UK 1990 Code of Practice. The characteristics of the beams are fixed parameters in the simulation, whereas the default (absorbed dose to water) correction factors of the chambers are configurable, thereby dictating their response in the virtual x-ray beam. When the simulation is started, a random, realistic temperature and pressure is assigned to the bunker. Measurement and chamber positional errors are assigned to the chambers. A virtual water phantom was placed on the Linac couch and irradiated through the side using a 10 × 10 field. With a chamber at the appropriate depths and irradiated isocentrically, the Quality Indices (QI) of the beams were obtained. The two chambers were 'inter-compared', allowing the departmental chamber calibration factor to be calculated from that of the reference chamber. For the virtual 6/15 MV beams, the QI were found to be 0.668/0.761 and the inter-comparison ratios 0.4408/0.4402, respectively. The departmental chamber calibration factors were calculated; applying these and the appropriate environmental corrections allowed the output of the Linac to be confirmed. We have shown how a virtual training environment can be used to demonstrate practical processes and reinforce learning. The UK CoP was used here; however, any relevant protocol could be demonstrated. Two of the authors (Beavis and Ward) are founders of Vertual Ltd, a spin-out company created to commercialise the research presented in this abstract. © 2012 American Association of Physicists in Medicine.
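The workflow above combines a temperature-pressure correction with a cross-calibration ratio between the reference and departmental chambers. A hedged numerical sketch of that arithmetic is given below; the air-density correction is the standard form for a vented chamber, but the variable names, reference conditions (20 °C, 1013.25 mbar) and all numbers are stated assumptions to be checked against the actual Code of Practice rather than values from the abstract.

    def k_tp(temp_c, pressure_mbar, temp_ref_c=20.0, pressure_ref_mbar=1013.25):
        """Standard air-density correction for a vented ionisation chamber."""
        return ((273.15 + temp_c) / (273.15 + temp_ref_c)) * (pressure_ref_mbar / pressure_mbar)

    def cross_calibrate(n_ref, m_ref, m_dept):
        """Transfer the reference chamber factor to the departmental chamber.

        n_ref  : absorbed-dose-to-water calibration factor of the reference chamber
        m_ref  : corrected reading of the reference chamber
        m_dept : corrected reading of the departmental chamber (same beam, same point)
        """
        return n_ref * m_ref / m_dept

    # usage with purely illustrative readings
    m_dept = 12.35 * k_tp(22.1, 1005.0)
    m_ref = 12.28 * k_tp(22.1, 1005.0)
    n_dept = cross_calibrate(0.95, m_ref, m_dept)

The same dose at the measurement point is assumed for both chambers, so N_dept x M_dept = N_ref x M_ref, which is what the inter-comparison ratio in the abstract encodes.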
Using PVDF to locate the debris cloud impact position
NASA Astrophysics Data System (ADS)
Pang, Baojun; Liu, Zhidong
2010-03-01
With the increase in space activities, the space debris environment has deteriorated. When space debris impacts a spacecraft's shield, a debris cloud is created that threatens the module wall. To assess the damage to a spacecraft module wall caused by a debris cloud impact, the impact position must be known. To design a lightweight location system, polyvinylidene fluoride (PVDF) sensors were studied. Hyper-velocity impact experiments were conducted using a two-stage light gas gun. The experimental results indicate that the virtual wave-front location method can be extended to debris cloud impact location, that PVDF can be used to locate the damage position effectively, and that the signals gathered by PVDF from debris cloud impacts contain more high-frequency components than signals created by single-projectile impact events. The results provide a reference for the development of sensor systems to detect impacts on spacecraft.
NASA Technical Reports Server (NTRS)
Hill, Randall W., Jr.
1990-01-01
The issues of knowledge representation and control in hypermedia-based training environments are discussed. The main objective is to integrate the flexible presentation capability of hypermedia with a knowledge-based approach to lesson discourse management. The instructional goals and their associated concepts are represented in a knowledge representation structure called a 'concept network'. Its functional usages are many: it is used to control the navigation through a presentation space, generate tests for student evaluation, and model the student. This architecture was implemented in HyperCLIPS, a hybrid system that creates a bridge between HyperCard, a popular hypertext-like system used for building user interfaces to data bases and other applications, and CLIPS, a highly portable government-owned expert system shell.
NASA Astrophysics Data System (ADS)
Rohde, Mitchell M.; Crawford, Justin; Toschlog, Matthew; Iagnemma, Karl D.; Kewlani, Guarav; Cummins, Christopher L.; Jones, Randolph A.; Horner, David A.
2009-05-01
It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles (UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by US Army Engineer Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing (HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE HPC research is a real-time desktop simulation application under development by the authors that provides a portal into the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations. ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf (COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several initial applications of the system.
Feasibility of training athletes for high-pressure situations using virtual reality.
Stinson, Cheryl; Bowman, Doug A
2014-04-01
Virtual reality (VR) has been successfully applied to a broad range of training domains; however, to date there is little research investigating its benefits for sport psychology training. We hypothesized that using high-fidelity VR systems to display realistic 3D sport environments could trigger anxiety, allowing resilience-training systems to prepare athletes for real-world, high-pressure situations. In this work we investigated the feasibility and usefulness of using VR for sport psychology training. We developed a virtual soccer goalkeeping application for the Virginia Tech Visionarium VisCube (a CAVE-like display system), in which users defend against simulated penalty kicks using their own bodies. Using the application, we ran a controlled, within-subjects experiment with three independent variables: known anxiety triggers, field of regard, and simulation fidelity. The results demonstrate that a VR sport-oriented system can induce increased anxiety (physiological and subjective measures) compared to a baseline condition. There were a number of main effects and interaction effects for all three independent variables in terms of the subjective measures of anxiety. Both known anxiety triggers and simulation fidelity had a direct relationship to anxiety, while field of regard had an inverse relationship. Overall, the results demonstrate great potential for VR sport psychology training systems; however, further research is needed to determine if training in a VR environment can lead to long-term reduction in sport-induced anxiety.
VRLane: a desktop virtual safety management program for underground coal mine
NASA Astrophysics Data System (ADS)
Li, Mei; Chen, Jingzhu; Xiong, Wei; Zhang, Pengpeng; Wu, Daozheng
2008-10-01
VR technologies, which generate immersive, interactive, three-dimensional (3D) environments, are seldom applied to coal mine safety management. In this paper, a new method that combines VR technologies with an underground mine safety management system is explored. A desktop virtual safety management program for underground coal mines, called VRLane, was developed. The paper mainly addresses current research advances in VR, the system design, key techniques and system applications. Two important techniques are introduced in the paper. Firstly, an algorithm was designed and implemented with which 3D laneway and equipment models can be built automatically from the latest 2D mine drawings, whereas common VR programs establish the 3D environment using 3DS Max or other 3D modeling packages, in which laneway models are built manually and laboriously. Secondly, VRLane achieved system integration with underground industrial automation. VRLane not only depicts a realistic 3D laneway environment but also describes the status of coal mining, with functions for displaying the running states and related parameters of equipment, giving early warning of abnormal mining events, and animating mine cars, mine workers, and long-wall shearers. The system, which is cheap, dynamic, and easy to maintain, provides a useful tool for production safety management in coal mines.
Evaluation of the Effectiveness of LTR Training versus Simulation Training and Stress Inoculation
2016-10-01
training must be completed during a short period of time and in a “hyper-realistic” stressful and shocking environment. An understanding of how the training modality impacts the translation of...understand systems and how they change, with a specific focus on identifying and evaluating the impact of feedback loops, accumulations (stocks and flows
Estimating Distance in Real and Virtual Environments: Does Order Make a Difference?
Ziemer, Christine J.; Plumert, Jodie M.; Cremer, James F.; Kearney, Joseph K.
2010-01-01
This investigation examined how the order in which people experience real and virtual environments influences their distance estimates. Participants made two sets of distance estimates in one of the following conditions: 1) real environment first, virtual environment second; 2) virtual environment first, real environment second; 3) real environment first, real environment second; or 4) virtual environment first, virtual environment second. In Experiment 1, participants imagined how long it would take to walk to targets in real and virtual environments. Participants’ first estimates were significantly more accurate in the real than in the virtual environment. When the second environment was the same as the first environment (real-real and virtual-virtual), participants’ second estimates were also more accurate in the real than in the virtual environment. When the second environment differed from the first environment (real-virtual and virtual-real), however, participants’ second estimates did not differ significantly across the two environments. A second experiment in which participants walked blindfolded to targets in the real environment and imagined how long it would take to walk to targets in the virtual environment replicated these results. These subtle, yet persistent order effects suggest that memory can play an important role in distance perception. PMID:19525540
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, it tackles available instruments and technologies to propose an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and Image-based Modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in virtual museums. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions of simulation and virtual reconstruction. VR tools allow the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. Suitably processed and integrated, the data provide different levels of analysis and virtual reconstruction, combining photogrammetric accuracy with the photorealistic rendering of the modelled surfaces. Lastly, a new virtual navigation solution is tested: within the same environment, it offers the chance to interact with the high-resolution oriented spherical panoramas and the 3D reconstructed model at once.
Graphic and haptic simulation system for virtual laparoscopic rectum surgery.
Pan, Jun J; Chang, Jian; Yang, Xiaosong; Zhang, Jian J; Qureshi, Tahseen; Howell, Robert; Hickish, Tamas
2011-09-01
Medical simulators with vision and haptic feedback techniques offer a cost-effective and efficient alternative to traditional medical training. They have been used to train doctors in many specialties of medicine, allowing tasks to be practised in a safe and repetitive manner. This paper describes a virtual reality (VR) system intended to improve surgeons' learning curves in the technically challenging field of laparoscopic surgery of the rectum. Data from MRI of the rectum and real operation videos are used to construct the virtual models. A haptic force filter based on radial basis functions is designed to offer realistic and smooth force feedback. To handle collision detection efficiently, a hybrid model is presented to compute the deformation of the intestines. Finally, a real-time mesh-based cutting technique is employed to represent the incision operation. Despite numerous research efforts, fast and realistic simulation of soft tissues with large deformations, such as the intestines, remains extremely challenging. This paper introduces our latest contribution to this endeavour. With this system, the user can haptically operate on the virtual rectum and simultaneously watch the soft tissue deformation. Our system has been tested by colorectal surgeons, who believe that the simulated tactile and visual feedback is realistic. It could replace the traditional training process and effectively transfer surgical skills to novices. Copyright © 2011 John Wiley & Sons, Ltd.
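The haptic force filter is described only as being based on radial basis functions; one plausible reading is RBF smoothing of the sampled force signal before it is sent to the haptic device. The sketch below uses SciPy's Rbf interpolator for that purpose; the kernel, smoothing parameter and sample data are assumptions, not the authors' formulation.

    import numpy as np
    from scipy.interpolate import Rbf

    # noisy force samples along a tool trajectory (illustrative data)
    t = np.linspace(0.0, 1.0, 40)
    force = 2.0 * np.sin(6.0 * t) + 0.3 * np.random.randn(t.size)

    # RBF fit with a small smoothing term yields a continuous, smooth force signal
    rbf = Rbf(t, force, function="gaussian", smooth=0.5)

    t_query = np.linspace(0.0, 1.0, 200)          # query at the haptic update rate
    smooth_force = rbf(t_query)

Because the fitted function is smooth and can be evaluated at arbitrary times, the 1 kHz haptic loop can query it between the slower physics updates.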
Multimodal person authentication on a smartphone under realistic conditions
NASA Astrophysics Data System (ADS)
Morris, Andrew C.; Jassim, Sabah; Sellahewa, Harin; Allano, Lorene; Ehlers, Johan; Wu, Dalei; Koreman, Jacques; Garcia-Salicetti, Sonia; Ly-Van, Bao; Dorizzi, Bernadette
2006-05-01
Verification of a person's identity by the combination of more than one biometric trait strongly increases the robustness of person authentication in real applications. This is particularly the case in applications involving signals of degraded quality, as for person authentication on mobile platforms. The context of mobility generates degradations of input signals due to the variety of environments encountered (ambient noise, lighting variations, etc.), while the sensors' lower quality further contributes to a decrease in system performance. Our aim in this work is to combine traits from the three biometric modalities of speech, face and handwritten signature in a concrete application, performing non-intrusive biometric verification on a personal mobile device (smartphone/PDA). Most available biometric databases have been acquired in more or less controlled environments, which makes it difficult to predict performance in a real application. Our experiments are performed on a database acquired on a PDA as part of the SecurePhone project (IST-2002-506883 project "Secure Contracts Signed by Mobile Phone"). This database contains 60 virtual subjects balanced in gender and age. Virtual subjects are obtained by coupling audio-visual signals from real English-speaking subjects with signatures from other subjects captured on the touch screen of the PDA. Video data for the PDA database were recorded in 2 recording sessions separated by at least one week. Each session comprises 4 acquisition conditions: 2 indoor and 2 outdoor recordings (with, in each case, a good and a degraded-quality recording). Handwritten signatures were captured in one session in realistic conditions. Different scenarios of matching between training and test conditions are tested to measure the resistance of various fusion systems to different types of variability and different amounts of enrolment data.
Exposing and Deposing Hyper-Economized School Science
ERIC Educational Resources Information Center
Bencze, John Lawrence
2010-01-01
Despite indications of the problematic nature of "laissez faire" capitalism, such as the convictions of corporate leaders and the global financial crisis that appeared to largely stem from a de-regulated financial services industry, it seems clear that societies and environments continue to be strongly influenced by hyper-economized…
Altena, Ellemarije; Daviaux, Yannick; Sanz-Arigita, Ernesto; Bonhomme, Emilien; de Sevin, Étienne; Micoulaud-Franchi, Jean-Arthur; Bioulac, Stéphanie; Philip, Pierre
2018-04-17
Virtual reality and simulation tools enable us to assess daytime functioning in environments that simulate real life as closely as possible. Simulator sickness, however, poses a problem in the application of these tools and has been related to pre-existing health problems. How sleep problems contribute to simulator sickness has not yet been investigated. In the current study, 20 female chronic insomnia patients and 32 female age-matched controls drove in a driving simulator covering realistic city, country and highway scenes. Fifty percent of the insomnia patients, as opposed to 12.5% of controls, reported excessive simulator sickness leading to withdrawal from the experiment. In the remaining participants, patients with insomnia showed overall increased levels of oculomotor symptoms even before driving, while nausea symptoms further increased after driving. These results, as well as the realistic simulation paradigm developed, give more insight into how vestibular, oculomotor and interoceptive functions are affected in insomnia. Importantly, our results have direct implications for both the actual driving experience and the wider context of deploying simulation techniques to mimic real-life functioning, in particular in those professions often exposed to sleep problems. © 2018 European Sleep Research Society.
NASA Technical Reports Server (NTRS)
Rausch, Vincent L.; McClinton, Charles R.; Sitz, Joel; Reukauf, Paul
2000-01-01
This paper provides an overview of the objectives and status of the Hyper-X program which is tailored to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment, the last stage preceding prototype development. The first Hyper-X research vehicle (HXRV), designated X-43, is being prepared at the Dryden Flight Research Center for flight at Mach 7 in the near future. In addition, the associated booster and vehicle-to-booster adapter are being prepared for flight and flight test preparations are well underway. Extensive risk reduction activities for the first flight and non-recurring design for the Mach 10 X-43 (3rd flight) are nearing completion. The Mach 7 flight of the X-43 will be the first flight of an airframe-integrated scramjet-powered vehicle.
Hyper-X Stage Separation: Background and Status
NASA Technical Reports Server (NTRS)
Reubush, David E.
1999-01-01
This paper provides an overview of stage separation activities for NASA's Hyper-X program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current stage separation concept, highlights of wind tunnel experiments and computational fluid dynamics investigations being conducted to define the separation event, results from ground tests of separation hardware, schedule and status. Substantial work has been completed toward reducing the risk associated with stage separation.
NASA Astrophysics Data System (ADS)
Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.
2007-04-01
US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. One approach for pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted heads-up displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research and Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transfer model. Imagery for a variety of natural and man-made obscurations (e.g. rain, clouds, snow, dust, smoke, chemical releases) is being calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene - Weather - Atmosphere - Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic target-background scenes and display the results in a DirectX environment. This paper describes our approach and shows a brief demonstration of the software capabilities. The work is supported by the SBIR program under contract N61339-06-C-0113.
Scott, Clinton T.; Slack, John F.; Kelley, Karen Duttweiler
2017-01-01
Black shales of the Late Devonian to Early Mississippian Bakken Formation are characterized by high concentrations of organic carbon and the hyper-enrichment (> 500 to 1000s of mg/kg) of V and Zn. Deposition of black shales resulted from shallow seafloor depths that promoted rapid development of euxinic conditions. Vanadium hyper-enrichments, which are unknown in modern environments, are likely the result of very high levels of dissolved H2S (~ 10 mM) in bottom waters or sediments. Because modern hyper-enrichments of Zn are documented only in Framvaren Fjord (Norway), it is likely that the biogeochemical trigger responsible for Zn hyper-enrichment in Framvaren Fjord was also present in the Bakken basin. With Framvaren Fjord as an analogue, we propose a causal link between the activity of phototrophic sulfide oxidizing bacteria, related to the development of photic-zone euxinia, and the hyper-enrichment of Zn in black shales of the Bakken Formation.
ERIC Educational Resources Information Center
Dib, Hazar; Adamo-Villani, Nicoletta; Garver, Stephen
2014-01-01
Many benefits have been claimed for visualizations, a general assumption being that learning is facilitated. However, several researchers argue that little is known about the cognitive value of graphical representations, be they schematic visualizations, such as diagrams or more realistic, such as virtual reality. The study reported in the paper…
Modeling human perception of orientation in altered gravity
Clark, Torin K.; Newman, Michael C.; Oman, Charles M.; Merfeld, Daniel M.; Young, Laurence R.
2015-01-01
Altered gravity environments, such as those experienced by astronauts, impact spatial orientation perception, and can lead to spatial disorientation and sensorimotor impairment. To more fully understand and quantify the impact of altered gravity on orientation perception, several mathematical models have been proposed. The utricular shear, tangent, and the idiotropic vector models aim to predict static perception of tilt in hyper-gravity. Predictions from these prior models are compared to the available data, but are found to systematically err from the perceptions experimentally observed. Alternatively, we propose a modified utricular shear model for static tilt perception in hyper-gravity. Previous dynamic models of vestibular function and orientation perception are limited to 1 G. Specifically, they fail to predict the characteristic overestimation of roll tilt observed in hyper-gravity environments. To address this, we have proposed a modification to a previous observer-type canal-otolith interaction model based upon the hypothesis that the central nervous system (CNS) treats otolith stimulation in the utricular plane differently than stimulation out of the utricular plane. Here we evaluate our modified utricular shear and modified observer models in four altered gravity motion paradigms: (a) static roll tilt in hyper-gravity, (b) static pitch tilt in hyper-gravity, (c) static roll tilt in hypo-gravity, and (d) static pitch tilt in hypo-gravity. The modified models match available data in each of the conditions considered. Our static modified utricular shear model and dynamic modified observer model may be used to help quantitatively predict astronaut perception of orientation in altered gravity environments. PMID:25999822
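As a rough numerical illustration of the kind of static prediction these models make (not the authors' modified model), the sketch below implements the textbook utricular-shear readout: perceived tilt is recovered from the shear component of the gravito-inertial force along the utricular plane as if the observer were in 1 g, which yields the characteristic overestimation of roll tilt in hyper-gravity. The formulation and saturation handling are assumptions for illustration.

    import numpy as np

    def shear_model_perceived_tilt(true_tilt_deg, g_level):
        """Classic utricular-shear readout (illustrative, not the modified model).

        Shear along the utricular plane is g_level * sin(tilt); interpreting it as
        a 1 g shear gives tilt_hat = asin(shear), saturating at 90 degrees.
        """
        shear = g_level * np.sin(np.radians(true_tilt_deg))
        return np.degrees(np.arcsin(np.clip(shear, -1.0, 1.0)))

    for g in (1.0, 1.5, 2.0):                      # 1 g and two hyper-g levels
        print(g, np.round(shear_model_perceived_tilt(np.array([10, 20, 30]), g), 1))

At 2 g this simple readout already roughly doubles the perceived tilt for small angles, the qualitative effect the modified models aim to capture more accurately.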
WE-G-BRA-04: The Development of a Virtual Reality Dosimetry Training Platform for Physics Training.
Beavis, A; Ward, J
2012-06-01
Recently there has been a great deal of interest in the application of simulation methodologies for training. We have previously developed a Virtual Environment for Radiotherapy Training, VERT, which simulates a fully interactive and functional Linac. Patient and plan data can be accessed across a DICOM interface, allowing the treatment process to be simulated. Here we present a newly developed range of physics equipment which allows the user to undertake realistic QC processes. Five devices are available: 1) a scanning water phantom, 2) a 'solid water' QC block/ion chamber, 3) a light/radiation field coincidence phantom, 4) a laser alignment phantom and 5) a water-based calibration phantom with a reference-class and a 'departmental' ion chamber. The devices were created to operate realistically and function as expected; each has an associated control screen which provides control and feedback information. The dosimetric devices respond appropriately to the beam qualities available on the Linac. Geometrical characteristics of the Linac, e.g. isocentre integrity, laser calibration and jaw calibrations, can have random errors introduced in order to enable the user to learn and observe fault conditions. In the calibration module, appropriate factors for temperature and pressure must be set to correct for the simulated ambient room conditions. The dosimetric devices can be used to characterise the Linac beams. Depth doses with Dmax of 15 mm/29 mm and d10 of 67%/77%, respectively, for 10 cm square 6/15 MV beams were measured. The Quality Indices (TPR20/10 ratios) can be measured as 0.668 and 0.761 respectively. At a simple level the tools can be used to demonstrate beam divergence or the effect of the inverse square law; they are also designed to be used to simulate the calibration of a new ion chamber. We have developed a novel set of tools that allow physics processes to be taught via simulation training in our virtual environment. Both authors are founders and directors of Vertual Ltd, a spin-out company that exists to commercialise the results of the research work presented in this abstract. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Chembuly, V. V. M. J. Satish; Voruganti, Hari Kumar
2018-04-01
Hyper-redundant manipulators have a larger number of degrees of freedom (DOF) than required to perform a given task. The additional DOF provide the flexibility to work in highly cluttered environments and in constrained workspaces. Inverse kinematics (IK) of hyper-redundant manipulators is complicated due to the large number of DOF, and these manipulators have multiple IK solutions. The redundancy gives a choice of selecting the best solution out of multiple solutions based on certain criteria such as obstacle avoidance, singularity avoidance, joint limit avoidance and joint torque minimization. This paper focuses on the IK solution and redundancy resolution of hyper-redundant manipulators using a classical optimization approach. Joint positions are computed by optimizing various criteria for a serial hyper-redundant manipulator while traversing different paths in the workspace. Several cases are addressed using this scheme to obtain the inverse kinematic solution while optimizing criteria such as obstacle avoidance and joint limit avoidance.
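A minimal sketch of posing such an IK problem as a classical optimization is shown below for an assumed planar 10-link arm: the end-effector error is minimized together with penalties for joint-limit violation and obstacle proximity. The link length, target, obstacle, and penalty weights are illustrative assumptions and do not reproduce the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

L = 0.3                                   # link length (assumed)
target = np.array([1.8, 1.2])             # desired end-effector position (assumed)
obstacle, r_obs = np.array([1.0, 0.6]), 0.25

def joint_positions(q):
    # Forward kinematics of a planar serial chain: cumulative joint angles,
    # then cumulative link displacements from the base at the origin.
    angles = np.cumsum(q)
    steps = L * np.column_stack((np.cos(angles), np.sin(angles)))
    return np.vstack(([0.0, 0.0], np.cumsum(steps, axis=0)))

def cost(q):
    pts = joint_positions(q)
    task_err = np.sum((pts[-1] - target) ** 2)                       # reach the target
    joint_limit = np.sum(np.maximum(np.abs(q) - np.pi / 3, 0.0) ** 2)  # stay within limits
    clearance = np.linalg.norm(pts - obstacle, axis=1) - r_obs
    obstacle_pen = np.sum(np.maximum(0.05 - clearance, 0.0) ** 2)    # keep clearance
    return task_err + 10.0 * joint_limit + 100.0 * obstacle_pen

sol = minimize(cost, x0=np.full(10, 0.1), method="BFGS")
print(sol.fun, joint_positions(sol.x)[-1])
```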
A Gaia DR2 Mock Stellar Catalog
NASA Astrophysics Data System (ADS)
Rybizki, Jan; Demleitner, Markus; Fouesneau, Morgan; Bailer-Jones, Coryn; Rix, Hans-Walter; Andrae, René
2018-07-01
We present a mock catalog of Milky Way stars, matching in volume and depth the content of the Gaia data release 2 (GDR2). We generated our catalog using Galaxia, a tool to sample stars from a Besançon Galactic model, together with a realistic 3D dust extinction map. The catalog mimics the complete GDR2 data model and contains most of the entries in the Gaia source catalog: five-parameter astrometry, three-band photometry, radial velocities, stellar parameters, and associated scaled nominal uncertainty estimates. In addition, we supplemented the catalog with extinctions and photometry for non-Gaia bands. This catalog can be used to prepare GDR2 queries in a realistic runtime environment, and it can serve as a Galactic model against which to compare the actual GDR2 data in the space of observables. The catalog is hosted through the virtual observatory GAVO’s Heidelberg data center (http://dc.g-vo.org/tableinfo/gdr2mock.main) service, and thus can be queried using ADQL as for GDR2 data.
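A query of the hosted table might look like the following pyvo sketch. The TAP endpoint URL and the column names (taken from the GDR2 data model that the catalog mimics) are assumptions to be checked against the table page linked above.

```python
import pyvo

# Assumed GAVO Heidelberg data center TAP endpoint; verify on the gdr2mock table page.
tap = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

query = """
SELECT TOP 100 source_id, ra, dec, parallax, phot_g_mean_mag
FROM gdr2mock.main
WHERE phot_g_mean_mag < 15
"""

result = tap.search(query)       # synchronous ADQL query
print(result.to_table()[:5])     # first few rows as an astropy Table
```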
Fieldwork Skills in Virtual Worlds
NASA Astrophysics Data System (ADS)
Craven, Benjamin; Lloyd, Geoffrey; Gordon, Clare; Houghton, Jacqueline; Morgan, Daniel
2017-04-01
Virtual reality has an increasingly significant role to play in teaching and research, but for geological applications realistic landscapes are required that contain sufficient detail to prove viable for investigation by both inquisitive students and critical researchers. To create such virtual landscapes, we combine DTM data with digitally modelled outcrops in the game engine Unity. Our current landscapes are fictional worlds, invented to focus on generation techniques and the strategic and spatial immersion within a digital environment. These have proved very successful in undergraduate teaching; however, we are now moving onto recreating real landscapes for more advanced teaching and research. The first of these is focussed on Rhoscolyn, situated within the Ynys Mon Geopark on Anglesey, UK. It is a popular area for both teaching and research in structural geology so has a wide usage demographic. The base of the model is created from DTM data, both 1 m LiDAR and 5 m GPS point data, and manipulated with QGIS before import to Unity. Substance is added to the world via models of architectural elements (e.g. walls and buildings) and appropriate flora and fauna, including sounds. Texturing of these models is performed using 25 cm aerial imagery and field photographs. Whilst such elements enhance immersion, it is the use of digital outcrop models that fully completes the experience. From fieldwork, we have a library of photogrammetric outcrops that can be modelled into 3D features using free (VisualSFM and MeshLab) and non-free (AgiSoft Photoscan) tools. These models are then refined and converted in Maya to create models for better insertion into the Unity environment. The finished product is a virtual landscape; a Rhoscolyn `world' that is sufficiently detailed to provide a base not only for geological teaching and training but also for geological research. Additionally, the `Rhoscolyn World' represents a significant tool for those students who are unable to attend conventional field classes and really enhances their learning experience. This project is part of the larger Virtual Landscapes project, which is a collaboration between The University of Leeds and Leeds College of Art, UK. All our current virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.
NASA Technical Reports Server (NTRS)
Hill, R. W.
1994-01-01
The integration of CLIPS into HyperCard combines the intuitive, interactive user interface of the Macintosh with the powerful symbolic computation of an expert system interpreter. HyperCard is an excellent environment for quickly developing the front end of an application with buttons, dialogs, and pictures, while the CLIPS interpreter provides a powerful inference engine for complex problem solving and analysis. In order to understand the benefit of integrating HyperCard and CLIPS, consider the following: HyperCard is an information storage and retrieval system which exploits the use of the graphics and user interface capabilities of the Apple Macintosh computer. The user can easily define buttons, dialog boxes, information templates, pictures, and graphic displays through the use of the HyperCard tools and scripting language. What is generally lacking in this environment is a powerful reasoning engine for complex problem solving, and this is where CLIPS plays a role. CLIPS 5.0 (C Language Integrated Production System, v5.0) was developed at the Johnson Space Center Software Technology Branch to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 5.0 supports forward chaining rule systems, object-oriented language, and procedural programming for the construction of expert systems. It features incremental reset, seven conflict resolution strategies, truth maintenance, and user-defined external functions. Since CLIPS is implemented in the C language, it is highly portable; in addition, it is embeddable as a callable routine from a program written in another language such as Ada or Fortran. By integrating HyperCard and CLIPS, the advantages and uses of both packages are made available for a wide range of applications: rapid prototyping of knowledge-based expert systems, interactive simulations of physical systems, and intelligent control of hypertext processes, to name a few. HyperCLIPS 2.0 is written in C-Language (54%) and Pascal (46%) for Apple Macintosh computers running Macintosh System 6.0.2 or greater. HyperCLIPS requires HyperCard 1.2 or higher, and at least 2 MB of RAM is recommended. An executable is provided. To compile the source code, the Macintosh Programmer's Workshop (MPW) version 3.0, CLIPS 5.0 (MSC-21927), and the MPW C-Language compiler are also required. NOTE: Installing this program under Macintosh System 7 requires HyperCard v2.1. This program is distributed on a 3.5 inch Macintosh format diskette. A copy of the program documentation is included on the diskette, but may be purchased separately. HyperCLIPS was developed in 1990 and version 2.0 was released in 1991. HyperCLIPS is a copyrighted work with all copyright vested in NASA. Apple, Macintosh, MPW, and HyperCard are registered trademarks of Apple Computer, Inc.
Evaluation of Pseudo-Haptic Interactions with Soft Objects in Virtual Environments.
Li, Min; Sareh, Sina; Xu, Guanghua; Ridzuan, Maisarah Binti; Luo, Shan; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar
2016-01-01
This paper proposes a pseudo-haptic feedback method conveying simulated soft surface stiffness information through a visual interface. The method exploits a combination of two feedback techniques, namely visual feedback of soft surface deformation and control of the indenter avatar speed, to convey stiffness information of a simulated surface of a soft object in virtual environments. The proposed method was effective in distinguishing different sizes of virtual hard nodules integrated into the simulated soft bodies. To further improve the interactive experience, the approach was extended to create a multi-point pseudo-haptic feedback system. A comparison was conducted against a tablet computer incorporating vibration feedback, using (a) nodule detection sensitivity and (b) elapsed time as performance indicators in hard nodule detection experiments. The multi-point pseudo-haptic interaction is shown to be more time-efficient than the single-point pseudo-haptic interaction. It is noted that multi-point pseudo-haptic feedback performs similarly well compared to a vibration-based feedback method on both performance measures, elapsed time and nodule detection sensitivity. This shows that the proposed method can be used to convey detailed haptic information for virtual environmental tasks, even subtle ones, using either a computer mouse or a pressure-sensitive device as an input device. This pseudo-haptic feedback method provides an opportunity for low-cost simulation of objects with soft surfaces and hard inclusions, as occur, for example, in ever more realistic video games with increasing emphasis on interaction with the physical environment, and in minimally invasive surgery in the form of soft tissue organs with embedded cancer nodules. Hence, the method can be used in many low-budget applications where haptic sensation is required, such as surgeon training or video games, either using desktop computers or portable devices, showing reasonably high fidelity in conveying stiffness perception to the user.
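The core pseudo-haptic idea, slowing the on-screen indenter avatar in proportion to simulated stiffness so that stiffer surfaces "feel" harder to indent, can be sketched as below. The scaling law, gain, and stiffness values are illustrative assumptions rather than the authors' implementation.

```python
# Pseudo-haptic control/display scaling: the avatar's displacement is reduced
# over stiffer regions, mimicking resistance without a haptic device.
def avatar_displacement(input_displacement_mm, stiffness_n_per_mm, gain=0.5):
    cd_ratio = 1.0 + gain * stiffness_n_per_mm   # control/display ratio grows with stiffness
    return input_displacement_mm / cd_ratio

for k in (0.2, 1.0, 5.0):                        # e.g. soft tissue vs. hard nodule (assumed values)
    print(k, avatar_displacement(10.0, k))
```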
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
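The error-mask step can be sketched roughly as follows with OpenCV: the real-camera frame and the virtual-camera rendering are smoothed with a Gaussian and their difference is thresholded to flag regions the model does not yet explain. The file names, kernel size, and threshold are assumptions chosen for illustration, not the system's actual parameters.

```python
import cv2
import numpy as np

# Placeholder inputs: one frame from the real camera and the matching render
# from the virtual camera in the 3D virtual world.
real = cv2.imread("real_frame.png", cv2.IMREAD_GRAYSCALE)
virtual = cv2.imread("virtual_frame.png", cv2.IMREAD_GRAYSCALE)

# Local Gaussian smoothing before comparison, then an absolute-difference mask.
real_s = cv2.GaussianBlur(real.astype(np.float32), (15, 15), 0)
virt_s = cv2.GaussianBlur(virtual.astype(np.float32), (15, 15), 0)

error = cv2.absdiff(real_s, virt_s)
_, mask = cv2.threshold(error, 25, 255, cv2.THRESH_BINARY)

# Pixels set in `mask` mark candidate locations for the next fixation.
ys, xs = np.nonzero(mask)
print(len(xs), "pixels flagged for re-examination")
```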
Problem-Based Learning in Instrumentation: Synergism of Real and Virtual Modular Acquisition Chains
ERIC Educational Resources Information Center
Nonclercq, A.; Biest, A. V.; De Cuyper, K.; Leroy, E.; Martinez, D. L.; Robert, F.
2010-01-01
As part of an instrumentation course, a problem-based learning framework was selected for laboratory instruction. Two acquisition chains were designed to help students carry out realistic instrumentation problems. The first tool is a virtual (simulated) modular acquisition chain that allows rapid overall understanding of the main problems in…
Using Virtual Technology to Enhance Field Experiences for Pre-Service Special Education Teachers
ERIC Educational Resources Information Center
Billingsley, Glenna M.; Scheuermann, Brenda K.
2014-01-01
Teacher educators of pre-service teachers of students with special needs face challenges in providing the unique knowledge and skills required of highly qualified special education teachers. The emerging use of various forms of virtual technology, however, offers realistic solutions to these problems. This systematic review of literature examines…
IMMERSE: Interactive Mentoring for Multimodal Experiences in Realistic Social Encounters
2015-08-28
Report contents include sections on player locomotion, interacting with real and virtual objects, animation combinations and stage management, interaction with virtual characters, and recommendations on the way ahead.
On the usefulness of the concept of presence in virtual reality applications
NASA Astrophysics Data System (ADS)
Mestre, Daniel R.
2015-03-01
Virtual Reality (VR) leads to realistic experimental situations, while enabling researchers to have deterministic control on these situations, and to precisely measure participants' behavior. However, because more realistic and complex situations can be implemented, important questions arise, concerning the validity and representativeness of the observed behavior, with reference to a real situation. One example is the investigation of a critical (virtually dangerous) situation, in which the participant knows that no actual threat is present in the simulated situation, and might thus exhibit a behavioral response that is far from reality. This poses serious problems, for instance in training situations, in terms of transfer of learning to a real situation. Facing this difficult question, it seems necessary to study the relationships between three factors: immersion (physical realism), presence (psychological realism) and behavior. We propose a conceptual framework, in which presence is a necessary condition for the emergence of a behavior that is representative of what is observed in real conditions. Presence itself depends not only on physical immersive characteristics of the Virtual Reality setup, but also on contextual and psychological factors.
An Effective Construction Method of Modular Manipulator 3D Virtual Simulation Platform
NASA Astrophysics Data System (ADS)
Li, Xianhua; Lv, Lei; Sheng, Rui; Sun, Qing; Zhang, Leigang
2018-06-01
This work discusses a fast and efficient method for constructing an open 3D manipulator virtual simulation platform, which makes it easier for teachers and students to learn about the forward and inverse kinematics of a robot manipulator. The method was carried out in MATLAB, in which the Robotics Toolbox, the MATLAB GUI and 3D animation, together with models built in SolidWorks, were applied to produce a good visualization of the system. The advantages of this rapid-construction approach are its powerful input and output functions and its ability to simulate a 3D manipulator realistically. In this article, a Schunk six-DOF modular manipulator constructed by the authors' research group is used as an example. The implementation steps of the method are described in detail, and a high-level, open and realistic manipulator 3D virtual simulation platform was thereby achieved. With the graphs obtained from simulation, the test results show that the manipulator 3D virtual simulation platform can be constructed quickly with good usability and high maneuverability, and that it can meet the needs of scientific research and teaching.
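As a language-neutral illustration of the forward kinematics such a platform visualizes, the short numpy sketch below chains homogeneous transforms for a simplified serial arm in which every joint rotates about the same axis. The link lengths and joint angles are placeholders, not the Schunk arm's actual geometry.

```python
import numpy as np

def link_transform(theta, length):
    # Homogeneous transform of one link: rotation about z, then a translation
    # of `length` along the rotated x axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, length * c],
                     [s,  c, 0, length * s],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def forward_kinematics(joint_angles, link_lengths):
    T = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ link_transform(theta, length)
    return T[:3, 3]                      # end-effector position

# Placeholder 6-DOF configuration with equal 0.3 m links.
print(forward_kinematics([0.1, -0.3, 0.5, 0.0, 0.2, -0.1], [0.3] * 6))
```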
Exploring the simulation requirements for virtual regional anesthesia training
NASA Astrophysics Data System (ADS)
Charissis, V.; Zimmer, C. R.; Sakellariou, S.; Chan, W.
2010-01-01
This paper presents an investigation of the simulation requirements for virtual regional anaesthesia training. To this end we have developed a prototype human-computer interface designed to facilitate Virtual Reality (VR)-augmented educational tactics for regional anaesthesia training. The proposed interface system aims to complement nerve-blocking techniques. The system is designed to operate in a real-time 3D environment, presenting anatomical information and enabling the user to explore the spatial relations of different parts of the human body without any physical constraints. Furthermore, the proposed system aims to assist trainee anaesthetists in building a mental, three-dimensional map of the anatomical elements and their relationship to the ultrasound imaging used to navigate the anaesthetic needle. Opting for a sophisticated approach to interaction, the interface elements are based on simplified visual representations of real objects, and can be operated through haptic devices and surround auditory cues. This paper discusses the challenges involved in the HCI design, introduces the visual components of the interface and presents a tentative plan of future work, which involves the development of realistic haptic feedback and various regional anaesthesia training scenarios.
An innovative virtual reality training tool for orthognathic surgery.
Pulijala, Y; Ma, M; Pears, M; Peebles, D; Ayoub, A
2018-02-01
Virtual reality (VR) surgery using Oculus Rift and Leap Motion devices is a multi-sensory, holistic surgical training experience. A multimedia combination including 360° videos, three-dimensional interaction, and stereoscopic videos in VR has been developed to enable trainees to experience a realistic surgery environment. The innovation allows trainees to interact with the individual components of the maxillofacial anatomy and apply surgical instruments while watching close-up stereoscopic three-dimensional videos of the surgery. In this study, a novel training tool for Le Fort I osteotomy based on immersive virtual reality (iVR) was developed and validated. Seven consultant oral and maxillofacial surgeons evaluated the application for face and content validity. Using a structured assessment process, the surgeons commented on the content of the developed training tool, its realism and usability, and the applicability of VR surgery for orthognathic surgical training. The results confirmed the clinical applicability of VR for delivering training in orthognathic surgery. Modifications were suggested to improve the user experience and interactions with the surgical instruments. This training tool is ready for testing with surgical trainees. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Cobbett, Shelley; Snelgrove-Clarke, Erna
2016-10-01
Clinical simulations can provide students with realistic clinical learning environments to increase their knowledge and self-confidence and decrease their anxiety prior to entering clinical practice settings. To compare the effectiveness of two maternal newborn clinical simulation scenarios: virtual clinical simulation and face-to-face high-fidelity manikin simulation. Randomized pretest-posttest design. A public research university in Canada. Fifty-six third-year Bachelor of Science in Nursing students. Participants were randomized to either face-to-face or virtual clinical simulation and then to dyads for completion of two clinical simulations. Measures included: (1) the Nursing Anxiety and Self-Confidence with Clinical Decision Making Scale (NASC-CDM) (White, 2011), (2) a knowledge pretest and post-test related to preeclampsia and group B strep, and (3) the Simulation Completion Questionnaire. Students completed the knowledge test and the NASC-CDM before and after each simulation, and the Simulation Completion Questionnaire at study completion. There were no statistically significant differences in student knowledge and self-confidence between face-to-face and virtual clinical simulations. Anxiety scores were higher for students in the virtual clinical simulation than for those in the face-to-face simulation. Students' self-reported preference was the face-to-face simulation, citing its similarity to practicing in a 'real' situation and the immediate debrief. Students who did not like the virtual clinical simulation most often cited technological issues as their rationale. Given the equivalence in knowledge and self-confidence identified in this trial when undergraduate nursing students participate in either face-to-face or virtual maternal newborn clinical simulation, it is important to take into consideration the costs and benefits/risks of simulation implementation. Copyright © 2016 Elsevier Ltd. All rights reserved.
High fidelity wireless network evaluation for heterogeneous cognitive radio networks
NASA Astrophysics Data System (ADS)
Ding, Lei; Sagduyu, Yalin; Yackoski, Justin; Azimi-Sadjadi, Babak; Li, Jason; Levy, Renato; Melodia, Tammaso
2012-06-01
We present a high fidelity cognitive radio (CR) network emulation platform for wireless system tests, measurements, and validation. This versatile platform provides the configurable functionalities to control and repeat realistic physical channel effects in integrated space, air, and ground networks. We combine the advantages of scalable simulation environment with reliable hardware performance for high fidelity and repeatable evaluation of heterogeneous CR networks. This approach extends CR design only at device (software-defined-radio) or lower-level protocol (dynamic spectrum access) level to end-to-end cognitive networking, and facilitates low-cost deployment, development, and experimentation of new wireless network protocols and applications on frequency-agile programmable radios. Going beyond the channel emulator paradigm for point-to-point communications, we can support simultaneous transmissions by network-level emulation that allows realistic physical-layer interactions between diverse user classes, including secondary users, primary users, and adversarial jammers in CR networks. In particular, we can replay field tests in a lab environment with real radios perceiving and learning the dynamic environment thereby adapting for end-to-end goals over distributed spectrum coordination channels that replace the common control channel as a single point of failure. CR networks offer several dimensions of tunable actions including channel, power, rate, and route selection. The proposed network evaluation platform is fully programmable and can reliably evaluate the necessary cross-layer design solutions with configurable optimization space by leveraging the hardware experiments to represent the realistic effects of physical channel, topology, mobility, and jamming on spectrum agility, situational awareness, and network resiliency. We also provide the flexibility to scale up the test environment by introducing virtual radios and establishing seamless signal-level interactions with real radios. This holistic wireless evaluation approach supports a large-scale, heterogeneous, and dynamic CR network architecture and allows developing cross-layer network protocols under high fidelity, repeatable, and scalable wireless test scenarios suitable for heterogeneous space, air, and ground networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkerts, MM; University of California San Diego, La Jolla, California; Long, T
Purpose: To provide a tool to generate large sets of realistic virtual patient geometries and beamlet doses for treatment optimization research. This tool enables countless studies exploring the fundamental interplay between patient geometry, objective functions, weight selections, and achievable dose distributions for various algorithms and modalities. Methods: Generating realistic virtual patient geometries requires a small set of real patient data. We developed a normalized patient shape model (PSM) which captures organ and target contours in a correspondence-preserving manner. Using PSM-processed data, we perform principal component analysis (PCA) to extract major modes of variation from the population. These PCA modes can be shared without exposing patient information. The modes are re-combined with different weights to produce sets of realistic virtual patient contours. Because virtual patients lack imaging information, we developed a shape-based dose calculation (SBD) relying on the assumption that the region inside the body contour is water. SBD utilizes a 2D fluence-convolved scatter kernel, derived from Monte Carlo simulations, and can either compute full dose for a given set of fluence maps or produce a dose matrix (dose per fluence pixel) for many modalities. Combining the shape model with SBD provides the data needed for treatment plan optimization research. Results: We used PSM to capture organ and target contours for 96 prostate cases, extracted the first 20 PCA modes, and generated 2048 virtual patient shapes by randomly sampling mode scores. Nearly half of the shapes were thrown out for failing anatomical checks; the remaining 1124 were used in computing dose matrices via SBD and a standard 7-beam protocol. As a proof of concept, and to generate data for later study, we performed fluence map optimization emphasizing PTV coverage. Conclusions: We successfully developed and tested a tool for creating customizable sets of virtual patients suitable for large-scale radiation therapy optimization research.
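The PCA-based shape sampling can be sketched as follows with scikit-learn. The synthetic training matrix, number of components, and sampling scale are illustrative assumptions standing in for the correspondence-matched contour data described above.

```python
import numpy as np
from sklearn.decomposition import PCA

# Each training patient is a flattened vector of correspondence-matched contour
# points; here a random placeholder matrix stands in for real PSM-processed data.
rng = np.random.default_rng(0)
n_patients, n_points = 96, 500
training_shapes = rng.normal(size=(n_patients, n_points * 2))

pca = PCA(n_components=20)
scores = pca.fit_transform(training_shapes)

# Sample new mode scores from the population spread and map back to contour space.
new_scores = rng.normal(scale=scores.std(axis=0), size=(2048, 20))
virtual_shapes = pca.inverse_transform(new_scores)

# Downstream, anatomically implausible shapes would be filtered out before
# computing dose matrices, mirroring the checks described in the abstract.
print(virtual_shapes.shape)
```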
ERIC Educational Resources Information Center
Stuer, Peter; Meersman, Robert; De Bruyne, Steven
Museums have always been, sometimes directly and often indirectly, a key resource of arts and cultural heritage information for the classroom educator. The Web now offers an ideal way of taking this resource beyond the traditional textbook or school visit. While museums around the globe are embracing the web and putting virtual exhibitions,…
Virtual environment tactile system
Renzi, Ronald
1996-01-01
A method for providing a realistic sense of touch in virtual reality by means of programmable actuator assemblies is disclosed. Each tactile actuator assembly consists of a number of individual actuators whose movement is controlled by a computer and associated drive electronics. When an actuator is energized, the rare earth magnet and the associated contactor, incorporated within the actuator, are set in motion by the opposing electromagnetic field of a surrounding coil. The magnet pushes the contactor forward to contact the skin resulting in the sensation of touch. When the electromagnetic field is turned off, the rare earth magnet and the contactor return to their neutral positions due to the magnetic equilibrium caused by the interaction with the ferrous outer sleeve. The small size and flexible nature of the actuator assemblies permit incorporation into a glove, boot or body suit. The actuator has additional applications, such as, for example, as an accelerometer, an actuator for precisely controlled actuations or to simulate the sensation of braille letters.
Development of a virtual reality assessment of everyday living skills.
Ruse, Stacy A; Davis, Vicki G; Atkins, Alexandra S; Krishnan, K Ranga R; Fox, Kolleen H; Harvey, Philip D; Keefe, Richard S E
2014-04-23
Cognitive impairments affect the majority of patients with schizophrenia and these impairments predict poor long term psychosocial outcomes. Treatment studies aimed at cognitive impairment in patients with schizophrenia not only require demonstration of improvements on cognitive tests, but also evidence that any cognitive changes lead to clinically meaningful improvements. Measures of "functional capacity" index the extent to which individuals have the potential to perform skills required for real world functioning. Current data do not support the recommendation of any single instrument for measurement of functional capacity. The Virtual Reality Functional Capacity Assessment Tool (VRFCAT) is a novel, interactive gaming based measure of functional capacity that uses a realistic simulated environment to recreate routine activities of daily living. Studies are currently underway to evaluate and establish the VRFCAT's sensitivity, reliability, validity, and practicality. This new measure of functional capacity is practical, relevant, easy to use, and has several features that improve validity and sensitivity of measurement of function in clinical trials of patients with CNS disorders.
V-Man Generation for 3-D Real Time Animation. Chapter 5
NASA Technical Reports Server (NTRS)
Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang
2007-01-01
The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real-time with a new generation of 3D virtual characters: The V-Men. It combines several innovative algorithms coming from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon its unique set of skills manufactured during the character creation. The key to the system is the automated creation of realistic V-Men, not requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.
Virtual Simulations: A Creative, Evidence-Based Approach to Develop and Educate Nurses.
Leibold, Nancyruth; Schwarz, Laura
2017-02-01
The use of virtual simulations in nursing is an innovative strategy that is increasing in application. There are several terms related to virtual simulation; although some are used interchangeably, the meanings are not the same. This article presents examples of virtual simulation, virtual worlds, and virtual patients in continuing education, staff development, and academic nursing education. Virtual simulations in nursing use technology to provide safe, as realistic as possible clinical practice for nurses and nursing students. Virtual simulations are useful for learning new skills; practicing a skill that puts content, high-order thinking, and psychomotor elements together; skill competency learning; and assessment for low-volume, high-risk skills. The purpose of this article is to describe the related terms, examples, uses, theoretical frameworks, challenges, and evidence related to virtual simulations in nursing.
NASA Astrophysics Data System (ADS)
Ding, Yea-Chung
2010-11-01
In recent years national parks worldwide have introduced online virtual tourism, through which potential visitors can search for tourist information. Most virtual tourism websites are a simulation of an existing location, usually composed of panoramic images, a sequence of hyperlinked still or video images, and/or virtual models of the actual location. As opposed to actual tourism, a virtual tour is typically accessed on a personal computer or an interactive kiosk. Using modern Digital Earth techniques such as high resolution satellite images, precise GPS coordinates and powerful 3D WebGIS, however, it's possible to create more realistic scenic models to present natural terrain and man-made constructions in greater detail. This article explains how to create an online scientific reality tourist guide for the Jinguashi Gold Ecological Park at Jinguashi in northern Taiwan, China. This project uses high-resolution Formosat 2 satellite images and digital aerial images in conjunction with DTM to create a highly realistic simulation of terrain, with 3DMAX used to add man-made constructions and vegetation. Using this 3D Geodatabase model in conjunction with INET 3D WebGIS software, we have found that the Digital Earth concept can greatly improve and expand the presentation of traditional online virtual tours on the websites.
Virtual reality: new method of teaching anorectal and pelvic floor anatomy.
Dobson, Howard D; Pearl, Russell K; Orsay, Charles P; Rasmussen, Mary; Evenhouse, Ray; Ai, Zhuming; Blew, Gregory; Dech, Fred; Edison, Marcia I; Silverstein, Jonathan C; Abcarian, Herand
2003-03-01
A clear understanding of the intricate spatial relationships among the structures of the pelvic floor, rectum, and anal canal is essential for the treatment of numerous pathologic conditions. Virtual-reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereoscopic-vision, viewer-centered perspective, large angles of view, and interactivity. We describe a novel virtual reality-based model designed to teach anorectal and pelvic floor anatomy, pathology, and surgery. A static physical model depicting the pelvic floor and anorectum was created and digitized at 1-mm intervals in a CT scanner. Multiple software programs were used along with endoscopic images to generate a realistic interactive computer model, which was designed to be viewed on a networked, interactive, virtual-reality display (CAVE or ImmersaDesk). A standard examination of ten basic anorectal and pelvic floor anatomy questions was administered to third-year (n = 6) and fourth-year (n = 7) surgical residents. A workshop using the Virtual Pelvic Floor Model was then given, and the standard examination was readministered so that it was possible to evaluate the effectiveness of the Digital Pelvic Floor Model as an educational instrument. Training on the Virtual Pelvic Floor Model produced substantial improvements in the overall average test scores for the two groups, with an overall increase of 41 percent (P = 0.001) and 21 percent (P = 0.0007) for third-year and fourth-year residents, respectively. Resident evaluations after the workshop also confirmed the effectiveness of understanding pelvic anatomy using the Virtual Pelvic Floor Model. This model provides an innovative interactive educational framework that allows educators to overcome some of the barriers to teaching surgical and endoscopic principles based on understanding highly complex three-dimensional anatomy. Using this collaborative, shared virtual-reality environment, teachers and students can interact from locations world-wide to manipulate the components of this model to achieve the educational goals of this project along with the potential for virtual surgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridge, Pete, E-mail: pete.bridge@qut.edu.au; Gunn, Therese; Kastanis, Lazaros
A novel realistic 3D virtual reality (VR) application has been developed to allow medical imaging students at Queensland University of Technology to practice radiographic techniques independently outside the usual radiography laboratory. A flexible agile development methodology was used to create the software rapidly and effectively. A 3D gaming environment and realistic models were used to engender presence in the software while tutor-determined gold standards enabled students to compare their performance and learn in a problem-based learning pedagogy. Students reported high levels of satisfaction and perceived value and the software enabled up to 40 concurrent users to prepare for clinical practice. Student feedback also indicated that they found 3D to be of limited value in the desktop version compared to the usual 2D approach. A randomised comparison between groups receiving software-based and traditional practice measured performance in a formative role play with real equipment. The results of this work indicated superior performance with the equipment for the VR trained students (P = 0.0366) and confirmed the value of VR for enhancing 3D equipment-based problem-solving skills. Students practising projection techniques virtually performed better at role play assessments than students practising in a traditional radiography laboratory only. The application particularly helped with 3D equipment configuration, suggesting that teaching 3D problem solving is an ideal use of such medical equipment simulators. Ongoing development work aims to establish the role of VR software in preparing students for clinical practice with a range of medical imaging equipment.
Microbial colonization of halite from the hyper-arid Atacama Desert studied by Raman spectroscopy.
Vítek, P; Edwards, H G M; Jehlicka, J; Ascaso, C; De los Ríos, A; Valea, S; Jorge-Villar, S E; Davila, A F; Wierzchos, J
2010-07-13
The hyper-arid core of the Atacama Desert (Chile) is the driest place on Earth and is considered a close analogue to the extremely arid conditions on the surface of Mars. Microbial life is very rare in soils of this hyper-arid region, and autotrophic micro-organisms are virtually absent. Instead, photosynthetic micro-organisms have successfully colonized the interior of halite crusts, which are widespread in the Atacama Desert. These endoevaporitic colonies are an example of life that has adapted to the extreme dryness by colonizing the interior of rocks that provide enhanced moisture conditions. As such, these colonies represent a novel example of potential life on Mars. Here, we present non-destructive Raman spectroscopical identification of these colonies and their organic remnants. Spectral signatures revealed the presence of UV-protective biomolecules as well as light-harvesting pigments pointing to photosynthetic activity. Compounds of biogenic origin identified within these rocks differed depending on the origins of specimens from particular areas in the desert, with differing environmental conditions. Our results also demonstrate the capability of Raman spectroscopy to identify biomarkers within rocks that have a strong astrobiological potential.
Wei, Gaofeng; Tang, Gang; Fu, Zengliang; Sun, Qiuming; Tian, Feng
2010-10-01
The China Mechanical Virtual Human (CMVH) is a human musculoskeletal biomechanical simulation platform based on China Visible Human slice images; it has great significance for realistic applications. This paper introduces the construction method of the CMVH 3D models. A simulation system solution based on Creator/Vega is then put forward to handle the complex and massive data of the 3D models. Finally, combined with MFC technology, the CMVH simulation system is developed and a running simulation scene is presented. This paper provides a new way to apply CMVH in virtual reality.
Virtual museum of Japanese Buddhist temple features for intercultural communication
NASA Astrophysics Data System (ADS)
Kawai, Takashi; Takao, Hidenobu; Inoue, Tetsuri; Miyamoto, Hiroyuki; Noro, Kageyu
1998-04-01
This paper describes the production and presentation of an experimental virtual museum of Japanese Buddhist art. This medium can provide an easy way to introduce a cultural heritage to people of different cultures. The virtual museum consisted of a multimedia program that included stereoscopic 3D movies of Buddhist statues; binaural 3D sounds of Buddhist ceremonies and the fragrance of incense from the Buddhist temple. The aim was to reproduce both the Buddhist artifacts and atmosphere as realistically as possible.
Hyper-X Stage Separation: Simulation Development and Results
NASA Technical Reports Server (NTRS)
Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.
2001-01-01
This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14 degree of freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.
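A generic Monte Carlo risk estimate of the kind described, not SepSim itself, can be sketched as follows: a few separation parameters are dispersed, a placeholder separation check is applied to each sample, and the failure probability is estimated with a confidence bound. The dispersed parameters, their distributions, and the failure criterion are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs = 10_000

# Dispersed inputs (all values and distributions are assumed for illustration).
ejection_dv = rng.normal(3.0, 0.2, n_runs)       # relative separation velocity, m/s
pitch_rate  = rng.normal(0.0, 1.5, n_runs)       # deg/s
cg_offset   = rng.normal(0.0, 0.01, n_runs)      # m

def separation_fails(dv, q, dx):
    # Placeholder criterion: insufficient relative velocity or excessive attitude motion.
    return (dv < 2.5) | (np.abs(q + 50.0 * dx) > 4.0)

failures = separation_fails(ejection_dv, pitch_rate, cg_offset)
p_fail = failures.mean()
ci = 1.96 * np.sqrt(p_fail * (1 - p_fail) / n_runs)
print(f"estimated failure probability: {p_fail:.4f} (+/- {ci:.4f})")
```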
NASA Astrophysics Data System (ADS)
Xu, Yunjun; Remeikas, Charles; Pham, Khanh
2014-03-01
Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal solutions is low. In particular, the feasible solution can be computed in a very quick fashion. The minimum energy trajectory planning for a group of robots in an obstacle-laden environment is simulated to showcase the advantages of the proposed algorithms.
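A highly simplified sketch of the virtual-leader/follower idea is given below: each follower independently steers toward its formation slot relative to a virtual leader while being repelled from a nearby obstacle. The steering law, gains, and geometry are illustrative assumptions and do not reproduce the paper's hierarchical optimization scheme.

```python
import numpy as np

dt, steps = 0.1, 200
offsets = np.array([[-0.5, 0.5], [-0.5, -0.5]])        # desired slots behind the leader
followers = np.array([[0.0, 1.0], [0.0, -1.0]])
obstacle, r_safe = np.array([5.0, 0.2]), 0.6

for k in range(steps):
    leader = np.array([0.05 * k, np.sin(0.05 * k)])     # virtual leader trajectory
    for i, slot in enumerate(offsets):
        goal = leader + slot                             # follower's formation slot
        attract = goal - followers[i]                    # track the slot
        away = followers[i] - obstacle
        dist = np.linalg.norm(away)
        repel = (away / dist) * max(r_safe - dist, 0.0) * 5.0   # push away when too close
        followers[i] += dt * (1.5 * attract + repel)

print(followers)    # followers end near their slots relative to the leader
```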
Virtual reality training and assessment in laparoscopic rectum surgery.
Pan, Jun J; Chang, Jian; Yang, Xiaosong; Liang, Hui; Zhang, Jian J; Qureshi, Tahseen; Howell, Robert; Hickish, Tamas
2015-06-01
Virtual-reality (VR) based simulation techniques offer an efficient and low cost alternative to conventional surgery training. This article describes a VR training and assessment system in laparoscopic rectum surgery. To give a realistic visual performance of interaction between membrane tissue and surgery tools, a generalized cylinder based collision detection and a multi-layer mass-spring model are presented. A dynamic assessment model is also designed for hierarchy training evaluation. With this simulator, trainees can operate on the virtual rectum with both visual and haptic sensation feedback simultaneously. The system also offers surgeons instructions in real time when improper manipulation happens. The simulator has been tested and evaluated by ten subjects. This prototype system has been verified by colorectal surgeons through a pilot study. They believe the visual performance and the tactile feedback are realistic. It exhibits the potential to effectively improve the surgical skills of trainee surgeons and significantly shorten their learning curve. Copyright © 2014 John Wiley & Sons, Ltd.
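A minimal sketch of the mass-spring ingredient is shown below, using semi-implicit Euler integration on a small grid of point masses with one node loaded by a "tool". The stiffness, damping, and time step are illustrative values, not the simulator's tuned parameters, and the grid stands in for the layered membrane model described above.

```python
import numpy as np

n = 10
pos = np.stack(np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n)), -1).reshape(-1, 2)
rest = pos.copy()
vel = np.zeros_like(pos)
k_s, damping, dt, mass = 200.0, 2.0, 1e-3, 0.01

# Horizontal and vertical springs between neighbouring nodes.
edges = [(i * n + j, i * n + j + 1) for i in range(n) for j in range(n - 1)] + \
        [(i * n + j, (i + 1) * n + j) for i in range(n - 1) for j in range(n)]
rest_len = {e: np.linalg.norm(rest[e[0]] - rest[e[1]]) for e in edges}

for step in range(2000):
    force = -damping * vel
    force[n * n // 2] += np.array([0.0, -1.0])          # "tool" pressing one node
    for a, b in edges:
        d = pos[b] - pos[a]
        length = np.linalg.norm(d)
        f = k_s * (length - rest_len[(a, b)]) * d / length   # linear spring force
        force[a] += f
        force[b] -= f
    vel += dt * force / mass                             # semi-implicit Euler update
    pos += dt * vel
    pos[0], pos[n - 1] = rest[0], rest[n - 1]            # pin two corner nodes

print(pos[n * n // 2])                                   # displaced node position
```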
Realistic Data-Driven Traffic Flow Animation Using Texture Synthesis.
Chao, Qianwen; Deng, Zhigang; Ren, Jiaping; Ye, Qianqian; Jin, Xiaogang
2018-02-01
We present a novel data-driven approach to populate virtual road networks with realistic traffic flows. Specifically, given a limited set of vehicle trajectories as the input samples, our approach first synthesizes a large set of vehicle trajectories. By taking the spatio-temporal information of traffic flows as a 2D texture, the generation of new traffic flows can be formulated as a texture synthesis process, which is solved by minimizing a newly developed traffic texture energy. The synthesized output captures the spatio-temporal dynamics of the input traffic flows, and the vehicle interactions in it strictly follow traffic rules. After that, we position the synthesized vehicle trajectory data to virtual road networks using a cage-based registration scheme, where a few traffic-specific constraints are enforced to maintain each vehicle's original spatial location and synchronize its motion in concert with its neighboring vehicles. Our approach is intuitive to control and scalable to the complexity of virtual road networks. We validated our approach through many experiments and paired comparison user studies.
A 3D virtual reality simulator for training of minimally invasive surgery.
Mi, Shao-Hua; Hou, Zeng-Gunag; Yang, Fan; Xie, Xiao-Liang; Bian, Gui-Bin
2014-01-01
For the last decade, remarkable progress has been made in the field of cardiovascular disease treatment. However, these complex medical procedures require a combination of rich experience and technical skills. In this paper, a 3D virtual reality simulator for core skills training in minimally invasive surgery is presented. The system can generate realistic 3D vascular models segmented from patient datasets, including a beating heart, and provide a real-time computation of force and force feedback module for surgical simulation. Instruments, such as a catheter or guide wire, are represented by a multi-body mass-spring model. In addition, a realistic user interface with multiple windows and real-time 3D views are developed. Moreover, the simulator is also provided with a human-machine interaction module that gives doctors the sense of touch during the surgery training, enables them to control the motion of a virtual catheter/guide wire inside a complex vascular model. Experimental results show that the simulator is suitable for minimally invasive surgery training.
A Storm's Approach; Hurricane Shelter Training in a Digital Age
NASA Technical Reports Server (NTRS)
Boyarsky, Andrew; Burden, David; Gronstedt, Anders; Jinman, Andrew
2012-01-01
New York City's Office of Emergency Management (OEM) originally ran hundreds of classroom based courses, where they brought together civil servants to learn how to run a Hurricane Shelter (HS). This approach was found to be costly, time consuming and lacked any sense of an impending disaster and need for emergency response. In partnership with the City of New York University School of Professional studies, Gronstedt Group and Daden Limited, the OEM wanted to create a simulation that overcame these issues, providing users with a more immersive and realistic approach at a lower cost. The HS simulation was built in the virtual world Second Life (SL). Virtual worlds are a genre of online communities that often take the form of a computer-based simulated environments, through which users can interact with one another and use or create objects. Using this technology allowed managers to apply their knowledge in both classroom and remote learning environments. The shelter simulation is operational 24/7, guiding users through a 4 1/2 hour narrative from start to finish. This paper will describe the rationale for the project, the technical approach taken - particularly the use of a web based authoring tool to create and manage the immersive simulation, and the results from operational use.
On the (a)symmetry between the perception of time and space in large-scale environments.
Riemer, Martin; Shine, Jonathan P; Wolbers, Thomas
2018-04-23
Cross-dimensional interference between spatial and temporal processing is well documented in humans, but the direction of these interactions remains unclear. The theory of metaphoric structuring states that space is the dominant concept influencing time perception, whereas time has little effect upon the perception of space. In contrast, theories proposing a common neuronal mechanism representing magnitudes argue for a symmetric interaction between space and time perception. Here, we investigated space-time interactions in realistic, large-scale virtual environments. Our results demonstrate a symmetric relationship between the perception of temporal intervals in the supra-second range and room size (experiment 1), but an asymmetric relationship between the perception of travel time and traveled distance (experiment 2). While the perception of time was influenced by the size of virtual rooms and by the distance traveled within these rooms, time itself affected only the perception of room size, but had no influence on the perception of traveled distance. These results are discussed in the context of recent evidence from rodent studies suggesting that subsets of hippocampal place and entorhinal grid cells can simultaneously code for space and time, providing a potential neuronal basis for the interactions between these domains. © 2018 Wiley Periodicals, Inc.
Ali, Saad; Qandeel, Monther; Ramakrishna, Rishi; Yang, Carina W
2018-02-01
Fluoroscopy-guided lumbar puncture (FGLP) is a basic procedural component of radiology residency and neuroradiology fellowship training. Performance of the procedure with limited experience is associated with increased patient discomfort as well as increased radiation dose, puncture attempts, and complication rate. Simulation in health care is a developing field that has potential for enhancing procedural training. We demonstrate the design and utility of a virtual reality simulator for performing FGLP. An FGLP module was developed on an ImmersiveTouch platform, which digitally reproduces the procedural environment with a hologram-like projection. From computed tomography datasets of healthy adult spines, we constructed a 3-D model of the lumbar spine and overlying soft tissues. We assigned different physical characteristics to each tissue type, which the user can experience through haptic feedback while advancing a virtual spinal needle. Virtual fluoroscopy as well as 3-D images can be obtained for procedural planning and guidance. The number of puncture attempts, the distance to the target, the number of fluoroscopic shots, and the approximate radiation dose can be calculated. Preliminary data from users who participated in the simulation were obtained in a postsimulation survey. All users found the simulation to be a realistic replication of the anatomy and procedure and would recommend to a colleague. On a scale of 1-5 (lowest to highest) rating the virtual simulator training overall, the mean score was 4.3 (range 3-5). We describe the design of a virtual reality simulator for performing FGLP and present the initial experience with this new technique. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Dynamic coupling of underactuated manipulators
NASA Astrophysics Data System (ADS)
Bergerman, Marcel; Lee, Christopher; Xu, Yangsheng
1994-08-01
In recent years, researchers have been turning their attention to so-called underactuated systems, where the term underactuated refers to the fact that the system has more joints than control actuators. Some examples of underactuated systems are robot manipulators with failed actuators; free-floating space robots, where the base can be considered as a virtual passive linkage in inertial space; legged robots with passive joints; and hyper-redundant (snake-like) robots with passive joints. These examples illustrate the importance of studying underactuated systems. For example, if some actuators of a conventional manipulator fail, the loss of one or more degrees of freedom may compromise an entire operation. In free-floating space systems, the base (satellite) can be considered as a 6-DOF device without positioning actuators. Finally, manipulators with passive joints and hyper-redundant robots with few actuators are important from the viewpoint of energy saving, lightweight design and compactness.
Transfer of Complex Skill Learning from Virtual to Real Rowing
Rauter, Georg; Sigrist, Roland; Koch, Claudio; Crivelli, Francesco; van Raai, Mark; Riener, Robert; Wolf, Peter
2013-01-01
Simulators are commonly used to train complex tasks. In particular, simulators are applied to train dangerous tasks, to save costs, and to investigate the impact of different factors on task performance. However, in most cases, the transfer of simulator training to the real task has not been investigated. Without a proof for successful skill transfer, simulators might not be helpful at all or even counter-productive for learning the real task. In this paper, the skill transfer of complex technical aspects trained on a scull rowing simulator to sculling on water was investigated. We assume if a simulator provides high fidelity rendering of the interactions with the environment even without augmented feedback, training on such a realistic simulator would allow similar skill gains as training in the real environment. These learned skills were expected to transfer to the real environment. Two groups of four recreational rowers participated. One group trained on water, the other group trained on a simulator. Within two weeks, both groups performed four training sessions with the same licensed rowing trainer. The development in performance was assessed by quantitative biomechanical performance measures and by a qualitative video evaluation of an independent, blinded trainer. In general, both groups could improve their performance on water. The used biomechanical measures seem to allow only a limited insight into the rowers' development, while the independent trainer could also rate the rowers' overall impression. The simulator quality and naturalism was confirmed by the participants in a questionnaire. In conclusion, realistic simulator training fostered skill gains to a similar extent as training in the real environment and enabled skill transfer to the real environment. In combination with augmented feedback, simulator training can be further exploited to foster motor learning even to a higher extent, which is subject to future work. PMID:24376518
ERIC Educational Resources Information Center
Koutromanos, George; Styliaras, Georgios; Christodoulou, Sotiris
2015-01-01
The aim of this study was to use the Technology Acceptance Model (TAM) in order to investigate the factors that influence student and in-service teachers' intention to use a spatial hypermedia application, the HyperSea, in their teaching. HyperSea is a modern hypermedia environment that takes advantage of space in order to display content nodes…
NASA Astrophysics Data System (ADS)
Callieri, M.; Debevec, P.; Pair, J.; Scopigno, R.
2005-06-01
Offline rendering techniques have nowadays reached an astonishing level of realism, but at the cost of long computation times. The new generation of programmable graphics hardware, on the other hand, makes it possible to implement in real time some of the visual effects previously available only in cinematographic production. In a collaboration between the Visual Computing Lab (ISTI-CNR) and the Institute for Creative Technologies of the University of Southern California, a real-time demo was developed that replicates a sequence from the short movie "The Parthenon" presented at SIGGRAPH 2004. The application is designed to run on an immersive reality system, making it possible for a user to perceive the virtual environment with cinematographic visual quality. In this paper we present the principal ideas of the project, discussing the design issues and technical solutions used for the real-time demo.
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided that the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
Brito, Elcia M S; Piñón-Castillo, Hilda A; Guyoneaud, Rémy; Caretta, César A; Gutiérrez-Corona, J Félix; Duran, Robert; Reyna-López, Georgina E; Nevárez-Moorillón, G Virginia; Fahy, Anne; Goñi-Urriza, Marisol
2013-01-01
Anthropogenic extreme environments are among the most interesting sites for the bioprospection of extremophiles, since the selection pressures may favor the presence of microorganisms of great interest for taxonomical and astrobiological research as well as for bioremediation technologies and industrial applications. In this work, T-RFLP and 16S rRNA gene library analyses were carried out to describe the autochthonous bacterial populations from an industrial waste characterized as hyper-alkaline (pH between 9 and 14), hyper-saline (around 100 PSU) and highly contaminated with metals, mainly chromium (from 5 to 18 g kg(-1)) and iron (from 2 to 108 g kg(-1)). Due to matrix interference with DNA extraction, a protocol optimization step was required in order to carry out molecular analyses. The most abundant populations, as evaluated by both T-RFLP and 16S rRNA gene library analyses, were affiliated with the Bacillus and Lysobacter genera. Lysobacter-related sequences were present in all three samples: the solid residue and the lixiviate sediments from both the dry and wet seasons. Sequences related to Thiobacillus were also found; although strains affiliated with this genus are known to tolerate metals, they have not previously been detected in alkaline environments. Together with Bacillus (already described as a metal reducer), such organisms could be of use in bioremediation technologies for reducing chromium, as well as in the prospection of enzymes of biotechnological interest.
"Virtual Cockpit Window" for a Windowless Aerospacecraft
NASA Technical Reports Server (NTRS)
Abernathy, Michael F.
2003-01-01
A software system processes navigational and sensory information in real time to generate a three-dimensional-appearing image of the external environment for viewing by crewmembers of a windowless aerospacecraft. The design of the particular aerospacecraft (the X-38) is such that the addition of a real transparent cockpit window to the airframe would have resulted in unacceptably large increases in weight and cost. When exerting manual control, an aircrew needs to see terrain, obstructions, and other features around the aircraft in order to land safely. The X-38 is capable of automated landing, but even when this capability is utilized, the crew still needs to view the external environment: From the very beginning of the United States space program, crews have expressed profound dislike for windowless vehicles. The wellbeing of an aircrew is considerably promoted by a three-dimensional view of terrain and obstructions. The present software system was developed to satisfy the need for such a view. In conjunction with a computer and display equipment that weigh less than would a real transparent window, this software system thus provides a virtual cockpit window. The key problem in the development of this software system was to create a realistic three-dimensional perspective view that is updated in real time. The problem was solved by building upon a pre-existing commercial program LandForm C3 that combines the speed of flight-simulator software with the power of geographic-information-system software to generate real-time, three-dimensional-appearing displays of terrain and other features of flight environments. In the development of the present software, the pre-existing program was modified to enable it to utilize real-time information on the position and attitude of the aerospacecraft to generate a view of the external world as it would appear to a person looking out through a window in the aerospacecraft. The development included innovations in realistic horizon-limit modeling, three-dimensional stereographic display, and interfaces for utilization of data from inertial-navigation devices, Global Positioning System receivers, and laser rangefinders.
Computer modeling describes gravity-related adaptation in cell cultures.
Alexandrov, Ludmil B; Alexandrova, Stoyana; Usheva, Anny
2009-12-16
Questions about the changes of biological systems in response to hostile environmental factors are important but not easy to answer. Often, the traditional description with differential equations is difficult due to the overwhelming complexity of the living systems. Another way to describe complex systems is by simulating them with phenomenological models such as the well-known evolutionary agent-based model (EABM). Here we developed an EABM to simulate cell colonies as a multi-agent system that adapts to hyper-gravity in starvation conditions. In the model, the cell's heritable characteristics are generated and transferred randomly to offspring cells. After a qualitative validation of the model at normal gravity, we simulate cellular growth in hyper-gravity conditions. The obtained data are consistent with previously confirmed theoretical and experimental findings for bacterial behavior in environmental changes, including the experimental data from the microgravity Atlantis and the Hypergravity 3000 experiments. Our results demonstrate that it is possible to utilize an EABM with realistic qualitative description to examine the effects of hypergravity and starvation on complex cellular entities.
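To make the modeling approach above concrete, here is a minimal sketch of an evolutionary agent-based model of a cell colony under hyper-gravity and starvation. It only illustrates the general EABM idea, not the published model: all parameter names, values, and the nutrient-sharing rule are assumptions introduced for the sketch.

```python
import random

class Cell:
    """One agent: heritable traits are generated randomly and passed to offspring."""
    def __init__(self, growth_rate, stress_tolerance):
        self.growth_rate = growth_rate            # nutrient uptake efficiency
        self.stress_tolerance = stress_tolerance  # resistance to the gravity-related cost
        self.energy = 1.0

    def step(self, gravity, nutrient_share):
        # Energy balance: uptake from the shared nutrient pool minus a cost
        # that grows with gravity and shrinks with the cell's tolerance.
        self.energy += self.growth_rate * nutrient_share - gravity / self.stress_tolerance
        return self.energy > 0.0                  # False -> the cell dies

    def divide(self):
        # Offspring inherit the parent's traits with small random perturbations.
        def mutate(x):
            return max(0.05, x + random.gauss(0.0, 0.05))
        self.energy /= 2.0
        return Cell(mutate(self.growth_rate), mutate(self.stress_tolerance))

def simulate(steps=300, gravity=0.3, nutrient_supply=50.0, seed=1):
    random.seed(seed)
    colony = [Cell(random.uniform(0.5, 1.5), random.uniform(0.5, 1.5)) for _ in range(100)]
    for _ in range(steps):
        if not colony:
            break
        share = nutrient_supply / len(colony)     # starvation: nutrients are shared
        survivors, newborns = [], []
        for cell in colony:
            if cell.step(gravity, share):
                survivors.append(cell)
                if cell.energy > 2.0:             # division threshold
                    newborns.append(cell.divide())
        colony = survivors + newborns
    return colony

if __name__ == "__main__":
    colony = simulate()
    mean_tol = sum(c.stress_tolerance for c in colony) / max(len(colony), 1)
    print(f"{len(colony)} cells after 300 steps; mean stress tolerance = {mean_tol:.2f}")
```

Because traits are perturbed at each division, a sustained gravity cost gradually shifts the surviving colony toward higher stress tolerance, which is the qualitative adaptation effect the abstract describes.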
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
This talk will overview the basic technologies related to the creation of virtual acoustic images and the potential for including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head-Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, it examines the notion that realistic simulation within a virtual acoustic display is optimized when head motion and reverberation cues are included within a perceptual model.
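As a hedged illustration of how an HRTF is typically applied in a virtual acoustic display (the sampling rate and the placeholder impulse responses below are assumptions; a real system would load measured HRIRs for the desired source direction), a mono signal is convolved with left- and right-ear head-related impulse responses to produce the binaural output:

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right head-related impulse responses
    to place it at the direction for which the HRIRs were measured."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 0 else stereo   # normalize to avoid clipping

if __name__ == "__main__":
    fs = 48000                                     # assumed sampling rate
    t = np.arange(fs) / fs
    tone = 0.5 * np.sin(2 * np.pi * 440 * t)       # 1 s test tone
    # Placeholder HRIRs (a real display would use measured responses): a short
    # interaural delay and level difference gives a crude lateralization cue.
    hrir_l = np.zeros(128); hrir_l[0] = 1.0
    hrir_r = np.zeros(128); hrir_r[30] = 0.6       # ~0.6 ms delay at 48 kHz
    stereo = render_binaural(tone, hrir_l, hrir_r)
    print("binaural output shape:", stereo.shape)
```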
AN-CASE NET-CENTRIC modeling and simulation
NASA Astrophysics Data System (ADS)
Baskinger, Patricia J.; Chruscicki, Mary Carol; Turck, Kurt
2009-05-01
The objective of mission training exercises is to immerse the trainees in an environment that enables them to train as they would fight. The integration of modeling and simulation environments that can seamlessly leverage Live systems and Virtual or Constructive models (LVC) as they become available offers a flexible and cost-effective solution for extending the "war-gaming" environment to a realistic mission experience while evolving the development of the net-centric enterprise. From concept to full production, the impact of new capabilities on the infrastructure and concept of operations can be assessed in the context of the enterprise, while also exposing them to the warfighter. Training is extended to tomorrow's tools, processes, and Tactics, Techniques and Procedures (TTPs). This paper addresses the challenges of a net-centric modeling and simulation environment that is capable of representing a net-centric enterprise. An overview of the Air Force Research Laboratory's (AFRL) Airborne Networking Component Architecture Simulation Environment (AN-CASE) is provided, as well as a discussion of how it is being used to assess technologies for the purpose of experimenting with new infrastructure mechanisms that enhance the scalability and reliability of the distributed mission operations environment.
Gomaa, Nasr H; Picó, F Xavier
2011-06-01
Water-limited hot environments are good examples of hyper-aridity. Trees are scarce in these environments, but some manage to survive, such as the tree Moringa peregrina. Understanding how trees maintain viable populations in extremely arid environments may provide insight into the adaptive mechanisms by which trees cope with extremely arid weather conditions. This understanding is relevant to the current increasing aridity in several regions of the world. Seed germination experiments were conducted to assess variation in seed mass, seed germination, and seedling traits of Moringa peregrina plants, and the correlations among these traits. A seed burial experiment was also designed to study the fate of M. peregrina seeds buried at two depths in the soil for two time periods. On average, seeds germinated in three days and seedling shoots grew 0.7 cm per day over three weeks. Larger seeds had shorter germination times and higher seedling growth rates than smaller seeds. Seeds remained quiescent in the soil, and germination was very high at both depths and burial times. The after-ripening time of Moringa peregrina seeds is short, and seeds germinate quickly after imbibition. M. peregrina may benefit in hyper-arid environments from seeds with larger mass, shorter germination times, and faster seedling growth rates. The results also illustrate the adjustment in allocation to seed biomass, and the correlations among seed and seedling traits, that allow M. peregrina to cope successfully with the aridity of its environment.
Pre-layout AC decoupling analysis with Mentor Graphics HyperLynx
NASA Astrophysics Data System (ADS)
Hnatiuc, Mihaela; Iov, Cǎtǎlin J.
2015-02-01
Considerable resources have been expended ever since humans became interested in discovering the world around them. Every discovery and scientific advance has taken a tremendous amount of time and money, and sometimes lives; all of these define the cost of a discovery and of the development process. Returning to electronics, this field has seen, over the last 20-30 years, a boom in technologies and opportunities. Thousands of pieces of equipment have been developed and placed on the market. The big difference between competitors is currently made by what we call time to market. A mobile phone, for instance, has a time to market of around six months, and the tendency is toward even shorter periods; that is, no more than six months should pass between the concept and the sale of the first model. That is why new approaches are needed. The one used extensively now is simulation, which we call virtual prototyping. Virtual prototyping takes into account more than the components alone; it also considers other project parameters that would affect the final product. Certified tools can handle such analyses. In our paper we present the case of HyperLynx, a tool developed by Mentor Graphics that assists the hardware designer throughout the design process from a thermal point of view. A test case board was analyzed at the pre-layout stage and the results are presented.
Application of virtual reality methods to obesity prevention and management research.
Persky, Susan
2011-03-01
There is a great need for empirical evidence to inform clinical prevention and management of overweight and obesity. Application of virtual reality (VR) methods to this research agenda could present considerable advantages. Use of VR methods in basic and applied obesity prevention and treatment research is currently extremely limited. However, VR has been employed for social and behavioral research in many other domains where it has demonstrated validity and utility. Advantages of VR technologies as research tools include the ability to situate hypothetical research scenarios in realistic settings, tight experimental control inherent in virtual environments, the ability to manipulate and control any and all scenario elements, and enhanced behavioral measurement opportunities. The means by which each of these features could enhance obesity prevention and management research is discussed and illustrated in the context of an example research study. Challenges associated with the application of VR methods, such as technological limitations and cost, are also considered. By employing experimental VR methods to interrogate clinical encounters and other health-related situations, researchers may be able to elucidate causal relationships, strengthen theoretical models, and identify potential targets for intervention. In so doing, researchers stand to make important contributions to evidence-based practice innovation in weight management and obesity prevention. © 2011 Diabetes Technology Society.
Using virtual reality to assess user experience.
Rebelo, Francisco; Noriega, Paulo; Duarte, Emília; Soares, Marcelo
2012-12-01
The aim of this article is to discuss how user experience (UX) evaluation can benefit from the use of virtual reality (VR). UX is usually evaluated in laboratory settings. However, considering that UX occurs as a consequence of the interaction between the product, the user, and the context of use, the assessment of UX can benefit from a more ecological test setting. VR provides the means to develop realistic-looking virtual environments with the advantage of allowing greater control of the experimental conditions while granting good ecological validity. The methods used to evaluate UX, as well as their main limitations, are identified. The current VR equipment and its potential applications (as well as its limitations and drawbacks) for overcoming some of the limitations in the assessment of UX are highlighted. The relevance of VR for UX studies is discussed, and a VR-based framework for evaluating UX is presented. UX research may benefit from a VR-based methodology in the scopes of user research (e.g., assessment of users' expectations derived from their lifestyles) and human-product interaction (e.g., assessment of users' emotions from the first moment of contact with the product and then during the interaction). This article provides knowledge to researchers and professionals engaged in the design of technological interfaces about the usefulness of VR in the evaluation of UX.
Data Visualization Using Immersive Virtual Reality Tools
NASA Astrophysics Data System (ADS)
Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.
2013-01-01
The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and XYZ positions encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own independent vantage points or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.
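A minimal sketch of the pseudo-3D encoding idea described above, outside of any game engine (the synthetic five-dimensional catalog is an assumption for illustration): three dimensions map to XYZ position, and additional dimensions are encoded as the color and size of each point.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 5))        # toy catalog with 5 measured dimensions

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Dimensions 1-3 -> spatial position; dimension 4 -> color; dimension 5 -> marker size.
sc = ax.scatter(data[:, 0], data[:, 1], data[:, 2],
                c=data[:, 3],
                s=20 + 15 * (data[:, 4] - data[:, 4].min()),
                cmap="viridis")
fig.colorbar(sc, label="dimension 4 (encoded as color)")
ax.set_xlabel("dim 1"); ax.set_ylabel("dim 2"); ax.set_zlabel("dim 3")
plt.show()
```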
Quadrado, Virgínia Helena; Silva, Talita Dias da; Favero, Francis Meire; Tonks, James; Massetti, Thais; Monteiro, Carlos Bandeira de Mello
2017-11-10
To examine whether performance improvements in the virtual environment generalize to the natural environment, we recruited 64 individuals: 32 with Duchenne muscular dystrophy (DMD) and 32 typically developing individuals. The groups practiced two coincidence timing tasks. In the more tangible button-press task, the individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key on the computer. In the more abstract task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment using a webcam. For individuals with DMD, performing a coincidence timing task in a virtual environment facilitated transfer to the real environment. However, we emphasize that a task practiced in a virtual environment should have a higher level of difficulty than a task practiced in a real environment. IMPLICATIONS FOR REHABILITATION: Virtual environments can be used to promote improved performance in 'real-world' environments. Virtual environments offer the opportunity to create paradigms similar to 'real-life' tasks; however, task complexity and difficulty levels can be manipulated, graded, and enhanced to increase the likelihood of success in transfer of learning and performance. Individuals with DMD, in particular, showed immediate performance benefits after using virtual reality.
Virtual reality haptic dissection.
Erolin, Caroline; Wilkinson, Caroline; Soames, Roger
2011-12-01
This project aims to create a three-dimensional digital model of the human hand and wrist which can be virtually 'dissected' through a haptic interface. Tissue properties will be added to the various anatomical structures to replicate a realistic look and feel. The project will explore the role of the medical artist and investigate cross-discipline collaborations in the field of virtual anatomy. The software will be used to train anatomy students in dissection skills before experience on a real cadaver. The effectiveness of the software will be evaluated both quantitatively and qualitatively.
Favre, Mônica R; La Mendola, Deborah; Meystre, Julie; Christodoulou, Dimitri; Cochrane, Melissa J; Markram, Henry; Markram, Kamila
2015-01-01
Understanding the effects of environmental stimulation in autism can improve therapeutic interventions against debilitating sensory overload, social withdrawal, fear and anxiety. Here, we evaluate the role of environmental predictability on behavior and protein expression, and inter-individual differences, in the valproic acid (VPA) model of autism. Male rats embryonically exposed (E11.5) either to VPA, a known autism risk factor in humans, or to saline, were housed from weaning into adulthood in a standard laboratory environment, an unpredictably enriched environment, or a predictably enriched environment. Animals were tested for sociability, nociception, stereotypy, fear conditioning and anxiety, and for tissue content of glutamate signaling proteins in the primary somatosensory cortex, hippocampus and amygdala, and of corticosterone in plasma, amygdala and hippocampus. Standard group analyses on separate measures were complemented with a composite emotionality score, using Cronbach's Alpha analysis, and with multivariate profiling of individual animals, using Hierarchical Cluster Analysis. We found that predictable environmental enrichment prevented the development of hyper-emotionality in the VPA-exposed group, while unpredictable enrichment did not. Individual variation in the severity of the autistic-like symptoms (fear, anxiety, social withdrawal and sensory abnormalities) correlated with neurochemical profiles, and predicted their responsiveness to predictability in the environment. In controls, the association between socio-affective behaviors, neurochemical profiles and environmental predictability was negligible. This study suggests that rearing in a predictable environment prevents the development of hyper-emotional features in animals exposed to an autism risk factor, and demonstrates that unpredictable environments can lead to negative outcomes, even in the presence of environmental enrichment.
Digitization and Visualization of Greenhouse Tomato Plants in Indoor Environments
Li, Dawei; Xu, Lihong; Tan, Chengxiang; Goodman, Erik D.; Fu, Daichang; Xin, Longjiao
2015-01-01
This paper is concerned with the digitization and visualization of potted greenhouse tomato plants in indoor environments. For the digitization, an inexpensive and efficient commercial stereo sensor—a Microsoft Kinect—is used to separate visual information about tomato plants from the background. Based on the Kinect, a 4-step approach that can automatically detect and segment stems of tomato plants is proposed, including acquisition and preprocessing of image data, detection of stem segments, removal of false detections, and automatic segmentation of stem segments. Correctly segmented texture samples including stems and leaves are then stored in a texture database for further usage. Two types of tomato plants—the cherry tomato variety and the ordinary variety—are studied in this paper. The stem detection accuracy (under a simulated greenhouse environment) for the cherry tomato variety is 98.4% at a true positive rate of 78.0%, whereas the detection accuracy for the ordinary variety is 94.5% at a true positive rate of 72.5%. In visualization, we combine L-system theory and digitized tomato organ texture data to build realistic 3D virtual tomato plant models that are capable of exhibiting various structures and poses in real time. In particular, we also simulate the growth process on virtual tomato plants by exerting controls on two L-systems via parameters concerning the age and the form of lateral branches. This research may provide useful visual cues for improving intelligent greenhouse control systems and meanwhile may facilitate research on artificial organisms. PMID:25675284
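A hedged sketch of the L-system mechanism underlying the virtual plants (the axiom, production rule, and iteration count are illustrative only; the actual models are parametric L-systems whose controls encode plant age and lateral-branch form, with digitized stem and leaf textures attached to the resulting geometry):

```python
def lsystem(axiom, rules, iterations):
    """Iteratively rewrite the axiom string with the production rules."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Toy bracketed L-system: F = grow a stem segment, [ ] = push/pop a branch,
# + / - = turn. A turtle-style renderer would interpret the resulting string
# as branching geometry and drape textures over it.
rules = {"F": "F[+F]F[-F]F"}
derivation = lsystem("F", rules, iterations=3)
print(len(derivation), "symbols:", derivation[:60], "...")
```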
Cornwell, Brian R.; Heller, Randi; Biggs, Arter; Pine, Daniel S.; Grillon, Christian
2012-01-01
Objective A detailed understanding of how individuals diagnosed with social anxiety disorder (SAD) respond physiologically under social-evaluative threat is lacking. We aimed to isolate the specific components of public speaking that trigger fear in vulnerable individuals and best discriminate among SAD and healthy individuals. Method Sixteen individuals diagnosed with SAD and 16 healthy individuals were asked to prepare and deliver a short speech in a virtual reality (VR) environment. The VR environment simulated standing center stage before a live audience and allowed us to gradually introduce social cues during speech anticipation. Startle eye-blink responses were elicited periodically by white noise bursts presented during anticipation, speech delivery, and recovery in VR, as well as outside VR during an initial habituation phase. Results SAD individuals reported greater distress and state anxiety than healthy individuals across the entire procedure (ps < .005). Analyses of startle reactivity revealed a robust group difference during speech anticipation in VR, specifically as audience members directed their eye gaze and turned their attention toward participants (p < .05, Bonferroni corrected). Conclusions The VR environment is sufficiently realistic to provoke fear and anxiety in individuals highly vulnerable to socially threatening situations. SAD individuals showed potentiated startle, indicative of a strong phasic fear response, specifically when they perceived themselves as occupying the focus of others' attention as speech time approached. Potentiated startle under social-evaluative threat indexes SAD-related fear of negative evaluation. PMID:21034683
Cho, Dongrae; Ham, Jinsil; Oh, Jooyoung; Park, Jeanho; Kim, Sayup; Lee, Nak-Kyu; Lee, Boreom
2017-10-24
Virtual reality (VR) is a computer technique that creates an artificial environment composed of realistic images, sounds, and other sensations. Many researchers have used VR devices to generate various stimuli and have utilized them to perform experiments or to provide treatment. In this study, the participants performed mental tasks using a VR device while physiological signals were measured: a photoplethysmogram (PPG), electrodermal activity (EDA), and skin temperature (SKT). In general, stress is an important factor that can influence the autonomic nervous system (ANS). Heart-rate variability (HRV) is known to be related to ANS activity, so we used an HRV derived from the PPG peak interval. In addition, the peak characteristics of the skin conductance (SC) from EDA and SKT variation can also reflect ANS activity; we utilized them as well. Then, we applied a kernel-based extreme-learning machine (K-ELM) to classify the stress levels induced by the VR task into five different stress conditions: baseline, mild stress, moderate stress, severe stress, and recovery. Twelve healthy subjects voluntarily participated in the study. Three physiological signals were measured in the stress environment generated by the VR device. As a result, the average classification accuracy was over 95% using K-ELM and the integrated feature (IT = HRV + SC + SKT). In addition, the proposed algorithm can be embedded in a microcontroller chip, since the K-ELM algorithm has a very short computation time. Therefore, a compact wearable device classifying stress levels using physiological signals can be developed.
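A minimal sketch of a kernel extreme-learning machine classifier of the kind referred to above (the RBF kernel, regularization constant, and the synthetic three-dimensional features standing in for the integrated HRV + SC + SKT feature are assumptions): training reduces to solving a single regularized linear system against the kernel matrix, which is what keeps the computation time short enough for embedded use.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernel ELM: beta = (I/C + K)^-1 T, prediction = K(x, X_train) @ beta."""
    def __init__(self, C=10.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        self.classes = np.unique(y)
        T = (y[:, None] == self.classes[None, :]).astype(float)  # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, Xnew):
        scores = rbf_kernel(Xnew, self.X, self.gamma) @ self.beta
        return self.classes[np.argmax(scores, axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for the integrated features, five stress conditions.
    X = np.vstack([rng.normal(loc=k, scale=0.6, size=(40, 3)) for k in range(5)])
    y = np.repeat(np.arange(5), 40)
    model = KernelELM().fit(X, y)
    print("training accuracy:", (model.predict(X) == y).mean())
```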
Control Room Training for the Hyper-X Project Utilizing Aircraft Simulation
NASA Technical Reports Server (NTRS)
Lux-Baumann, Jesica; Dees, Ray; Fratello, David
2006-01-01
The NASA Dryden Flight Research Center flew two Hyper-X research vehicles and achieved hypersonic speeds over the Pacific Ocean in March and November 2004. To train the flight and mission control room crew, the NASA Dryden simulation capability was utilized to generate telemetry and radar data, which was used in nominal and emergency mission scenarios. During these control room training sessions personnel were able to evaluate and refine data displays, flight cards, mission parameter allowable limits, and emergency procedure checklists. Practice in the mission control room ensured that all primary and backup Hyper-X staff were familiar with the nominal mission and knew how to respond to anomalous conditions quickly and successfully. This report describes the technology in the simulation environment and the Mission Control Center, the need for and benefit of control room training, and the rationale and results of specific scenarios unique to the Hyper-X research missions.
Control Room Training for the Hyper-X Program Utilizing Aircraft Simulation
NASA Technical Reports Server (NTRS)
Lux-Baumann, Jessica R.; Dees, Ray A.; Fratello, David J.
2006-01-01
The NASA Dryden Flight Research Center flew two Hyper-X Research Vehicles and achieved hypersonic speeds over the Pacific Ocean in March and November 2004. To train the flight and mission control room crew, the NASA Dryden simulation capability was utilized to generate telemetry and radar data, which was used in nominal and emergency mission scenarios. During these control room training sessions, personnel were able to evaluate and refine data displays, flight cards, mission parameter allowable limits, and emergency procedure checklists. Practice in the mission control room ensured that all primary and backup Hyper-X staff were familiar with the nominal mission and knew how to respond to anomalous conditions quickly and successfully. This paper describes the technology in the simulation environment and the mission control center, the need for and benefit of control room training, and the rationale and results of specific scenarios unique to the Hyper-X research missions.
Shared virtual environments for aerospace training
NASA Technical Reports Server (NTRS)
Loftin, R. Bowen; Voss, Mark
1994-01-01
Virtual environments have the potential to significantly enhance the training of NASA astronauts and ground-based personnel for a variety of activities. A critical requirement is the need to share virtual environments, in real or near real time, between remote sites. It has been hypothesized that the training of international astronaut crews could be done more cheaply and effectively by utilizing such shared virtual environments in the early stages of mission preparation. The Software Technology Branch at NASA's Johnson Space Center has developed the capability for multiple users to simultaneously share the same virtual environment. Each user generates the graphics needed to create the virtual environment. All changes of object position and state are communicated to all users so that each virtual environment maintains its 'currency.' Examples of these shared environments will be discussed and plans for the utilization of the Department of Defense's Distributed Interactive Simulation (DIS) protocols for shared virtual environments will be presented. Finally, the impact of this technology on training and education in general will be explored.
Benoit, Michel; Guerchouche, Rachid; Petit, Pierre-David; Chapoulie, Emmanuelle; Manera, Valeria; Chaurasia, Gaurav; Drettakis, George; Robert, Philippe
2015-01-01
Virtual reality (VR) opens up a vast number of possibilities in many domains of therapy. The primary objective of the present study was to evaluate the acceptability for elderly subjects of a VR experience using the image-based rendering virtual environment (IBVE) approach and secondly to test the hypothesis that visual cues using VR may enhance the generation of autobiographical memories. Eighteen healthy volunteers (mean age 68.2 years) presenting memory complaints with a Mini-Mental State Examination score higher than 27 and no history of neuropsychiatric disease were included. Participants were asked to perform an autobiographical fluency task in four conditions. The first condition was a baseline grey screen, the second was a photograph of a well-known location in the participant's home city (FamPhoto), and the last two conditions displayed VR, ie, a familiar image-based virtual environment (FamIBVE) consisting of an image-based representation of a known landmark square in the center of the city of experimentation (Nice) and an unknown image-based virtual environment (UnknoIBVE), which was captured in a public housing neighborhood containing unrecognizable building fronts. After each of the four experimental conditions, participants filled in self-report questionnaires to assess the task acceptability (levels of emotion, motivation, security, fatigue, and familiarity). CyberSickness and Presence questionnaires were also assessed after the two VR conditions. Autobiographical memory was assessed using a verbal fluency task and quality of the recollection was assessed using the "remember/know" procedure. All subjects completed the experiment. Sense of security and fatigue were not significantly different between the conditions with and without VR. The FamPhoto condition yielded a higher emotion score than the other conditions (P<0.05). The CyberSickness questionnaire showed that participants did not experience sickness during the experiment across the VR conditions. VR stimulates autobiographical memory, as demonstrated by the increased total number of responses on the autobiographical fluency task and the increased number of conscious recollections of memories for familiar versus unknown scenes (P<0.01). The study indicates that VR using the FamIBVE system is well tolerated by the elderly. VR can also stimulate recollections of autobiographical memory and convey familiarity of a given scene, which is an essential requirement for use of VR during reminiscence therapy.
The specificity of memory enhancement during interaction with a virtual environment.
Brooks, B M; Attree, E A; Rose, F D; Clifford, B R; Leadbetter, A G
1999-01-01
Two experiments investigated differences between active and passive participation in a computer-generated virtual environment in terms of spatial memory, object memory, and object location memory. It was found that active participants, who controlled their movements in the virtual environment using a joystick, recalled the spatial layout of the virtual environment better than passive participants, who merely watched the active participants' progress. Conversely, there were no significant differences between the active and passive participants' recall or recognition of the virtual objects, nor in their recall of the correct locations of objects in the virtual environment. These findings are discussed in terms of subject-performed task research and the specificity of memory enhancement in virtual environments.
Virtual reality cerebral aneurysm clipping simulation with real-time haptic feedback.
Alaraj, Ali; Luciano, Cristian J; Bailey, Daniel P; Elsenousi, Abdussalam; Roitberg, Ben Z; Bernardo, Antonio; Banerjee, P Pat; Charbel, Fady T
2015-03-01
With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. To develop and evaluate the usefulness of a new haptic-based virtual reality simulator in the training of neurosurgical residents. A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the ImmersiveTouch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomographic angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-dimensional immersive virtual reality environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from 3 residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Residents thought that the simulation would be useful in preparing for real-life surgery. About two-thirds of the residents thought that the 3-dimensional immersive anatomic details provided a close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They thought the simulation was useful for preoperative surgical rehearsal and neurosurgical training. A third of the residents thought that the technology in its current form provided realistic haptic feedback for aneurysm surgery. Neurosurgical residents thought that the novel immersive VR simulator is helpful in their training, especially because they do not get a chance to perform aneurysm clippings until late in their residency programs.
Tidoni, Emmanuele; Abu-Alqumsan, Mohammad; Leonardis, Daniele; Kapeller, Christoph; Fusco, Gabriele; Guger, Cristoph; Hintermuller, Cristoph; Peer, Angelika; Frisoli, Antonio; Tecchia, Franco; Bergamasco, Massimo; Aglioti, Salvatore Maria
2017-09-01
The development of technological applications that allow people to control and embody external devices within social interaction settings represents a major goal for current and future brain-computer interface (BCI) systems. Prior research has suggested that embodied systems may ameliorate BCI end-users' experience and accuracy in controlling external devices. Along these lines, we developed an immersive P300-based BCI application with a head-mounted display for virtual-local and robotic-remote social interactions and explored, in a group of healthy participants, the role of proprioceptive feedback in the control of a virtual surrogate (Study 1). Moreover, we compared the performance of a small group of people with spinal cord injury (SCI) to a control group of healthy subjects during virtual and robotic social interactions (Study 2), where both groups received proprioceptive stimulation. Our attempt to combine immersive environments, BCI technologies, and the neuroscience of body ownership suggests that providing realistic multisensory feedback still represents a challenge. Results have shown that healthy participants and people living with SCI used the BCI within the immersive scenarios with good levels of performance (as indexed by task accuracy, optimization calls, and Information Transfer Rate) and perceived control of the surrogates. Proprioceptive feedback did not alter performance measures or body ownership sensations. Further studies are necessary to test whether sensorimotor experience represents an opportunity to improve the use of future embodied BCI applications.
Validation of virtual reality as a tool to understand and prevent child pedestrian injury.
Schwebel, David C; Gaines, Joanna; Severson, Joan
2008-07-01
In recent years, virtual reality has emerged as an innovative tool for health-related education and training. Among the many benefits of virtual reality is the opportunity for novice users to engage unsupervised in a safe environment when the real environment might be dangerous. Virtual environments are only useful for health-related research, however, if behavior in the virtual world validly matches behavior in the real world. This study was designed to test the validity of an immersive, interactive virtual pedestrian environment. A sample of 102 children and 74 adults was recruited to complete simulated road-crossings in both the virtual environment and the identical real environment. In both the child and adult samples, construct validity was demonstrated via significant correlations between behavior in the virtual and real worlds. Results also indicate construct validity through developmental differences in behavior; convergent validity by showing correlations between parent-reported child temperament and behavior in the virtual world; internal reliability of various measures of pedestrian safety in the virtual world; and face validity, as measured by users' self-reported perception of realism in the virtual world. We discuss issues of generalizability to other virtual environments, and the implications for application of virtual reality to understanding and preventing pediatric pedestrian injuries.
[Virtual reality in neurosurgery].
Tronnier, V M; Staubert, A; Bonsanto, M M; Wirtz, C R; Kunze, S
2000-03-01
Virtual reality enables users to immerse themselves in a virtual three-dimensional world and to interact in this world. The simulation is different from the kind in computer games, in which the viewer is active but acts in a nonrealistic world, or on the TV screen, where we are passively driven in an active world. In virtual reality elements look realistic, they change their characteristics and have almost real-world unpredictability. Virtual reality is not only implemented in gambling dens and the entertainment industry but also in manufacturing processes (cars, furniture etc.), military applications and medicine. Especially the last two areas are strongly correlated, because telemedicine or telesurgery was originated for military reasons to operate on war victims from a secure distance or to perform surgery on astronauts in an orbiting space station. In medicine and especially neurosurgery virtual-reality methods are used for education, surgical planning and simulation on a virtual patient.
An Audio Architecture Integrating Sound and Live Voice for Virtual Environments
2002-09-01
implementation of a virtual environment. As real world training locations become scarce and training budgets are trimmed, training system developers ...look more and more towards virtual environments as the answer. Virtual environments provide training system developers with several key benefits
Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel-Robert; Amedi, Amir
2013-01-01
Virtual worlds and environments are becoming an increasingly central part of our lives, yet they are still far from accessible to the blind. This is especially unfortunate, as such environments hold great potential for uses such as social interaction, online education and, especially, familiarizing visually impaired users with a real environment virtually, from the comfort and safety of their own homes, before visiting it in the real world. We have implemented a simple algorithm to improve this situation using single-point depth information, enabling the blind to use a virtual cane, modeled on the "EyeCane" electronic travel aid, within any virtual environment with minimal pre-processing. Use of the Virtual-EyeCane enables this experience to potentially be used later in real-world environments with stimuli identical to those from the virtual environment. We demonstrate the quickly learned practical use of this algorithm for navigation in simple environments. PMID:23977316
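A hedged sketch of the single-point-depth idea behind the virtual cane (the occupancy-grid world, ray-marching step, and the linear distance-to-beep-rate mapping are assumptions for illustration, not the EyeCane's actual encoding): a ray is cast from the user's position along the pointing direction, and the distance to the first obstacle is converted into a simple auditory cue.

```python
import math

def cane_distance(grid, pos, angle_deg, max_range=10.0, step=0.05):
    """March a ray through an occupancy grid (1 = obstacle) and return the
    distance to the first occupied cell, or max_range if nothing is hit."""
    dx, dy = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    d = 0.0
    while d < max_range:
        x, y = pos[0] + d * dx, pos[1] + d * dy
        i, j = int(y), int(x)
        if i < 0 or j < 0 or i >= len(grid) or j >= len(grid[0]) or grid[i][j]:
            return d
        d += step
    return max_range

def distance_to_beep_rate(d, max_range=10.0, max_rate=20.0):
    # Closer obstacles -> faster beeps (assumed linear mapping for the sketch).
    return max_rate * (1.0 - min(d, max_range) / max_range)

if __name__ == "__main__":
    # 0 = free space, 1 = wall
    grid = [[0, 0, 0, 1],
            [0, 0, 0, 1],
            [0, 0, 0, 1],
            [0, 0, 0, 1]]
    d = cane_distance(grid, pos=(0.5, 2.0), angle_deg=0.0)   # pointing along +x
    print(f"obstacle at {d:.2f} units -> {distance_to_beep_rate(d):.1f} beeps/s")
```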
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
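A small sketch of the convergent-eye geometry that such a fixating-observer rendering setup reproduces (the interocular distance and fixation point are illustrative, and cyclotorsion, which the dataset also models, is omitted): each camera sits at an eye position and is oriented toward the common fixation point, and the vergence angle follows from the two gaze directions.

```python
import numpy as np

def fixating_eye_poses(fixation, iod=0.065):
    """Return left/right eye positions, unit gaze directions toward the
    fixation point, and the vergence angle (radians) between them."""
    fixation = np.asarray(fixation, dtype=float)
    left = np.array([-iod / 2.0, 0.0, 0.0])
    right = np.array([iod / 2.0, 0.0, 0.0])
    g_l = fixation - left;  g_l /= np.linalg.norm(g_l)
    g_r = fixation - right; g_r /= np.linalg.norm(g_r)
    vergence = np.arccos(np.clip(g_l @ g_r, -1.0, 1.0))
    return left, right, g_l, g_r, vergence

if __name__ == "__main__":
    # Fixation point 0.4 m straight ahead, i.e. within peripersonal space.
    _, _, gl, gr, verg = fixating_eye_poses([0.0, 0.0, 0.4])
    print(f"vergence angle = {np.degrees(verg):.2f} deg")
```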
Development of a Virtual Reality Assessment of Everyday Living Skills
Ruse, Stacy A.; Davis, Vicki G.; Atkins, Alexandra S.; Krishnan, K. Ranga R.; Fox, Kolleen H.; Harvey, Philip D.; Keefe, Richard S.E.
2014-01-01
Cognitive impairments affect the majority of patients with schizophrenia and these impairments predict poor long term psychosocial outcomes. Treatment studies aimed at cognitive impairment in patients with schizophrenia not only require demonstration of improvements on cognitive tests, but also evidence that any cognitive changes lead to clinically meaningful improvements. Measures of “functional capacity” index the extent to which individuals have the potential to perform skills required for real world functioning. Current data do not support the recommendation of any single instrument for measurement of functional capacity. The Virtual Reality Functional Capacity Assessment Tool (VRFCAT) is a novel, interactive gaming based measure of functional capacity that uses a realistic simulated environment to recreate routine activities of daily living. Studies are currently underway to evaluate and establish the VRFCAT’s sensitivity, reliability, validity, and practicality. This new measure of functional capacity is practical, relevant, easy to use, and has several features that improve validity and sensitivity of measurement of function in clinical trials of patients with CNS disorders. PMID:24798174
G2H--graphics-to-haptic virtual environment development tool for PC's.
Acosta, E; Temkin, B; Krummel, T M; Heinrichs, W L
2000-01-01
For surgical training and preparation, existing surgical virtual environments have shown great improvement; however, these improvements are mostly in the visual aspect. The incorporation of haptics into virtual reality-based surgical simulations would greatly enhance the sense of realism. To aid in the development of haptic surgical virtual environments, we have created a graphics-to-haptic (G2H) virtual environment development tool. G2H transforms graphical virtual environments (created or imported) into haptic virtual environments without programming. The G2H capability has been demonstrated using the complex 3D pelvic model of Lucy 2.0, the Stanford Visible Female. The pelvis was made haptic using G2H without any further programming effort.
The virtual dissecting room: Creating highly detailed anatomy models for educational purposes.
Zilverschoon, Marijn; Vincken, Koen L; Bleys, Ronald L A W
2017-01-01
Virtual 3D models are powerful tools for teaching anatomy. At the present day, there are many different digital anatomy models; most of these commercial applications are based on a 3D model of a human body reconstructed from images at 1 mm intervals. The use of even smaller intervals may result in more detail and a more realistic appearance of 3D anatomy models. The aim of this study was to create a realistic and highly detailed 3D model of the hand and wrist based on small-interval cross-sectional images, suitable for undergraduate and postgraduate teaching purposes, with the possibility of performing a virtual dissection in an educational application. In 115 transverse cross-sections from a human hand and wrist, segmentation was done by manually delineating 90 different structures. With the use of Amira, the segments were imported and a surface/polygon model was created, followed by smoothing of the surfaces in Mudbox. In 3D Coat software, the smoothed polygon models were automatically retopologized into quadrilateral formation and a UV map was added. In Mudbox, the textures of the 90 structures were depicted in a realistic way by using photos of real tissue; afterwards, height, gloss and specular maps were created to add a greater level of detail and realistic lighting to every structure. Unity was used to build a new software program that would support all the extra map features together with a preferred user interface. A 3D hand model has been created, containing 100 structures (90 at the start and 10 extra structures added along the way). The model can be used interactively by changing the transparency and manipulating single or grouped structures, thereby simulating a virtual dissection. This model can be used for a variety of teaching purposes, ranging from undergraduate medical students to residents in hand surgery. Studying hand and wrist anatomy using this model is cost-effective and not hampered by the limited access to real dissecting facilities. Copyright © 2016 Elsevier Inc. All rights reserved.
Grasping trajectories in a virtual environment adhere to Weber's law.
Ozana, Aviad; Berman, Sigal; Ganel, Tzvi
2018-06-01
Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to the object's size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements into a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment can differ from those performed in real space and are subject to irrelevant effects of perceptual information. Such an atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings for movement control within robotic and virtual environments are discussed.
ERIC Educational Resources Information Center
Cheng, Yufang; Huang, Ruowen
2012-01-01
The focus of this study is the use of a data glove to practice joint attention skills in a virtual reality environment for people with pervasive developmental disorder (PDD). The virtual reality environment provides a safe setting for people with PDD; in particular, when they make errors during practice in the virtual reality environment, there is no suffering or…
An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image
NASA Astrophysics Data System (ADS)
Yu, Zhijie; Yu, Hui; Wang, Chen-sheng
2014-11-01
Hyper-spectral remote sensing data are acquired by imaging the same area at multiple wavelengths and normally consist of hundreds of band images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and they have been widely used in environmental monitoring, mineral investigation, and military reconnaissance. However, because of the correspondingly large data volume, hyper-spectral images are difficult to transmit and store, and dimension reduction techniques are needed to address this problem. Because the hyper-spectral bands are highly correlated and highly redundant, dimension reduction is a feasible way to compress the data volume. This paper proposes a novel band selection-based dimension reduction method that adaptively selects the bands containing the most information and detail. The proposed method first applies principal component analysis (PCA) and then computes an index for every band. The indexes are ranked in order of magnitude from large to small, and based on a threshold the system adaptively selects the bands. The proposed method avoids the shortcomings of transform-based dimension reduction methods and prevents the original spectral information from being lost. Its performance was validated in several experiments, whose results show that the proposed algorithm can reduce the dimensions of a hyper-spectral image with little information loss by adaptively selecting band images.
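To make the band-selection pipeline above concrete, the following Python sketch follows the same outline (PCA on the band covariance, a per-band index, ranking from large to small, threshold-driven selection). The loading-based score and the rule of keeping as many bands as retained components are illustrative assumptions, not the index or threshold defined in the cited paper.

import numpy as np

def select_bands(cube, variance_threshold=0.9):
    # Adaptive band selection sketch for a hyperspectral cube of shape
    # (rows, cols, bands); returns the indices of the selected bands.
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                          # center each band
    cov = np.cov(X, rowvar=False)                # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # number of principal components explaining the requested variance
    k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(),
                            variance_threshold)) + 1
    # per-band index: variance-weighted loading energy on the kept components
    score = (eigvecs[:, :k] ** 2 * eigvals[:k]).sum(axis=1)
    order = np.argsort(score)[::-1]              # rank from large to small
    return np.sort(order[:k])                    # keep the top-ranked bands

Selecting original bands (rather than PCA scores) is what preserves the physical spectral information the abstract emphasizes; only the ranking uses the transform.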
ERIC Educational Resources Information Center
O'Connor, Eileen A.; Domingo, Jelia
2017-01-01
With the advent of open source virtual environments, the associated cost reductions, and the more flexible options, avatar-based virtual reality environments are within reach of educators. By using and repurposing readily available virtual environments, instructors can bring engaging, community-building, and immersive learning opportunities to…
The Fidelity of 'Feel': Emotional Affordance in Virtual Environments
2005-07-01
The Fidelity of "Feel": Emotional Affordance in Virtual Environments. Jacquelyn Ford Morie, Josh Williams, Aimee Dozois, Donat-Pierre Luigi... environment but also the participant. We do this with the focus on what emotional affordances this manipulation will provide. Our first evaluation scenario... emotionally affective VEs. Keywords: Immersive Environments, Virtual Environments, VEs, Virtual Reality, emotion, affordance, fidelity, presence
Coercive Narratives, Motivation and Role Playing in Virtual Worlds
2002-01-01
resource for making immersive virtual environments highly engaging. Interaction also appeals to our natural desire to discover. Reading a book contains... participation in an open-ended Virtual Environment (VE). I intend to take advantage of a participant's natural tendency to prefer interaction when possible... I hope this work will expand the potential of experience within virtual worlds. Keywords: Immersive Environments, Virtual Environments
NASA Technical Reports Server (NTRS)
Arias, Adriel (Inventor)
2016-01-01
The main objective of the Holodeck Testbed is to create a cost-effective, realistic, and highly immersive environment that can be used to train astronauts, carry out engineering analysis, develop procedures, and support various operations tasks. Currently, the Holodeck Testbed allows users to step into a simulated ISS (International Space Station) and interact with objects, as well as perform Extra Vehicular Activities (EVA) on the surface of the Moon or Mars. The Holodeck Testbed uses the products being developed in the Hybrid Reality Lab (HRL). The HRL is combining technologies for merging physical models with photo-realistic visuals to create a realistic and highly immersive environment. The lab also investigates technologies and concepts needed to integrate it with other testbeds, such as the gravity offload capability provided by the Active Response Gravity Offload System (ARGOS). My two main duties were to develop and animate models for use in the HRL environments and to work on a new way to interface with computers using Brain Computer Interface (BCI) technology. On my first task, I was able to create precise virtual tool models (accurate to thousandths or hundredths of an inch). To make these tools even more realistic, I produced animations for them so they would have the same mechanical features as the tools in real life. The computer models were also used to create 3D printed replicas that will be outfitted with tracking sensors. The sensors will allow the 3D printed models to align precisely with the computer models in the physical world and provide people with haptic/tactile feedback while wearing a VR (Virtual Reality) headset and interacting with the tools. Near the end of my internship the lab bought a professional-grade 3D scanner, with which I was able to replicate more intricate tools at a much more time-effective rate. The second task was to investigate the use of BCI to control objects inside the hybrid reality ISS environment. This task looked at using an Electroencephalogram (EEG) headset to collect brain state data that could be mapped to commands for a computer to execute. On this task, I had a setback with the hardware, which stopped working and was returned to the vendor for repair. However, I was still able to collect some data, process it, and start to create correlation algorithms between the electrical patterns in the brain and the commands we wanted the computer to carry out. I also carried out a test to investigate the comfort of the headset when worn for a long time. The knowledge gained will benefit me in my future career. I learned how to use various modeling and programming tools, including Blender, Maya, Substance Painter, Artec Studio, Github, and Unreal Engine 4. I learned how to use a professional-grade 3D scanner and 3D printer. On the BCI project I learned about data mining and how to create correlation algorithms. I also supported various demos, including a live demo of the Hybrid Reality Lab capabilities at ComicPalooza. This internship has given me a good look into engineering at NASA. I developed a more thorough understanding of engineering and my overall confidence has grown. I have also realized that any problem can be fixed if you try hard enough, and that as an engineer it is your job not only to fix problems but to embrace coming up with solutions to them.
Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting
NASA Astrophysics Data System (ADS)
Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang
In this paper we studied the geometry of a three-dimensional tableau from a single realist painting, Scott Fraser's Three way vanitas (2006). The tableau contains a carefully chosen, complex arrangement of objects, including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror and symmetric to the artist's viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and the images in the three plane mirrors depicted within the painting.
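For readers unfamiliar with the camera-model terminology, the sketch below shows the standard decomposition of a 3x4 pinhole projection matrix P = K[R | t] into intrinsic (K) and extrinsic (R, t) parameters via RQ factorization. It is a generic textbook step, not the authors' painting-specific estimation of P from the mirror-frame geometry.

import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    # Split a 3x4 projection matrix into intrinsics K, rotation R,
    # translation t, and camera center C (up to the usual scale ambiguity).
    M = P[:, :3]
    K, R = rq(M)                                 # M = K R, K upper triangular
    S = np.diag(np.sign(np.diag(K)))             # force positive diagonal on K
    K, R = K @ S, S @ R
    scale = K[2, 2]
    K = K / scale                                # normalize so K[2, 2] == 1
    t = np.linalg.solve(K, P[:, 3]) / scale
    if np.linalg.det(R) < 0:                     # overall sign of P is arbitrary
        R, t = -R, -t
    C = -R.T @ t                                 # camera center in world frame
    return K, R, t, C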
Multi-ray medical ultrasound simulation without explicit speckle modelling.
Tuzer, Mert; Yazıcı, Abdulkadir; Türkay, Rüştü; Boyman, Michael; Acar, Burak
2018-05-04
To develop a medical ultrasound (US) simulation method that uses T1-weighted magnetic resonance images (MRI) as input and offers a compromise between low-cost ray-based and high-cost realistic wave-based simulations. The proposed method uses a novel multi-ray image formation approach with a virtual phased array transducer probe. A domain model is built from the input MR images. Multiple virtual acoustic rays emanate from each element of the linear transducer array. Reflected and transmitted acoustic energy at discrete points along each ray is computed independently. Simulated US images are computed by fusing the reflected energy along multiple rays from multiple transducers, with phase delays due to differences in distances to the transducers taken into account. A preliminary implementation using GPUs is presented. Preliminary results show that the multi-ray approach is capable of automatically generating viewpoint-dependent, realistic US images with an inherent Rician-distributed speckle pattern. The proposed simulator can reproduce shadowing artefacts and demonstrates frequency dependence apt for practical training purposes. We also present preliminary results on the use of the method for real-time simulation. The proposed method offers a low-cost, near-real-time, wave-like simulation of realistic US images from input MR data. It can further be improved to cover pathological findings using an improved domain model, without any algorithmic updates. Such a domain model would require lesion segmentation or manual embedding of virtual pathologies for training purposes.
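As a hedged illustration of the per-ray energy bookkeeping described above, the sketch below tracks reflected and transmitted intensity at discrete points along one ray using the acoustic impedance mismatch at each step. The step size, attenuation constant, and single-ray geometry are assumptions for illustration; the multi-ray fusion, phase delays, speckle formation, and GPU implementation of the cited method are not reproduced.

import numpy as np

def echoes_along_ray(impedances, step_mm=0.5, atten_db_per_mm=0.05):
    # impedances: acoustic impedance sampled at each point along one ray
    # (in the paper these would come from an MRI-based domain model).
    # Returns the echo intensity received from each sample point.
    impedances = np.asarray(impedances, dtype=float)
    energy = 1.0                                  # energy entering the ray
    echoes = np.zeros(len(impedances))
    for i in range(1, len(impedances)):
        z1, z2 = impedances[i - 1], impedances[i]
        r = ((z2 - z1) / (z2 + z1)) ** 2          # intensity reflection coefficient
        round_trip_db = 2 * i * step_mm * atten_db_per_mm
        echoes[i] = energy * r * 10 ** (-round_trip_db / 10)
        energy *= (1 - r)                         # remainder is transmitted onward
    return echoes

A full image would fuse such profiles from many rays and many transducer elements, with the phase delays the abstract mentions.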
Uterus models for use in virtual reality hysteroscopy simulators.
Niederer, Peter; Weiss, Stephan; Caduff, Rosmarie; Bajka, Michael; Szekély, Gabor; Harders, Matthias
2009-05-01
Virtual reality models of human organs are needed in surgery simulators which are developed for educational and training purposes. A simulation can only be useful, however, if the mechanical performance of the system in terms of force-feedback for the user as well as the visual representation is realistic. We therefore aim at developing a mechanical computer model of the organ in question which yields realistic force-deformation behavior under virtual instrument-tissue interactions and which, in particular, runs in real time. The modeling of the human uterus is described as it is to be implemented in a simulator for minimally invasive gynecological procedures. To this end, anatomical information which was obtained from specially designed computed tomography and magnetic resonance imaging procedures as well as constitutive tissue properties recorded from mechanical testing were used. In order to achieve real-time performance, the combination of mechanically realistic numerical uterus models of various levels of complexity with a statistical deformation approach is suggested. In view of mechanical accuracy of such models, anatomical characteristics including the fiber architecture along with the mechanical deformation properties are outlined. In addition, an approach to make this numerical representation potentially usable in an interactive simulation is discussed. The numerical simulation of hydrometra is shown in this communication. The results were validated experimentally. In order to meet the real-time requirements and to accommodate the large biological variability associated with the uterus, a statistical modeling approach is demonstrated to be useful.
JIMM: the next step for mission-level models
NASA Astrophysics Data System (ADS)
Gump, Jamieson; Kurker, Robert G.; Nalepka, Joseph P.
2001-09-01
The Simulation Based Acquisition (SBA) process is one in which the planning, design, and test of a weapon system or other product is done through the more effective use of modeling and simulation, information technology, and process improvement. This process results in a product that is produced faster, cheaper, and more reliably than its predecessors. Because the SBA process requires realistic and detailed simulation conditions, it was necessary to develop a simulation tool that would provide a simulation environment acceptable for SBA analysis. The Joint Integrated Mission Model (JIMM) was created to help define and meet the analysis, test and evaluation, and training requirements of a Department of Defense program utilizing SBA. Through its generic representation of simulation entities, its data analysis capability, and its robust configuration management process, JIMM can be used to support a wide range of simulation applications as both a constructive and a virtual simulation tool. JIMM is a Mission Level Model (MLM). An MLM is capable of evaluating the effectiveness and survivability of a composite force of air and space systems executing operational objectives in a specific scenario against an integrated air and space defense system. Because MLMs are useful for assessing a system's performance in a realistic, integrated threat environment, they are key to implementing the SBA process. JIMM is a merger of the capabilities of one legacy model, the Suppressor MLM, into another, the Simulated Warfare Environment Generator (SWEG) MLM. As a more capable MLM, JIMM will not only be a tool to support the SBA initiative, but could also provide the framework for the next generation of MLMs.
Virtual reality haptic human dissection.
Needham, Caroline; Wilkinson, Caroline; Soames, Roger
2011-01-01
This project aims to create a three-dimensional digital model of the human hand and wrist which can be virtually 'dissected' through a haptic interface. Tissue properties will be added to the various anatomical structures to replicate a realistic look and feel. The project will explore the role of the medical artist and investigate the cross-discipline collaborations required in the field of virtual anatomy. The software will be used to train anatomy students in dissection skills before they gain experience on a real cadaver. The effectiveness of the software will be evaluated both quantitatively and qualitatively.
Photogrammetry and remote sensing for visualization of spatial data in a virtual reality environment
NASA Astrophysics Data System (ADS)
Bhagawati, Dwipen
2001-07-01
Researchers in many disciplines have started using the tool of Virtual Reality (VR) to gain new insights into problems in their respective disciplines. Recent advances in computer graphics, software, and hardware technologies have created many opportunities for VR systems, advanced scientific and engineering applications being among them. In Geometronics, photogrammetry and remote sensing are generally used for management of spatial data inventories, and VR technology can also be suitably used for this purpose. This research demonstrates the usefulness of VR technology for inventory management by taking roadside features as a case study. Management of a roadside feature inventory involves positioning and visualization of the features. This research developed a methodology to demonstrate how photogrammetric principles can be used to position the features using video-logging images and GPS camera positions, and how image analysis can help produce appropriate textures for building the VR scene, which can then be visualized in a Cave Automatic Virtual Environment (CAVE). VR modeling was implemented in two stages to demonstrate different approaches for modeling the VR scene. A simulated highway scene was implemented with the brute-force approach, while modeling software was used to model the real-world scene using feature positions produced in this research. The first approach demonstrates an implementation of the scene by writing C++ code that includes a multi-level wand menu enabling the user to interact with the scene. The interactions include editing the features inside the CAVE display, navigating inside the scene, and performing limited geographic analysis. The second approach demonstrates the creation of a VR scene for a real roadway environment using feature positions determined in this research. The scene looks realistic, with textures from the real site mapped onto the geometry of the scene. Remote sensing and digital image processing techniques were used for texturing the roadway features in this scene.
Vogt, Tobias; Herpers, Rainer; Askew, Christopher D.; Scherfgen, David; Strüder, Heiko K.; Schneider, Stefan
2015-01-01
Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of a real exercise within a virtual environment alters sense of presence perception, or the accompanying physiological changes, is not known. In a randomized and controlled study design, moderate-intensity Exercise (i.e., self-paced cycling) and No-Exercise (i.e., automatic propulsion) trials were performed within three levels of virtual environment exposure. Each trial was 5 minutes in duration and was followed by posttrial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposures and this likely contributed to an enhanced sense of presence. PMID:26366305
ARCHAEO-SCAN: Portable 3D shape measurement system for archaeological field work
NASA Astrophysics Data System (ADS)
Knopf, George K.; Nelson, Andrew J.
2004-10-01
Accurate measurement and thorough documentation of excavated artifacts are essential tasks of archaeological fieldwork. The on-site recording and long-term preservation of fragile evidence can be improved using 3D spatial data acquisition and computer-aided modeling technologies. Once the artifact is digitized and its geometry created in a virtual environment, the scientist can manipulate the pieces in a virtual reality environment to develop a "realistic" reconstruction of the object without physically handling or gluing the fragments. The ARCHAEO-SCAN system is a flexible, affordable 3D coordinate data acquisition and geometric modeling system for acquiring surface and shape information from small- to medium-sized artifacts and bone fragments. The shape measurement system is being developed to enable the field archaeologist to manually sweep the non-contact sensor head across the relic or artifact surface. A series of unique data acquisition, processing, registration, and surface reconstruction algorithms is then used to integrate 3D coordinate information from multiple views into a single reference frame. A novel technique for automatically creating a hexahedral mesh of the recovered fragments is presented. The 3D model acquisition system is designed to operate from a standard laptop with minimal additional hardware and proprietary software support. The captured shape data can be pre-processed and displayed on site, stored digitally on a CD, or transmitted via the Internet to the researcher's home institution.
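The step of integrating 3D coordinates from multiple views into a single reference frame usually reduces, once point correspondences are available, to estimating a rigid transform. The sketch below is the standard SVD-based (Kabsch) solution; it assumes correspondences are already known, whereas the cited system's registration algorithms would establish them automatically.

import numpy as np

def rigid_align(src, dst):
    # Best-fit rotation R and translation t mapping src points onto dst points.
    # src, dst: (N, 3) arrays of corresponding 3D points from two scan views.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t                                   # aligned = (R @ src.T).T + t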
Collaborative Mission Design at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Gough, Kerry M.; Allen, B. Danette; Amundsen, Ruth M.
2005-01-01
NASA Langley Research Center (LaRC) has developed and tested two facilities dedicated to increasing efficiency in key mission design processes, including payload design, mission planning, and implementation plan development, among others. The Integrated Design Center (IDC) is a state-of-the-art concurrent design facility which allows scientists and spaceflight engineers to produce project designs and mission plans in a real-time collaborative environment, using industry-standard physics-based development tools and the latest communication technology. The Mission Simulation Lab (MiSL), a virtual reality (VR) facility focused on payload and project design, permits engineers to quickly translate their design and modeling output into enhanced three-dimensional models and then examine them in a realistic full-scale virtual environment. The authors were responsible for envisioning both facilities and turning those visions into fully operational mission design resources at LaRC with multiple advanced capabilities and applications. In addition, the authors have created a synergistic interface between these two facilities. This combined functionality is the Interactive Design and Simulation Center (IDSC), a meta-facility which offers project teams a powerful array of highly advanced tools, permitting them to rapidly produce project designs while maintaining the integrity of the input from every discipline expert on the project. The concept-to-flight mission support provided by IDSC has shown improved inter- and intra-team communication and a reduction in the resources required for proposal development, requirements definition, and design effort.
ERIC Educational Resources Information Center
Jiman, Juhanita
This paper discusses the use of Virtual Reality (VR) in e-learning environments where an intelligent three-dimensional (3D) virtual person plays the role of an instructor. With the existence of this virtual instructor, it is hoped that the teaching and learning in the e-environment will be more effective and productive. This virtual 3D animated…
Web-based Three-dimensional Virtual Body Structures: W3D-VBS
Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex
2002-01-01
Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495
Virtual reality for dermatologic surgery: virtually a reality in the 21st century.
Gladstone, H B; Raugi, G J; Berg, D; Berkley, J; Weghorst, S; Ganter, M
2000-01-01
In the 20th century, virtual reality predominantly played a role in training pilots and in the entertainment industry. Despite much publicity, virtual reality did not live up to its perceived potential. During the past decade, it has also been applied to medical uses, particularly as training simulators for minimally invasive surgery. Because of advances in computer technology, virtual reality is on the cusp of becoming an effective medical educational tool. At the University of Washington, we are developing a virtual reality soft tissue surgery simulator. Based on fast finite element modeling and running on a personal computer, this device can simulate three-dimensional human skin deformations with real-time tactile feedback. Although there are many cutaneous biomechanical challenges to solve, it will eventually provide more realistic dermatologic surgery training for medical students and residents than the currently used models.
Shader Lamps Virtual Patients: the physical manifestation of virtual patients.
Rivera-Gutierrez, Diego; Welch, Greg; Lincoln, Peter; Whitton, Mary; Cendan, Juan; Chesnutt, David A; Fuchs, Henry; Lok, Benjamin
2012-01-01
We introduce the notion of Shader Lamps Virtual Patients (SLVP) - the combination of projector-based Shader Lamps Avatars and interactive virtual humans. This paradigm uses Shader Lamps Avatars technology to give a 3D physical presence to conversational virtual humans, improving their social interactivity and enabling them to share the physical space with the user. The paradigm scales naturally to multiple viewers, allowing for scenarios where an instructor and multiple students are involved in the training. We have developed a physical-virtual patient for medical students to conduct ophthalmic exams, in an interactive training experience. In this experience, the trainee practices multiple skills simultaneously, including using a surrogate optical instrument in front of a physical head, conversing with the patient about his fears, observing realistic head motion, and practicing patient safety. Here we present a prototype system and results from a preliminary formative evaluation of the system.
Cybersickness and Anxiety During Simulated Motion: Implications for VRET.
Bruck, Susan; Watters, Paul
2009-01-01
Some clinicians have suggested using virtual reality environments to deliver psychological interventions to treat anxiety disorders. However, given a significant body of work on cybersickness symptoms that may arise in virtual environments, especially those involving simulated motion, we tested (a) whether exposure to a virtual reality environment alone causes anxiety to increase, and (b) whether exposure to simulated motion in a virtual reality environment increases anxiety. In a repeated measures design, we used Kim's Anxiety Scale questionnaire to compare baseline anxiety, anxiety after virtual environment exposure, and anxiety after simulated motion. While there was no significant effect on anxiety of being in a virtual environment with no simulated motion, the introduction of simulated motion caused anxiety to increase significantly, though not to a severe or extreme level. The implications of this work for virtual reality exposure therapy (VRET) are discussed.
The osmotic stress response of split influenza vaccine particles in an acidic environment.
Choi, Hyo-Jick; Kim, Min-Chul; Kang, Sang-Moo; Montemagno, Carlo D
2014-12-01
Oral influenza vaccine provides an efficient means of preventing seasonal and pandemic disease. In this work, the stability of envelope-type split influenza vaccine particles in acidic environments has been investigated. Because hyper-osmotic stress can significantly affect the lipid assembly of the vaccine, osmotic stress-induced morphological changes of split vaccine particles, together with structural changes of the antigenic proteins, were investigated using stopped-flow light scattering (SFLS), intrinsic fluorescence, transmission electron microscopy (TEM), and hemagglutination assay. Split vaccine particles were found to exhibit a step-wise morphological change in response to osmotic stress, owing to their double-layered wall structure. The presence of hyper-osmotic stress in acidic medium (0.3 osmolarity, pH 2.0) induced a significant level of membrane perturbation as measured by SFLS and TEM, imposing more damage to the antigenic proteins on the vaccine envelope than is caused by pH-induced conformational change under acidic iso-osmotic conditions. Further support was provided by the intrinsic fluorescence and hemagglutinin activity measurements. Thus, hyper-osmotic stress is an important factor in determining the stability of split vaccine particles in acidic medium. These results are useful for better understanding the destabilizing mechanism of split influenza vaccine particles in the gastric environment and for designing oral influenza vaccine formulations.
Augmented reality in medical education?
Kamphuis, Carolien; Barsom, Esther; Schijven, Marlies; Christoph, Noor
2014-09-01
Learning in the medical domain is to a large extent workplace learning and involves mastery of complex skills that require performance up to professional standards in the work environment. Since training in this real-life context is not always possible for reasons of safety, costs, or didactics, alternative ways are needed to achieve clinical excellence. Educational technology and more specifically augmented reality (AR) has the potential to offer a highly realistic situated learning experience supportive of complex medical learning and transfer. AR is a technology that adds virtual content to the physical real world, thereby augmenting the perception of reality. Three examples of dedicated AR learning environments for the medical domain are described. Five types of research questions are identified that may guide empirical research into the effects of these learning environments. Up to now, empirical research mainly appears to focus on the development, usability and initial implementation of AR for learning. Limited review results reflect the motivational value of AR, its potential for training psychomotor skills and the capacity to visualize the invisible, possibly leading to enhanced conceptual understanding of complex causality.
Virtual endoscopic imaging of the spine.
Kotani, Toshiaki; Nagaya, Shigeyuki; Sonoda, Masaru; Akazawa, Tsutomu; Lumawig, Jose Miguel T; Nemoto, Tetsuharu; Koshi, Takana; Kamiya, Koshiro; Hirosawa, Naoya; Minami, Shohei
2012-05-20
Prospective trial of virtual endoscopy in spinal surgery. To investigate the utility of virtual endoscopy of the spine in conjunction with spinal surgery. Several studies have described clinical applications of virtual endoscopy to visualize the inside of the bronchi, paranasal sinus, stomach, small intestine, pancreatic duct, and bile duct, but, to date, no study has described the use of virtual endoscopy in the spine. Virtual endoscopy is a realistic 3-dimensional intraluminal simulation of tubular structures that is generated by postprocessing of computed tomographic data sets. Five patients with spinal disease were selected: 2 patients with degenerative disease, 2 patients with spinal deformity, and 1 patient with spinal injury. Virtual endoscopy software allows an observer to explore the spinal canal with a mouse, using multislice computed tomographic data. Our study found that virtual endoscopy of the spine has advantages compared with standard imaging methods because surgeons can noninvasively explore the spinal canal in all directions. Virtual endoscopy of the spine may be useful to surgeons for diagnosis, preoperative planning, and postoperative assessment by obviating the need to mentally construct a 3-dimensional picture of the spinal canal from 2-dimensional computed tomographic scans.
den Brok, W L J E; Sterkenburg, P S
2015-01-01
Persons with an autism spectrum disorder and/or intellectual disability have difficulties in processing information, which impedes the learning of daily living skills and cognitive concepts. Technological aids support learning and, if used temporarily and in a self-controlled manner, may contribute to independent societal participation. This systematic review examines the studies that applied self-controlled technologies. The 28 relevant studies showed that skills and concepts are learned through prompting, interaction with devices, and practicing in (realistic) virtual environments. For attaining cognitive concepts, advanced technologies such as virtual reality are effective. Five studies focussed on cognitive concepts and two on emotion concepts. More research is necessary to examine the generalization of results and the effect of using technology for learning cognitive and emotional concepts. Implications for Rehabilitation: Persons with a moderate to mild intellectual disability and/or with autism can use self-controlled technology to learn new activities of daily living and cognitive concepts (e.g. time perception and imagination). Specific kinds of technologies can be used to learn specific kinds of skills (e.g. videos on computers or handheld devices for daily living skills; Virtual Reality for time perception and the emotions of others). For learning new cognitive concepts it is advisable to use more advanced technologies, as they have the potential to offer more features to support learning.
Virtual Reality Astronomy Education Using AAS WorldWide Telescope and Oculus Rift
NASA Astrophysics Data System (ADS)
Weigel, A. David; Moraitis, Christina D.
2017-01-01
The Boyd E. Christenberry Planetarium at Samford University (Birmingham, AL) offers family-friendly, live, and interactive planetarium presentations that educate the public on topics from astronomy basics to current cutting-edge astronomical discoveries. With limited funding, it is not possible to provide state-of-the-art planetarium hardware for these community audiences. In a society in which many people, even young children, have access to high-resolution smartphones and highly realistic video games, it is important to leverage cutting-edge technology to intrigue young and old minds alike. We use an Oculus Rift virtual reality headset running AAS WorldWide Telescope software to visualize 3D data in a fully immersive environment. We create interactive experiences and videos to highlight astronomical concepts and to communicate the beauty of our universe. The ease of portability enables us to set up a virtual reality (VR) experience at various events, festivals, and even in classrooms, providing a community outreach that a fixed planetarium cannot. This VR experience adds the "wow" factor that encourages children and adults to engage in our various planetarium events to learn more about astronomy and continue to explore the final frontier of space. These VR experiences also encourage our college students to participate in our astronomy education, resulting in increased interest in STEM fields, particularly physics and math.
3D Hybrid Simulations of Interactions of High-Velocity Plasmoids with Obstacles
NASA Astrophysics Data System (ADS)
Omelchenko, Y. A.; Weber, T. E.; Smith, R. J.
2015-11-01
Interactions of fast plasma streams and objects with magnetic obstacles (dipoles, mirrors, etc) lie at the core of many space and laboratory plasma phenomena ranging from magnetoshells and solar wind interactions with planetary magnetospheres to compact fusion plasmas (spheromaks and FRCs) to astrophysics-in-lab experiments. Properly modeling ion kinetic, finite-Larmor radius and Hall effects is essential for describing large-scale plasma dynamics, turbulence and heating in complex magnetic field geometries. Using an asynchronous parallel hybrid code, HYPERS, we conduct 3D hybrid (particle-in-cell ion, fluid electron) simulations of such interactions under realistic conditions that include magnetic flux coils, ion-ion collisions and the Chodura resistivity. HYPERS does not step simulation variables synchronously in time but instead performs time integration by executing asynchronous discrete events: updates of particles and fields carried out as frequently as dictated by local physical time scales. Simulations are compared with data from the MSX experiment which studies the physics of magnetized collisionless shocks through the acceleration and subsequent stagnation of FRC plasmoids against a strong magnetic mirror and flux-conserving boundary.
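The asynchronous, event-driven time integration attributed to HYPERS can be illustrated schematically with a priority queue of per-particle update events, each particle advancing on its own local time step. This toy sketch shows only the scheduling pattern; the field solve, collisions, Chodura resistivity, and parallel execution of the actual code are omitted, and the callback names are hypothetical.

import heapq

def run_async(particles, t_end, local_dt, push):
    # particles: dict pid -> mutable particle state.
    # local_dt(state): time step this particle currently needs
    #                  (e.g. a fraction of its local gyro-period).
    # push(state, t): advances one particle to time t (physics not shown).
    events = [(local_dt(state), pid) for pid, state in particles.items()]
    heapq.heapify(events)                         # earliest pending update first
    while events and events[0][0] <= t_end:
        t, pid = heapq.heappop(events)
        push(particles[pid], t)                   # update only this particle
        heapq.heappush(events, (t + local_dt(particles[pid]), pid))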
On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments
NASA Astrophysics Data System (ADS)
Çöltekin, A.; Lokka, I.; Zahner, M.
2016-06-01
Whether and when we should show data in 3D is an ongoing debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary or unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D, as it allows 'seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries and educational settings. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in the Scivis community. Among the many types of 3D visualizations, a popular one that is exploited both for visual analysis and for visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show data in 3D and, if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture the wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror mounted above it, so that luminosity over 360 degrees of the surroundings can be captured in a single image. We apply the light field method, a technique of Image-Based Rendering (IBR), for generating the arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera to construct the light field, so that many view directions can be collected in the light field. Our method thus allows the user to explore a wide scene and achieves a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior environment with an omni-directional camera and successfully generated arbitrary viewpoint images for a virtual tour of the environment.
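The idea of the light field as a queryable database of recorded rays can be sketched in a reduced "flatland" form: radiance is stored on a grid of capture positions and viewing directions, and a novel view is synthesized by looking up, for each desired ray, the nearest recorded one. This is a generic nearest-ray lookup under the assumption of no depth correction; it is not the omni-directional, hyperbolic-mirror construction of the cited work.

import numpy as np

def render_view(lightfield, cam_xs, angles, new_x, view_angles):
    # lightfield: (num_cams, num_angles) array of recorded radiance L(x, theta).
    # cam_xs: capture positions; angles: recorded viewing directions.
    # new_x, view_angles: the desired viewpoint and the directions to render.
    view_angles = np.asarray(view_angles, dtype=float)
    cam_idx = np.argmin(np.abs(np.asarray(cam_xs) - new_x))   # nearest capture point
    ang_idx = np.argmin(np.abs(np.asarray(angles)[None, :]
                               - view_angles[:, None]), axis=1)
    return lightfield[cam_idx, ang_idx]                        # one radiance per ray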
Reaction time for processing visual stimulus in a computer-assisted rehabilitation environment.
Sanchez, Yerly; Pinzon, David; Zheng, Bin
2017-10-01
To examine reaction times when human subjects process information presented in the visual channel in both a direct-vision and a virtual rehabilitation environment while walking. The visual stimuli included eight math problems displayed in the peripheral vision of seven healthy human subjects in a virtual rehabilitation training environment (computer-assisted rehabilitation environment, CAREN) and a direct-vision environment. Subjects were required to verbally report the results of these math calculations in a short period of time. Reaction time, measured by a Tobii eye tracker, and calculation accuracy were recorded and compared between the direct-vision and virtual rehabilitation environments. Performance outcomes measured for both groups included reaction time, reading time, answering time, and the verbal answer score. A significant difference between the groups was found only for reaction time (p = .004). Participants had more difficulty recognizing the first equation in the virtual environment. Participants' reaction times were faster in the direct-vision environment. This reaction time delay should be kept in mind when designing skill training scenarios in virtual environments. This was a pilot project for a series of studies assessing the cognitive ability of stroke patients undertaking a rehabilitation program in a virtual training environment. Implications for rehabilitation: eye tracking is a reliable tool that can be employed in virtual rehabilitation environments, and reaction time changes between direct vision and virtual environments.
Baumgartner, Thomas; Valko, Lilian; Esslen, Michaela; Jäncke, Lutz
2006-02-01
Using electroencephalography (EEG), psychophysiology, and psychometric measures, this is the first study to investigate the neurophysiological underpinnings of spatial presence. Spatial presence is considered a sense of being physically situated within a spatial environment portrayed by a medium (e.g., television, virtual reality). Twelve healthy children and 11 healthy adolescents watched different virtual roller coaster scenarios. During a control session, the roller coaster cab drove through a horizontal roundabout track. The subsequent realistic roller coaster rides consisted of spectacular ups, downs, and loops. Low-resolution brain electromagnetic tomography (LORETA) and event-related desynchronization (ERD) were used to analyze the EEG data. As expected, we found that, compared to the control condition, experiencing a virtual roller coaster ride evoked strong spatial presence (SP) experiences in both groups, increased electrodermal reactions, and activations in parietal brain areas known to be involved in spatial navigation. In addition, brain areas that receive homeostatic afferents from somatic and visceral sensations of the body were strongly activated. Most interestingly, children (compared to adolescents) reported higher spatial presence experiences and demonstrated a different frontal activation pattern. While adolescents showed increased activation in prefrontal areas known to be involved in the control of executive functions, children demonstrated decreased activity in these brain regions. Recent neuroanatomical and neurophysiological studies have shown that the frontal brain continues to develop to adult status well into adolescence. Thus, the results of our study imply that the increased spatial presence experience in children may result from the not yet fully developed control functions of the frontal cortex.
Fleming, Michael; Olsen, Dale; Stathes, Hilary; Boteler, Laura; Grossberg, Paul; Pfeifer, Judie; Schiro, Stephanie; Banning, Jane; Skochelak, Susan
2009-01-01
Educating physicians and other health care professionals about the identification and treatment of patients who drink more than recommended limits is an ongoing challenge. An educational randomized controlled trial was conducted to test the ability of a stand-alone training simulation to improve the clinical skills of health care professionals in alcohol screening and intervention. The "virtual reality simulation" combined video, voice recognition, and nonbranching logic to create an interactive environment that allowed trainees to encounter complex social cues and realistic interpersonal exchanges. The simulation included 707 questions and statements and 1207 simulated patient responses. A sample of 102 health care professionals (10 physicians; 30 physician assistants or nurse practitioners; 36 medical students; 26 pharmacy, physician assistant, or nurse practitioner students) was randomly assigned to a no-training group (n = 51) or a computer-based virtual reality intervention (n = 51). Professionals in both groups had similar pretest standardized patient alcohol screening skill scores: 53.2 (experimental) vs 54.4 (controls), 52.2 vs 53.7 for alcohol brief intervention skills, and 42.9 vs 43.5 for alcohol referral skills. After repeated practice with the simulation there were significant increases in the scores of the experimental group at 6 months after randomization compared with the control group for the screening (67.7 vs 58.1; P < .001) and brief intervention (58.3 vs 51.6; P < .04) scenarios. The technology tested in this trial is the first virtual reality simulation to demonstrate an increase in the alcohol screening and brief intervention skills of health care professionals.
Advanced 3-dimensional planning in neurosurgery.
Ferroli, Paolo; Tringali, Giovanni; Acerbi, Francesco; Schiariti, Marco; Broggi, Morgan; Aquino, Domenico; Broggi, Giovanni
2013-01-01
During the past decades, medical applications of virtual reality technology have been developing rapidly, ranging from a research curiosity to a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements, such as a feedback system, increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described. Different systems of medical image volume rendering have been used and analyzed for advanced 3-D planning: one is a commercial "ready-to-go" system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source-based software (3D Slicer, FSL, and FreeSurfer). Different neurosurgeons at our institution found that advanced 3-D planning before surgery facilitated and increased their understanding of the complex anatomic and pathological relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the available processing power of modern computers. Although it has been found useful for facilitating the understanding of complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.
Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.
Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O
2015-10-01
Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of inclusion and exclusion criteria in historical protocols. Both methods, discrete re-sampling and multivariate distributions, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the covariate combinations present in the available clinical data sets.
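The two covariate-simulation strategies compared above can be sketched as follows. The use of a multivariate normal fitted to the observed covariates, and the covariate matrix layout, are illustrative assumptions rather than the exact model of the cited analysis; real applications would also need to respect covariate bounds and categorical variables.

import numpy as np

def virtual_patients(observed, n, method="multivariate", seed=None):
    # observed: (n_obs, n_cov) array of real patient covariates
    # (e.g. age, weight, baseline FEV1 - column meanings are assumptions here).
    # "resample" draws whole rows from the observed pool (discrete re-sampling);
    # "multivariate" draws new vectors from a multivariate normal fitted to the
    # data, which can produce combinations absent from the original pool.
    rng = np.random.default_rng(seed)
    observed = np.asarray(observed, dtype=float)
    if method == "resample":
        return observed[rng.integers(0, len(observed), size=n)]
    mean = observed.mean(axis=0)
    cov = np.cov(observed, rowvar=False)          # preserves covariate correlations
    return rng.multivariate_normal(mean, cov, size=n)

Drawing whole rows preserves every observed correlation exactly but never extrapolates; the fitted distribution trades some fidelity for the flexibility the abstract highlights.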
Brain Activity on Navigation in Virtual Environments.
ERIC Educational Resources Information Center
Mikropoulos, Tassos A.
2001-01-01
Assessed the cognitive processing that takes place in virtual environments by measuring electrical brain activity using Fast Fourier Transform analysis. University students performed the same task in a real and a virtual environment, and eye movement measurements showed that all subjects were more attentive when navigating in the virtual world.…
Eglin virtual range database for hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
Talele, Sunjay E.; Pickard, J. W., Jr.; Owens, Monte A.; Foster, Joseph; Watson, John S.; Amick, Mary Amenda; Anthony, Kenneth
1998-07-01
Realistic backgrounds are necessary to support high fidelity hardware-in-the-loop testing. Advanced avionics and weapon system sensors are driving the requirement for higher resolution imagery. The model-test-model philosophy being promoted by the T&E community is resulting in the need for backgrounds that are realistic or virtual representations of actual test areas. Combined, these requirements led to a major upgrade of the terrain database used for hardware-in-the-loop testing at the Guided Weapons Evaluation Facility (GWEF) at Eglin Air Force Base, Florida. This paper describes the process used to generate the high-resolution (1-foot) database of ten sites totaling over 20 square kilometers of the Eglin range. This process involved generating digital elevation maps from stereo aerial imagery and classifying ground cover material using spectral content. These databases were then optimized for real-time operation at 90 Hz.
Usability Studies in Virtual and Traditional Computer Aided Design Environments for Spatial Awareness
2017-08-08
Dr. Syed Adeel Ahmed, Xavier University of... virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. In... navigate through a virtual environment. The wand interface provides a significantly improved means of interaction. This study quantitatively measures the...
NASA Technical Reports Server (NTRS)
Freeman, Delman C., Jr.; Reubush, Daivd E.; McClinton, Charles R.; Rausch, Vincent L.; Crawford, J. Larry
1997-01-01
This paper provides an overview of NASA's Hyper-X Program, a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an overview of the flight test program, research objectives, approach, schedule, and status. A substantial experimental database and concept validation have been completed. The program is currently concentrating on development, verification, and validation of the first, Mach 7, vehicle in preparation for wind-tunnel testing in 1998 and flight testing in 1999. In parallel with this effort, the Mach 5 and Mach 10 vehicle designs are being finalized. Detailed analytical and experimental evaluation of the Mach 7 vehicle at flight conditions is nearing completion and will provide a database for validation of design methods once flight test data are available.
Combining 3D structure of real video and synthetic objects
NASA Astrophysics Data System (ADS)
Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon
1998-04-01
This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in the fields of advanced animation, virtual reality, games, and so forth. Computer graphics has been used in the fields previously mentioned. Recently, some applications have added real video to graphic scenes for the purpose of augmenting the realism that computer graphics alone lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the exclusive use of computer graphics. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with a 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map. Graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive 3D structure from test image sequences. The extraction of the 3D structure requires the estimation of depth and the construction of a height map. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, Step (3) is easily carried out. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
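A minimal sketch of steps (2) and (3), under the assumption that a synthetic regular-grid height map stands in for the structure recovered from a monocular sequence: the surface is triangulated with a Delaunay triangulation and a synthetic object is anchored at the local surface elevation.

    # Hedged sketch: the height map and object position are invented for illustration.
    import numpy as np
    from scipy.spatial import Delaunay

    xs, ys = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
    heights = np.sin(xs) * np.cos(ys)                 # stand-in for the depth-derived height map

    points_2d = np.column_stack([xs.ravel(), ys.ravel()])
    tri = Delaunay(points_2d)                         # planar triangulation of the grid
    vertices_3d = np.column_stack([points_2d, heights.ravel()])

    # Combine a synthetic object with the height map: since the surface height is
    # known everywhere, the object can simply be anchored at the local elevation.
    object_xy = np.array([4.2, 6.7])
    nearest = np.argmin(np.linalg.norm(points_2d - object_xy, axis=1))
    print(f"{tri.simplices.shape[0]} triangles; object base z = {vertices_3d[nearest, 2]:.2f}")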
A virtual therapeutic environment with user projective agents.
Ookita, S Y; Tokuda, H
2001-02-01
Today, we see the Internet as more than just an information infrastructure: it is also a socializing place and a safe outlet for inner feelings. Many personalities develop apart from real-world life due to its anonymous environment. Virtual world interactions are bringing about new psychological illnesses ranging from net addiction to technostress, as well as online personality disorders and conflicts among the multiple identities that exist in the virtual world. Presently, there are no standard therapy models for the virtual environment. There are very few therapeutic environments or tools made especially for virtual therapeutic environments. The goal of our research is to provide a therapy model and middleware tools for psychologists to use in virtual therapeutic environments. We propose the Cyber Therapy Model and Projective Agents, a tool used in the therapeutic environment. To evaluate the effectiveness of the tool, we created a prototype system, called the Virtual Group Counseling System, which is a therapeutic environment that allows the user to participate in group counseling through the eyes of their Projective Agent. Projective Agents inherit the user's personality traits. During virtual group counseling, the user's Projective Agent interacts and collaborates with others to support recovery and psychological growth. The prototype system provides a simulation environment where psychologists can adjust parameters and customize their own simulation environment. The model and tool are a first attempt at simulating personalities that may exist only online and at providing data for observation.
Strangeness production in heavy ion collisions - Constraining the KN potential in medium
NASA Astrophysics Data System (ADS)
Leifels, Yvonne; FOPI Collaboration
2013-03-01
We review strangeness production in heavy ion collisions at energies around the NN production threshold and discuss recent measurements by the FOPI collaboration of charged kaon flow over a wide impact parameter range. The data are compared to comprehensive state-of-the-art transport models. The dense nuclear matter environment produced in those collisions may provide unique opportunities to form strange few-body systems. The FOPI detector is especially suited to reconstructing such states by their charged particle decays. Apart from strongly decaying states, special emphasis will be put on the search for long-lived, weakly decaying states, i.e., hypernuclei. Light hypernuclei are reconstructed by their two-body decay channels, and the production of hypertritons is studied with respect to Λ and t(3He).
Camp, Christopher L
2018-05-01
Although we have come a long way, the rapidly expanding field of virtual reality simulation for arthroscopic surgical skills acquisition is supported by only a limited amount of evidence. That said, the good news is that the evidence suggests that simulator experience translates into improved performance in the operating room. If proving this relation is our ultimate goal, more work is certainly needed. In this commentary, a "Task List" is proposed for surgeons and educators interested in using simulators and better defining their role in resident education. Copyright © 2018 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
VLSI Design of Trusted Virtual Sensors.
Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada
2018-01-25
This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).
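The following sketch illustrates, in software, how a PieceWise-Affine hyper-Rectangular (PWAR) virtual sensor produces an estimate: locate the hyper-rectangle containing the input and apply that cell's affine law. The partition and coefficients are illustrative assumptions, not the paper's values or its hardware design.

    # Hedged software sketch of a 2-input PWAR estimator (not the VLSI implementation).
    import numpy as np

    edges_x = np.array([0.0, 0.5, 1.0])      # cell boundaries along input 1 (assumed)
    edges_y = np.array([0.0, 0.5, 1.0])      # cell boundaries along input 2 (assumed)

    # One affine law f(x1, x2) = a1*x1 + a2*x2 + b per cell, as a (2 x 2) grid of (a1, a2, b).
    coeffs = np.array([[[1.0, 0.5, 0.1], [0.8, 0.6, 0.2]],
                       [[1.2, 0.4, 0.0], [0.9, 0.7, 0.3]]])

    def pwar_estimate(x1, x2):
        # Locate the hyper-rectangle containing (x1, x2) and apply its affine law.
        i = int(np.clip(np.searchsorted(edges_x, x1, side="right") - 1, 0, coeffs.shape[0] - 1))
        j = int(np.clip(np.searchsorted(edges_y, x2, side="right") - 1, 0, coeffs.shape[1] - 1))
        a1, a2, b = coeffs[i, j]
        return a1 * x1 + a2 * x2 + b

    print(pwar_estimate(0.3, 0.7))           # virtual measurement for one input sample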
Ecological validity of virtual environments to assess human navigation ability
van der Ham, Ineke J. M.; Faber, Annemarie M. E.; Venselaar, Matthijs; van Kreveld, Marc J.; Löffler, Maarten
2015-01-01
Route memory is frequently assessed in virtual environments. These environments can be presented in a fully controlled manner and are easy to use. Yet they lack the physical involvement that participants have when navigating real environments. For some aspects of route memory this may result in reduced performance in virtual environments. We assessed route memory performance in four different environments: real, virtual, virtual with directional information (compass), and hybrid. In the hybrid environment, participants walked the route outside on an open field, while all route information (i.e., path, landmarks) was shown simultaneously on a handheld tablet computer. Results indicate that performance in the real-life environment was better than in the virtual conditions for tasks relying on survey knowledge, such as pointing to the start and end points and map drawing. Performance in the hybrid condition, however, hardly differed from real-life performance. Performance in the virtual environment did not benefit from directional information. Given these findings, the hybrid condition may offer the best of both worlds: the performance level is comparable to that of real life for route memory, yet it offers full control of visual input during route learning. PMID:26074831
Using the PhysX engine for physics-based virtual surgery with force feedback.
Maciel, Anderson; Halic, Tansel; Lu, Zhonghua; Nedel, Luciana P; De, Suvranu
2009-09-01
The development of modern surgical simulators is highly challenging, as they must support complex simulation environments. The demand for higher realism in such simulators has driven researchers to adopt physics-based models, which are computationally very demanding. This poses a major problem, since real-time interaction requires graphical updates at 30 Hz and a much higher rate of 1 kHz for force feedback (haptics). Recently, several physics engines have been developed which offer multi-physics simulation capabilities, including rigid and deformable bodies, cloth and fluids. While such physics engines provide unique opportunities for the development of surgical simulators, their higher latencies, compared to what is necessary for real-time graphics and haptics, pose significant barriers to their use in interactive simulation environments. In this work, we propose solutions to this problem and demonstrate how a multimodal surgical simulation environment may be developed based on NVIDIA's PhysX physics library. Hence, models that undergo relatively low-frequency updates in PhysX can exist in an environment that demands much higher frequency updates for haptics. We use a collision handling layer to interface between the physical response provided by PhysX and the haptic rendering device to provide both real-time tissue response and force feedback. Our simulator integrates a bimanual haptic interface for force feedback and per-pixel shaders for graphics realism in real time. To demonstrate the effectiveness of our approach, we present the simulation of the laparoscopic adjustable gastric banding (LAGB) procedure as a case study. Developing complex and realistic surgical trainers with realistic organ geometries and tissue properties demands stable physics-based deformation methods, which are not always compatible with the interaction rates required for such trainers. We have shown that combining different modelling strategies for behaviour, collision and graphics is possible and desirable. Such multimodal environments enable suitable rates to simulate the major steps of the LAGB procedure.
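The general two-rate idea can be sketched as follows (an assumption about the approach, not the authors' PhysX code): a collision-handling layer bridges a roughly 60 Hz physics update and a 1 kHz haptic loop, interpolating recent physics states so the rendered force changes smoothly.

    # Hedged sketch: fake penetration depths and stiffness stand in for engine output and tissue properties.
    import threading, time

    class SharedState:
        def __init__(self):
            self.lock = threading.Lock()
            self.prev_depth = 0.0      # penetration depth from the previous physics step
            self.curr_depth = 0.0      # penetration depth from the latest physics step

    state = SharedState()
    STIFFNESS = 300.0                  # N/m, illustrative tool-tissue stiffness

    def physics_loop(steps=30):        # ~60 Hz physics-engine update
        for step in range(steps):
            new_depth = 0.001 * step   # placeholder for engine-reported penetration (m)
            with state.lock:
                state.prev_depth, state.curr_depth = state.curr_depth, new_depth
            time.sleep(1 / 60)

    def haptic_loop(duration_s=0.5):   # 1 kHz force-feedback update
        t_end = time.time() + duration_s
        while time.time() < t_end:
            with state.lock:
                depth = 0.5 * (state.prev_depth + state.curr_depth)  # smooth between physics states
            force = STIFFNESS * depth  # this value would be sent to the haptic device
            time.sleep(1 / 1000)

    threading.Thread(target=physics_loop, daemon=True).start()
    haptic_loop()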
ERIC Educational Resources Information Center
Ehrlich, Justin
2010-01-01
The application of virtual reality is becoming ever more important as technology reaches new heights allowing virtual environments (VE) complete with global illumination. One successful application of virtual environments is educational interventions meant to treat individuals with autism spectrum disorder (ASD). VEs are effective with these…
Virtual Virtuosos: A Case Study in Learning Music in Virtual Learning Environments in Spain
ERIC Educational Resources Information Center
Alberich-Artal, Enric; Sangra, Albert
2012-01-01
In recent years, the development of Information and Communication Technologies (ICT) has contributed to the generation of a number of interesting initiatives in the field of music education and training in virtual learning environments. However, music education initiatives employing virtual learning environments have replicated and perpetuated the…
Transfer of motor learning from virtual to natural environments in individuals with cerebral palsy.
de Mello Monteiro, Carlos Bandeira; Massetti, Thais; da Silva, Talita Dias; van der Kamp, John; de Abreu, Luiz Carlos; Leone, Claudio; Savelsbergh, Geert J P
2014-10-01
With the growing accessibility of computer-assisted technology, rehabilitation programs for individuals with cerebral palsy (CP) increasingly use virtual reality environments to enhance motor practice. Thus, it is important to examine whether performance improvements in the virtual environment generalize to the natural environment. To examine this issue, we had 64 individuals, 32 of whom were individuals with CP and 32 typically developing individuals, practice two coincidence-timing tasks. In the more tangible button-press task, the individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key. In the more abstract, less tangible task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment. The results showed that individuals with CP timed less accurately than typically developing individuals, especially for the more abstract task in the virtual environment. The individuals with CP, like their typically developing peers, did improve coincidence timing with practice on both tasks. Importantly, however, these improvements were specific to the practice environment; there was no transfer of learning. It is concluded that the implementation of virtual environments for motor rehabilitation in individuals with CP should not be taken for granted but needs to be considered carefully. Copyright © 2014 Elsevier Ltd. All rights reserved.
Can virtual reality be used to conduct mass prophylaxis clinic training? A pilot program.
Yellowlees, Peter; Cook, James N; Marks, Shayna L; Wolfe, Daniel; Mangin, Elanor
2008-03-01
To create and evaluate a pilot bioterrorism defense training environment using virtual reality technology. The present pilot project used Second Life, an internet-based virtual world system, to construct a virtual reality environment to mimic an actual setting that might be used as a Strategic National Stockpile (SNS) distribution site for northern California in the event of a bioterrorist attack. Scripted characters were integrated into the system as mock patients to analyze various clinic workflow scenarios. Users tested the virtual environment over two sessions. Thirteen users who toured the environment were asked to complete an evaluation survey. Respondents reported that the virtual reality system was relevant to their practice and had potential as a method of bioterrorism defense training. Computer simulations of bioterrorism defense training scenarios are feasible with existing personal computer technology. The use of internet-connected virtual environments holds promise for bioterrorism defense training. Recommendations are made for public health agencies regarding the implementation and benefits of using virtual reality for mass prophylaxis clinic training.
3D Visualization for Virtual Museum Development
NASA Astrophysics Data System (ADS)
Skamantzari, M.; Georgopoulos, A.
2016-06-01
The interest in the development of virtual museums is nowadays rising rapidly. During the last decades there have been numerous efforts concerning the 3D digitization of cultural heritage and the development of virtual museums, digital libraries and serious games. The realistic result has always been the main concern and a real challenge when it comes to 3D modelling of monuments, artifacts and especially sculptures. This paper implements, investigates and evaluates the results of the photogrammetric methods and 3D surveys that were used for the development of a virtual museum. Moreover, the decisions, the actions, the methodology and the main elements that this kind of application should include and take into consideration are described and analysed. It is believed that the outcomes of this application will be useful to researchers who are planning to develop and further improve the attempts made on virtual museums and mass production of 3D models.
ERIC Educational Resources Information Center
O'Connor, Eileen A.
2015-01-01
Opening with the history, recent advances, and emerging ways to use avatar-based virtual reality, an instructor who has used virtual environments since 2007 shares how these environments bring more options to community building, teaching, and education. With the open-source movement, where the source code for virtual environments was made…
Perturbed Communication in a Virtual Environment to Train Medical Team Leaders.
Huguet, Lauriane; Lourdeaux, Domitile; Sabouret, Nicolas; Ferrer, Marie-Hélène
2016-01-01
The VICTEAMS project aims at designing a virtual environment for training medical team leaders in non-technical skills. The virtual environment is populated with autonomous virtual agents who are able to make mistakes (in action or communication) in order to train rescue team leaders and make them adaptable to all kinds of situations and teams.
Virtual reality in laparoscopic surgery.
Uranüs, Selman; Yanik, Mustafa; Bretthauer, Georg
2004-01-01
Although the many advantages of laparoscopic surgery have made it an established technique, training in laparoscopic surgery posed problems not encountered in conventional surgical training. Virtual reality simulators open up new perspectives for training in laparoscopic surgery. Under realistic conditions in real time, trainees can tailor their sessions with the VR simulator to suit their needs and goals, and can repeat exercises as often as they wish. VR simulators reduce the number of experimental animals needed for training purposes and are suited to the pursuit of research in laparoscopic surgery.
Aerospace applications of virtual environment technology.
Loftin, R B
1996-11-01
The uses of virtual environment technology in the space program are examined with emphasis on training for the Hubble Space Telescope Repair and Maintenance Mission in 1993. Project ScienceSpace at the Virtual Environment Technology Lab is discussed.
Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2017-10-01
The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, the virtual nodule can be generated so that it has the characteristics of the spatial resolution of the target scanner. To validate the methodology, the authors applied physical nodules of 5-, 7- and 10-mm-diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of laboratory-made nodules obtained with Scanner C. Good agreement of the virtual nodules generated from the nodule-like object functions A and B of the phantom spheres was found, suggesting the validity of the nodule-like object functions. The virtual nodules generated from the nodule-like object function A of the phantom spheres were similar to the real images obtained with Scanner C; the root mean square errors (RMSEs) between them were 10.8, 11.1, and 12.5 Hounsfield units (HU) for 5-, 7-, and 10-mm-diameter spheres, respectively. The equivalent results (RMSEs) using the nodule-like object function B were 15.9, 16.8, and 16.5 HU, respectively. These RMSEs were small considering the high contrast between the sphere density and background density (approximately 674 HU). The virtual nodules generated from the nodule-like object functions of the five laboratory-made nodules were similar to the real images obtained with Scanner C; the RMSEs between them ranged from 6.2 to 8.6 HU in five cases. The nodule-like object functions calculated from real nodule images would be effective to generate realistic virtual nodules. The proposed method would be feasible for generating virtual nodules that have the characteristics of the spatial resolution of the CT system used in each institution, allowing for site-specific nodule generation. © 2017 American Association of Physicists in Medicine.
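The core deconvolve-then-reconvolve idea can be sketched in one dimension (a simplification: Gaussian kernels stand in for the measured PSF/SSP of the source and target scanners, whereas the study works with measured, scanner-specific functions):

    # Hedged 1-D sketch of nodule-like object function extraction and virtual nodule generation.
    import numpy as np

    def gaussian_psf(n, fwhm):
        x = np.arange(n) - n // 2
        sigma = fwhm / 2.355
        g = np.exp(-0.5 * (x / sigma) ** 2)
        return g / g.sum()

    n = 256
    true_object = (np.abs(np.arange(n) - n // 2) < 20).astype(float)   # idealized nodule profile
    H_a = np.fft.fft(np.fft.ifftshift(gaussian_psf(n, fwhm=6.0)))      # "Scanner A" blur (assumed)
    H_c = np.fft.fft(np.fft.ifftshift(gaussian_psf(n, fwhm=9.0)))      # "Scanner C" blur (assumed)

    measured_a = np.real(np.fft.ifft(np.fft.fft(true_object) * H_a))   # image as seen by Scanner A

    # Wiener-regularised deconvolution with Scanner A's PSF gives the nodule-like object function...
    eps = 1e-3
    object_fn = np.real(np.fft.ifft(np.fft.fft(measured_a) * np.conj(H_a) / (np.abs(H_a) ** 2 + eps)))

    # ...and re-blurring with Scanner C's PSF yields the virtual nodule for the target scanner.
    virtual_nodule_c = np.real(np.fft.ifft(np.fft.fft(object_fn) * H_c))
    ideal_c = np.real(np.fft.ifft(np.fft.fft(true_object) * H_c))
    print(f"RMSE vs. ideal Scanner-C image: {np.sqrt(np.mean((virtual_nodule_c - ideal_c) ** 2)):.4f}")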
Social Interaction Development through Immersive Virtual Environments
ERIC Educational Resources Information Center
Beach, Jason; Wendt, Jeremy
2014-01-01
The purpose of this pilot study was to determine if participants could improve their social interaction skills by participating in a virtual immersive environment. The participants used a developing virtual reality head-mounted display to engage themselves in a fully-immersive environment. While in the environment, participants had an opportunity…
Al-Jasmi, Fatma; Moldovan, Laura; Clarke, Joe T R
2010-10-25
Computer-based teaching (CBT) is a well-known educational device, but it has never been applied systematically to the teaching of a complex, rare, genetic disease, such as Hunter disease (MPS II). Our objective was to develop interactive teaching software functioning as a virtual clinic for the management of MPS II. The Hunter disease eClinic, a self-training, user-friendly educational software program, available at the Lysosomal Storage Research Group (http://www.lysosomalstorageresearch.ca), was developed using the Adobe Flash multimedia platform. It was designed to provide both a realistic, interactive virtual clinic and instantaneous access to supporting literature on Hunter disease. The Hunter disease eClinic consists of an eBook and an eClinic. The eClinic is the interactive virtual clinic component of the software. Within an environment resembling a real clinic, the trainee is instructed to take a medical history, examine the patient, and order appropriate investigations. The program provides clinical data derived from the management of actual patients with Hunter disease. The eBook provides instantaneous, electronic access to a vast collection of reference information covering detailed clinical and basic science background, including relevant biochemistry, physiology, and genetics. In the eClinic, the trainee is presented with quizzes designed to provide immediate feedback on both trainee effectiveness and efficiency. User feedback on the merits of the program was collected at several seminars and formal clinical rounds at several medical centres, primarily in Canada. In addition, online usage statistics were documented for a 2-year period. Feedback was consistently positive and confirmed the practical benefit of the program. The online English-language version is accessed daily by users from all over the world; a Japanese translation of the program is also available. The Hunter disease eClinic employs a CBT model providing the trainee with realistic clinical problems, coupled with comprehensive basic and clinical reference information through instantaneous access to an electronic textbook, the eBook. The program was rated highly by attendees at national and international presentations. It provides a potential model for an educational approach to other rare genetic diseases.
A Multi-Agent Approach to the Simulation of Robotized Manufacturing Systems
NASA Astrophysics Data System (ADS)
Foit, K.; Gwiazda, A.; Banaś, W.
2016-08-01
The recent years of eventful industry development have brought many competing products addressed to the same market segment. Shortening the development cycle has become a necessity for companies that want to remain competitive. With the switch to the Intelligent Manufacturing model, industry is searching for new scheduling algorithms, as the traditional ones no longer meet current requirements. The agent-based approach has been considered by many researchers as an important direction in the evolution of modern manufacturing systems. Due to the properties of multi-agent systems, this methodology is very helpful during creation of a model of a production system, allowing both the processing and the informational parts to be depicted. The complexity of such an approach makes analysis impossible without computer assistance. Computer simulation still uses a mathematical model to recreate a real situation, but nowadays 2D or 3D virtual environments, or even virtual reality, are used for realistic illustration of the considered systems. This paper focuses on robotized manufacturing systems and presents one possible approach to the simulation of such systems. The selection of the multi-agent approach is motivated by the flexibility of this solution, which offers modularity, robustness and autonomy.
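To make the agent-based idea concrete, the sketch below (an illustration only, not the authors' model) lets autonomous machine agents bid for jobs; each job is awarded to the cheapest bidder, so a schedule emerges from local negotiation rather than a central algorithm.

    # Hedged sketch of contract-net style job allocation among machine agents; all numbers are invented.
    import random

    class MachineAgent:
        def __init__(self, name):
            self.name, self.busy_until = name, 0.0
        def bid(self, job_duration):
            return self.busy_until + job_duration      # bid = earliest completion time on this machine
        def accept(self, job_duration):
            self.busy_until += job_duration

    random.seed(1)
    machines = [MachineAgent(f"robot_{i}") for i in range(3)]
    jobs = [round(random.uniform(1, 5), 1) for _ in range(8)]   # processing times

    for duration in jobs:
        winner = min(machines, key=lambda m: m.bid(duration))   # award to the best local bid
        winner.accept(duration)

    for m in machines:
        print(m.name, "finishes at", round(m.busy_until, 1))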
Virtual environments simulation in research reactor
NASA Astrophysics Data System (ADS)
Muhamad, Shalina Bt. Sheik; Bahrin, Muhammad Hannan Bin
2017-01-01
Virtual reality based simulations are interactive and engaging and have useful potential for improving safety training. Virtual reality technology can be used to train workers who are unfamiliar with the physical layout of an area. In this study, a simulation program based on a virtual environment of a research reactor was developed. The platform used for the virtual simulation is the 3DVia software, whose rendering capabilities, movement and collision physics, and interactive navigation features were exploited. A real research reactor was virtually modelled and simulated, with avatar models adopted to simulate walking. Collision detection algorithms were developed for various parts of the 3D building and the avatars to restrain the avatars to certain regions of the virtual environment. A user can control the avatar to move around inside the virtual environment. Thus, this work can assist in the training of personnel as well as in evaluating the radiological safety of the research reactor facility.
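A minimal sketch of the region-restriction idea (an assumption about the general mechanism, not the 3DVia scripts): avatar moves are clamped to a walkable footprint and rejected if they would enter a blocked zone.

    # Hedged sketch; the region coordinates are invented for illustration.
    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    ALLOWED = {"x": (0.0, 50.0), "y": (0.0, 30.0)}     # walkable hall footprint (assumed)
    BLOCKED = {"x": (20.0, 25.0), "y": (10.0, 15.0)}   # illustrative exclusion zone

    def move_avatar(pos, step):
        x = clamp(pos[0] + step[0], *ALLOWED["x"])
        y = clamp(pos[1] + step[1], *ALLOWED["y"])
        hits_blocked = (BLOCKED["x"][0] <= x <= BLOCKED["x"][1] and
                        BLOCKED["y"][0] <= y <= BLOCKED["y"][1])
        return pos if hits_blocked else (x, y)          # reject moves that collide

    print(move_avatar((19.0, 12.0), (3.0, 0.0)))        # blocked: avatar stays at (19.0, 12.0)
    print(move_avatar((5.0, 5.0), (3.0, 0.0)))          # allowed: avatar moves to (8.0, 5.0)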
Harris, Bryan T; Montero, Daniel; Grant, Gerald T; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao
2017-02-01
This clinical report proposes a digital workflow using 2-dimensional (2D) digital photographs, a 3D extraoral facial scan, and cone beam computed tomography (CBCT) volumetric data to create a 3D virtual patient with craniofacial hard tissue, remaining dentition (including surrounding intraoral soft tissue), and the realistic appearance of facial soft tissue at an exaggerated smile under static conditions. The 3D virtual patient was used to assist the virtual diagnostic tooth arrangement process, providing the patient with a pleasing preoperative virtual smile design that harmonized with facial features. The 3D virtual patient was also used to gain the patient's pretreatment approval (as a communication tool), to design a prosthetically driven surgical plan for computer-guided implant surgery, and to fabricate the computer-aided design and computer-aided manufacturing (CAD-CAM) interim prostheses. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
The GenTechnique Project: Developing an Open Environment for Learning Molecular Genetics.
ERIC Educational Resources Information Center
Calza, R. E.; Meade, J. T.
1998-01-01
The GenTechnique project at Washington State University uses a networked learning environment for molecular genetics learning. The project is developing courseware featuring animation, hyper-link controls, and interactive self-assessment exercises focusing on fundamental concepts. The first pilot course featured a Web-based module on DNA…
Teaching and Learning with Flexible Hypermedia Learning Environments.
ERIC Educational Resources Information Center
Wedekind, Joachim; Lechner, Martin; Tergan, Sigmar-Olaf
This paper presents an approach for developing flexible Hypermedia Learning Environments (HMLE) and applies this theoretical framework to the creation of a layered model of a hypermedia system, called HyperDisc, developed at the German Institute for Research on Distance Education. The first section introduces HMLE and suggests that existing…
Evaluation of the cognitive effects of travel technique in complex real and virtual environments.
Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F
2010-01-01
We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.
Evaluation of Loudspeaker-Based Virtual Sound Environments for Testing Directional Hearing Aids.
Oreinos, Chris; Buchholz, Jörg M
2016-07-01
Assessments of hearing aid (HA) benefits in the laboratory often do not accurately reflect real-life experience. This may be improved by employing loudspeaker-based virtual sound environments (VSEs) that provide more realistic acoustic scenarios. It is unclear how far the limited accuracy of these VSEs influences measures of subjective performance. Verify two common methods for creating VSEs that are to be used for assessing HA outcomes. A cocktail-party scene was created inside a meeting room and then reproduced with a 41-channel loudspeaker array inside an anechoic chamber. The reproduced scenes were created either by using room acoustic modeling techniques or microphone array recordings. Participants were 18 listeners with a symmetrical, sloping, mild-to-moderate hearing loss, aged between 66 and 78 yr (mean = 73.8 yr). The accuracy of the two VSEs was assessed by comparing the subjective performance measured with two-directional HA algorithms inside all three acoustic environments. The performance was evaluated by using a speech intelligibility test and an acceptable noise level task. The general behavior of the subjective performance seen in the real environment was preserved in the two VSEs for both directional HA algorithms. However, the estimated directional benefits were slightly reduced in the model-based VSE, and further reduced in the recording-based VSE. It can be concluded that the considered VSEs can be used for testing directional HAs, but the provided sensitivity is reduced when compared to a real environment. This can result in an underestimation of the provided directional benefit. However, this minor limitation may be easily outweighed by the high realism of the acoustic scenes that these VSEs can generate, which may result in HA outcome measures with a significantly higher ecological relevance than provided by measures commonly performed in the laboratory or clinic. American Academy of Audiology.
Butterfly valve in a virtual environment
NASA Astrophysics Data System (ADS)
Talekar, Aniruddha; Patil, Saurabh; Thakre, Prashant; Rajkumar, E.
2017-11-01
Assembly of components is one of the processes involved in product design and development. The present paper deals with the assembly of the components of a simple butterfly valve in a virtual environment. The assembly has been carried out using virtual reality software by trial-and-error methods. The parts are modelled using parametric software (SolidWorks), meshed accordingly, and then imported into the virtual environment for assembly.
Ergonomic aspects of a virtual environment.
Ahasan, M R; Väyrynen, S
1999-01-01
A virtual environment is an interactive graphic system, mediated through computer technology, that allows a certain level of reality or a sense of presence when accessing virtual information. To create reality in a virtual environment, ergonomics issues are explored in this paper, with the aim of developing presentation formats and related information that make it possible to attain and maintain user-friendly applications.
Sarver, Nina Wong; Beidel, Deborah C; Spitalnick, Josh S
2014-01-01
Two significant challenges for the dissemination of social skills training programs are the need to assure generalizability and provide sufficient practice opportunities. In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This study evaluated the utility of an interactive virtual school environment for the treatment of social anxiety disorder in preadolescent children. Eleven children with a primary diagnosis of social anxiety disorder between 8 to 12 years old participated in this initial feasibility trial. All children were treated with Social Effectiveness Therapy for Children, an empirically supported treatment for children with social anxiety disorder. However, the in vivo peer generalization sessions and standard parent-assisted homework assignments were substituted by practice in a virtual environment. Overall, the virtual environment programs were acceptable, feasible, and credible treatment components. Both children and clinicians were satisfied with using the virtual environment technology, and children believed it was a high-quality program overall. In addition, parents were satisfied with the virtual environment augmented treatment and indicated that they would recommend the program to family and friends. Findings indicate that the virtual environments are viewed as acceptable and credible by potential recipients. Furthermore, they are easy to implement by even novice users and appear to be useful adjunctive elements for the treatment of childhood social anxiety disorder.
Human Rights and Private Ordering in Virtual Worlds
NASA Astrophysics Data System (ADS)
Oosterbaan, Olivier
This paper explores the application of human rights in (persistent) virtual world environments. The paper begins by describing a number of elements that most virtual environments share and that are relevant for the application of human rights in such a setting, and by describing in general terms the application of human rights between private individuals. The paper then continues by discussing the application in virtual environments of two universally recognized human rights, namely freedom of expression and freedom from discrimination. As these specific rights are discussed, a number of more general conclusions on the application of human rights in virtual environments are drawn. The first general conclusion is that, because virtual worlds are private environments, participants are subject to private ordering. The second general conclusion is that participants and non-participants alike have to accept at times that in-world expressions are to an extent private speech. The third general conclusion is that, where participants represent themselves in-world, other participants cannot assume that such an in-world representation shares the characteristics of the human player; and that where virtual environments contain game elements, participants and non-participants alike should not take everything that happens in the virtual environment at face value or literally, which does not, however, amount to having to accept a higher level of infringement of their rights for things that happen in such an environment.
Prediction of Hyper-X Stage Separation Aerodynamics Using CFD
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Wong, Tin-Chee; Dilley, Arthur D.; Pao, Jenn L.
2000-01-01
The NASA X-43 "Hyper-X" hypersonic research vehicle will be boosted to a Mach 7 flight test condition mounted on the nose of an Orbital Sciences Pegasus launch vehicle. The separation of the research vehicle from the Pegasus presents some unique aerodynamic problems, for which computational fluid dynamics has played a role in the analysis. This paper describes the use of several CFD methods for investigating the aerodynamics of the research and launch vehicles in close proximity. Specifically addressed are unsteady effects, aerodynamic database extrapolation, and differences between wind tunnel and flight environments.
Cross-species 3D virtual reality toolbox for visual and cognitive experiments.
Doucet, Guillaume; Gulli, Roberto A; Martinez-Trujillo, Julio C
2016-06-15
Although simplified visual stimuli, such as dots or gratings presented on homogeneous backgrounds, provide strict control over the stimulus parameters during visual experiments, they fail to approximate visual stimulation in natural conditions. Adoption of virtual reality (VR) in neuroscience research has been proposed to circumvent this problem, by combining strict control of experimental variables and behavioral monitoring within complex and realistic environments. We have created a VR toolbox that maximizes experimental flexibility while minimizing implementation costs. A free VR engine (Unreal 3) has been customized to interface with any control software via text commands, allowing seamless introduction into pre-existing laboratory data acquisition frameworks. Furthermore, control functions are provided for the two most common programming languages used in visual neuroscience: Matlab and Python. The toolbox offers the millisecond time resolution necessary for electrophysiological recordings and is flexible enough to support cross-species usage across a wide range of paradigms. Unlike previously proposed VR solutions whose implementation is complex and time-consuming, our toolbox requires minimal customization or technical expertise to interface with pre-existing data acquisition frameworks, as it relies on already familiar programming environments. Moreover, as it is compatible with a variety of display and input devices, identical VR testing paradigms can be used across species, from rodents to humans. This toolbox facilitates the addition of VR capabilities to any laboratory without perturbing pre-existing data acquisition frameworks or requiring any major hardware changes. Copyright © 2016. All rights reserved.
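The text-command interface can be pictured as follows; the port number and command strings are illustrative assumptions, not the toolbox's actual protocol, and the snippet presumes the rendering engine is already listening on that port.

    # Hedged sketch of sending plain-text commands from an experiment-control script to a VR engine.
    import socket, time

    def send_command(sock, command):
        # Send one newline-terminated text command; return a timestamp for later alignment with EEG/eye data.
        t = time.perf_counter()
        sock.sendall((command + "\n").encode("ascii"))
        return t

    with socket.create_connection(("localhost", 9000)) as sock:    # assumed host/port
        t0 = send_command(sock, "SPAWN_OBJECT apple 1.0 0.0 2.5")  # assumed command syntax
        t1 = send_command(sock, "START_TRIAL 12")
        print(f"Commands sent {(t1 - t0) * 1000:.2f} ms apart")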
NASA Astrophysics Data System (ADS)
Beavis, Andrew W.; Ward, James W.
2014-03-01
Purpose: In recent years there has been interest in using Computer Simulation within Medical training. The VERT (Virtual Environment for Radiotherapy Training) system is a Flight Simulator for Radiation Oncology professionals, wherein fundamental concepts, techniques and problematic scenarios can be safely investigated. Methods: The system provides detailed simulations of several Linacs and the ability to display DICOM treatment plans. Patients can be mis-positioned with 'set-up errors' which can be explored visually, dosimetrically and using IGRT. Similarly, a variety of Linac calibration and configuration parameters can be altered manually or randomly via controlled errors in the simulated 3D Linac and its component parts. The implications of these can be investigated by following through a treatment scenario or using the QC devices available within a Physics software module. Results: One resultant exercise is a systematic mis-calibration of 'lateral laser height' by 2 mm. The offset in patient alignment is easily identified using IGRT and corrected by reference to the 'in-room monitor'. The dosimetric implication is demonstrated to be 0.4% by setting up a dosimetry phantom by the lasers (and ignoring TSD information). Finally, the need for recalibration can be shown by the Laser Alignment Phantom or by reference to the front pointer. Conclusions: The VERT system provides a realistic environment for training and enhancing understanding of radiotherapy concepts and techniques. Linac error conditions can be explored in this context and valuable experience gained in a controlled manner in a compressed period of time.
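A back-of-the-envelope check of the quoted 0.4% figure, under the assumption (not stated in the abstract) that it arises from an inverse-square SSD error at a nominal 100 cm SSD when the phantom is set by the mis-calibrated lasers:

    # Hedged arithmetic check; the 100 cm nominal SSD is an assumption.
    nominal_ssd_cm = 100.0
    laser_error_cm = 0.2      # 2 mm lateral-laser height mis-calibration
    dose_ratio = (nominal_ssd_cm / (nominal_ssd_cm + laser_error_cm)) ** 2
    print(f"Dose change: {(1 - dose_ratio) * 100:.2f}%")   # ~0.40%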
Stupar-Rutenfrans, Snežana; Ketelaars, Loes E H; van Gisbergen, Marnix S
2017-10-01
With this article, we aim to increase our understanding of how mobile virtual reality exposure therapy (VRET) can help reduce speaking anxiety. Using the results of a longitudinal study, we examined the effect of a new VRET strategy (Public Speech Trainer, PST), which incorporates 360° live recorded VR environments, on the reduction of public speaking anxiety. The PST was developed as a 360° smartphone application for a VR head-mounted device that participants could use at home. Realistic anxiety experiences were created by means of live 360° video recordings of a lecture hall, with three training sessions based on a graded exposure framework: an empty classroom (a), a small audience (b), and a large audience (c). Thirty-five students participated in all sessions using the PST. Anxiety levels were measured before and after each session over a period of 4 weeks. As expected, speaking anxiety significantly decreased after the completion of all PST sessions, and the decrement was strongest in participants with initially high speaking anxiety baseline levels. Results also revealed that participants with moderate and high speaking anxiety baseline levels differ in their anxiety state pattern over time. Conclusively, and in line with habituation theory, the results supported the notion that VRET is more effective when aimed at reducing high-state anxiety levels. Further implications for future research and improvement of current VRET strategies are discussed.
NASA Astrophysics Data System (ADS)
Ge, Yuanzheng; Chen, Bin; Liu, Liang; Qiu, Xiaogang; Song, Hongbin; Wang, Yong
2018-02-01
An individual-based computational environment provides an effective solution for studying complex social events by reconstructing scenarios. Challenges remain in reconstructing the virtual scenarios and reproducing the complex evolution. In this paper, we propose a framework to reconstruct a synthetic computational environment, reproduce an epidemic outbreak, and evaluate management interventions in a virtual university. The reconstructed computational environment includes 4 fundamental components: the synthetic population, behavior algorithms, multiple social networks, and the geographic campus environment. In the virtual university, influenza H1N1 transmission experiments are conducted, and gradually enhanced interventions are evaluated and compared quantitatively. The experiment results indicate that the reconstructed virtual environment provides a solution for reproducing complex emergencies and evaluating policies to be executed in the real world.
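The kind of individual-based outbreak experiment described here can be sketched as a simple SIR-style simulation with an intervention that cuts daily contacts after a trigger day; all parameters below are illustrative, not the paper's calibrated values.

    # Hedged sketch of an individual-based outbreak with a contact-reduction intervention.
    import random

    random.seed(42)
    N, initial_infected, days = 2000, 5, 60
    beta, recovery_days = 0.05, 4                    # per-contact infection probability, infectious period
    contacts_per_day = {"baseline": 12, "intervention": 4}
    intervention_day = 20                            # e.g. class suspension on campus

    state = ["S"] * N                                # S, I, or R for each individual
    days_infected = [0] * N
    for i in random.sample(range(N), initial_infected):
        state[i] = "I"

    for day in range(days):
        k = contacts_per_day["intervention" if day >= intervention_day else "baseline"]
        newly_infected = []
        for i in range(N):
            if state[i] != "I":
                continue
            for j in random.sample(range(N), k):     # random daily contacts
                if state[j] == "S" and random.random() < beta:
                    newly_infected.append(j)
            days_infected[i] += 1
            if days_infected[i] >= recovery_days:
                state[i] = "R"
        for j in newly_infected:
            state[j] = "I"

    print("Attack rate:", round((state.count("I") + state.count("R")) / N, 3))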
2009-03-20
Work carried out by Novamente LLC for AOARD during June 2008 to February 2009. It involved the development of an environment within the Multiverse virtual world, oriented toward allowing individuals to acquire and reinforce skills via… PetBrain software. G2: Creation of a scavenger hunt scenario in the Multiverse virtual world, in which humans and AIs can collaboratively play scavenger…
Validation of smoking-related virtual environments for cue exposure therapy.
García-Rodríguez, Olaya; Pericot-Valverde, Irene; Gutiérrez-Maldonado, José; Ferrer-García, Marta; Secades-Villa, Roberto
2012-06-01
Craving is considered one of the main factors responsible for relapse after smoking cessation. Cue exposure therapy (CET) consists of controlled and repeated exposure to drug-related stimuli in order to extinguish associated responses. The main objective of this study was to assess the validity of 7 virtual reality environments for producing craving in smokers that can be used within the CET paradigm. Forty-six smokers and 44 never-smokers were exposed to 7 complex virtual environments with smoking-related cues that reproduce typical situations in which people smoke, and to a neutral virtual environment without smoking cues. Self-reported subjective craving and psychophysiological measures were recorded during the exposure. All virtual environments with smoking-related cues were able to generate subjective craving in smokers, while no increase was observed for the neutral environment. The most sensitive psychophysiological variable to craving increases was heart rate. The findings provide evidence of the utility of virtual reality for simulating real situations capable of eliciting craving. We also discuss how CET for smoking cessation can be improved through these virtual tools. Copyright © 2012 Elsevier Ltd. All rights reserved.
Distracting people from sources of discomfort in a simulated aircraft environment.
Lewis, Laura; Patel, Harshada; Cobb, Sue; D'Cruz, Mirabelle; Bues, Matthias; Stefani, Oliver; Grobler, Tredeaux
2016-07-19
Comfort is an important factor in the acceptance of transport systems. In 2010 and 2011, the European Commission (EC) put forward its vision for air travel in the year 2050, which envisaged the use of in-flight virtual reality. This paper addressed the EC vision by investigating the effect of virtual environments on comfort. Research has shown that virtual environments can provide entertaining experiences and can be effective distracters from painful experiences. The aim was to determine the extent to which a virtual environment could distract people from sources of discomfort. Experiments were conducted that involved inducing discomfort commonly experienced in-flight (e.g. limited space, noise) in order to determine the extent to which viewing a virtual environment could distract people from that discomfort. Virtual environments can fully or partially distract people from sources of discomfort, becoming more effective when they are interesting. They are also more effective at distracting people from discomfort caused by restricted space than from noise disturbances. Virtual environments have the potential to enhance passenger comfort by providing positive distractions from sources of discomfort. Further research is required to understand more fully the reasons why the effect was stronger for one source of discomfort than the other.
Virtual environment architecture for rapid application development
NASA Technical Reports Server (NTRS)
Grinstein, Georges G.; Southard, David A.; Lee, J. P.
1993-01-01
We describe the MITRE Virtual Environment Architecture (VEA), a product of nearly two years of investigations and prototypes of virtual environment technology. This paper discusses the requirements for rapid prototyping, and an architecture we are developing to support virtual environment construction. VEA supports rapid application development by providing a variety of pre-built modules that can be reconfigured for each application session. The modules supply interfaces for several types of interactive I/O devices, in addition to large-screen or head-mounted displays.
Clandestine Message Passing in Virtual Environments
2008-09-01
Pacheco, Thaiana Barbosa Ferreira; Oliveira Rego, Isabelle Ananda; Campos, Tania Fernandes; Cavalcanti, Fabrícia Azevedo da Costa
2017-01-01
Virtual Reality (VR) has been contributing to neurological rehabilitation because of its interactive and multisensory nature, providing the potential for brain reorganization. Given the availability of mobile EEG devices, it is possible to investigate how the virtual therapeutic environment influences brain activity. The aim was to compare theta, alpha, beta and gamma power in healthy young adults during a lower limb motor task in a virtual and a real environment. Ten healthy adults underwent an EEG assessment while performing a one-minute task consisting of going up and down a step in a virtual environment - the Nintendo Wii virtual game "Basic step" - and in a real environment. The real environment caused an increase in theta and alpha power, with small to large effect sizes mainly in the frontal region. VR caused a greater increase in beta and gamma power, however with small or negligible effects across a variety of regions for the beta frequency, and medium to very large effects in the frontal and occipital regions for the gamma frequency. Theta, alpha, beta and gamma activity during the execution of a motor task differs according to the environment to which the individual is exposed - real or virtual - and the effect sizes vary when brain area activation and the frequency spectrum in each environment are taken into consideration.
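Band power of the kind reported here is commonly computed from the EEG power spectral density; the sketch below uses a synthetic alpha-dominant signal and common band definitions, which are assumptions rather than the study's exact processing pipeline.

    # Hedged sketch: theta/alpha/beta/gamma power from one EEG channel via Welch's method.
    import numpy as np
    from scipy.signal import welch

    fs = 256                                         # Hz, assumed sampling rate
    t = np.arange(0, 60, 1 / fs)
    eeg = (20 * np.sin(2 * np.pi * 10 * t)           # synthetic alpha-dominant trace
           + 5 * np.random.default_rng(0).standard_normal(t.size))

    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        power = np.sum(psd[mask]) * (freqs[1] - freqs[0])   # integrate PSD over the band (rectangle rule)
        print(f"{name}: {power:.1f} uV^2")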
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors' image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today's DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today's DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
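Image-based two-material decomposition of the kind compared here reduces, per pixel, to solving a small linear system that maps low/high-energy CT values to basis-material content; the attenuation coefficients and pixel values below are invented for illustration and do not correspond to the simulated spectra.

    # Hedged sketch of image-based water/iodine decomposition from a dual energy scan.
    import numpy as np

    # Rows: low kV, high kV; columns: water, iodine (assumed effective attenuation values).
    A = np.array([[1.00, 30.0],
                  [1.00, 17.0]])

    low_kv  = np.array([[1.0, 4.0], [1.0, 2.5]])     # toy low-kV image
    high_kv = np.array([[1.0, 2.7], [1.0, 1.85]])    # toy high-kV image

    pixels = np.stack([low_kv.ravel(), high_kv.ravel()])   # shape (2, n_pixels)
    basis = np.linalg.solve(A, pixels)                     # per-pixel basis-material amounts
    water_map  = basis[0].reshape(low_kv.shape)            # basis for a virtual non-contrast image
    iodine_map = basis[1].reshape(low_kv.shape)            # basis for iodine quantification

    print("iodine map:\n", iodine_map.round(3))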
Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis.
Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun
2015-01-01
In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it.
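The prediction pipeline can be pictured with a toy lexicon-based sentiment score and a linear fit relating daily sentiment to next-day price movement; the lexicon, posts, and returns are invented for illustration and are not the authors' data or model.

    # Hedged sketch of sentiment-based virtual currency prediction.
    import numpy as np

    POSITIVE = {"great", "buy", "rich", "up", "profit"}
    NEGATIVE = {"crash", "sell", "scam", "down", "loss"}

    def sentiment(post):
        words = post.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    daily_posts = [
        ["gold price going up huge profit", "time to buy"],
        ["market crash incoming", "sell everything now"],
        ["steady day nothing special"],
    ]
    daily_sentiment = np.array([np.mean([sentiment(p) for p in posts]) for posts in daily_posts])
    next_day_return = np.array([0.04, -0.06, 0.00])        # toy observed next-day returns

    slope, intercept = np.polyfit(daily_sentiment, next_day_return, 1)
    print(f"Predicted return for sentiment +1.5: {slope * 1.5 + intercept:+.3f}")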
Virtual World Currency Value Fluctuation Prediction System Based on User Sentiment Analysis
Kim, Young Bin; Lee, Sang Hyeok; Kang, Shin Jin; Choi, Myung Jin; Lee, Jung; Kim, Chang Hun
2015-01-01
In this paper, we present a method for predicting the value of virtual currencies used in virtual gaming environments that support multiple users, such as massively multiplayer online role-playing games (MMORPGs). Predicting virtual currency values in a virtual gaming environment has rarely been explored; it is difficult to apply real-world methods for predicting fluctuating currency values or shares to the virtual gaming world on account of differences in domains between the two worlds. To address this issue, we herein predict virtual currency value fluctuations by collecting user opinion data from a virtual community and analyzing user sentiments or emotions from the opinion data. The proposed method is straightforward and applicable to predicting virtual currencies as well as to gaming environments, including MMORPGs. We test the proposed method using large-scale MMORPGs and demonstrate that virtual currencies can be effectively and efficiently predicted with it. PMID:26241496
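As a rough illustration of the pipeline this abstract describes (collect community opinions, score their sentiment, and relate currency value changes to the aggregated score), the sketch below fits a one-step-ahead linear predictor on daily sentiment averages. The lexicon, the toy data, and the one-day lag are assumptions for illustration only, not the authors' method.

```python
import numpy as np

# Tiny hypothetical sentiment lexicon; a real system would use a trained
# sentiment classifier over forum/community posts.
LEXICON = {"cheap": -1.0, "crash": -2.0, "dump": -1.5,
           "rally": 2.0, "buy": 1.0, "rare": 1.5}

def post_sentiment(text):
    words = text.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return float(np.mean(scores)) if scores else 0.0

def daily_sentiment(posts_per_day):
    """Average sentiment of all posts collected on each day."""
    return np.array([np.mean([post_sentiment(p) for p in posts]) if posts else 0.0
                     for posts in posts_per_day])

def fit_predictor(sentiment, price):
    """Least-squares fit: next-day price change ~ today's sentiment."""
    x = sentiment[:-1]
    y = np.diff(price)                 # next-day change
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Toy data: 6 days of posts and observed in-game currency prices (arbitrary units).
posts = [["buy rally"], ["rally rare"], ["crash dump"],
         ["dump cheap"], ["buy"], ["rally buy rare"]]
price = np.array([100.0, 103.0, 106.0, 101.0, 97.0, 99.0])

s = daily_sentiment(posts)
slope, intercept = fit_predictor(s, price)
print("predicted next change:", slope * s[-1] + intercept)
```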
Springer, Kristen S; George, Steven Z; Robinson, Michael E
2016-08-01
Previous studies have not examined the assessment of chronic low back pain (CLBP) and pain-related anxiety from a fear avoidance model through the use of motion-capture software and virtual human technologies. The aim of this study was to develop and assess the psychometric properties of an interactive, technologically based hierarchy that can be used to assess patients with pain and pain-related anxiety. We enrolled 30 licensed physical therapists and 30 participants with CLBP. Participants rated 21 video clips of a 3-D animated character (avatar) engaging in activities that are typically feared by patients with CLBP. The results of the study indicate that physical therapists found the virtual hierarchy clips acceptable and felt that they depicted realistic patient experiences. Most participants with CLBP reported at least 1 video clip as being sufficiently anxiety-provoking for use clinically. Therefore, this study suggests that a hierarchy of fears can be created from 21 virtual patient video clips, paving the way for future clinical use in patients with CLBP. This report describes the development of a computer-based virtual patient system for the assessment of back pain-related fear and anxiety. Results show that people with back pain as well as physical therapists found the avatar to be realistic, and the depictions of behavior anxiety- and fear-provoking. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.
The effect of viewing a virtual environment through a head-mounted display on balance.
Robert, Maxime T; Ballaz, Laurent; Lemay, Martin
2016-07-01
In the next few years, several head-mounted displays (HMD) will be publicly released, making virtual reality more accessible. HMDs are expected to be widely popular at home for gaming but also in clinical settings, notably for training and rehabilitation. HMDs can be used in both seated and standing positions; however, at present, the impact of HMDs on balance remains largely unknown. It is therefore crucial to examine the impact of viewing a virtual environment through an HMD on standing balance. The objective was to compare static and dynamic balance in a virtual environment perceived through an HMD and in the physical environment. The visual representation of the virtual environment was based on filmed images of the physical environment and was therefore highly similar to it. This is an observational study in healthy adults. No significant difference was observed between the two environments for static balance. However, dynamic balance was more perturbed in the virtual environment than in the physical environment. HMDs should be used with caution because of their detrimental impact on dynamic balance. Sensorimotor conflict possibly explains the impact of HMDs on balance. Copyright © 2016 Elsevier B.V. All rights reserved.
Vision-based navigation in a dynamic environment for virtual human
NASA Astrophysics Data System (ADS)
Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu
2004-06-01
Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments, and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. The behavior model is divided into three modules: vision, global planning, and local planning. Vision is the only channel through which the virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms to find a path for the virtual human in a dynamic environment. Finally, experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
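For readers unfamiliar with the planning modules mentioned in this abstract, the sketch below shows a standard A* search on a 4-connected grid, a rough stand-in for the global planner; the D* local planner, which replans incrementally as the dynamic environment changes, is not reproduced here.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid. grid[r][c] == 1 marks an obstacle.

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    came_from = {start: None}
    g = {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        if cost > g[cur]:
            continue  # stale heap entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = g[cur] + 1
                if new_cost < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_cost
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_heap, (new_cost + h((nr, nc)), new_cost, (nr, nc)))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```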
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1990-01-01
A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.
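The combining idea can be illustrated with a generic blind estimator: over a sliding window, form the sample covariance of the array outputs and take its dominant eigenvector as the weight vector, which, for a single desired signal in independent noise, approximates the SNR-optimal combiner. This is only a sketch of adaptive signal combining, not the specific sliding-window vector maximum-likelihood estimator proposed by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

def combining_weights(samples):
    """Dominant eigenvector of the sample covariance as combining weights.

    samples: complex array of shape (n_elements, n_samples) from one window.
    """
    R = samples @ samples.conj().T / samples.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    w = eigvecs[:, -1]                                   # largest eigenvalue
    return w / np.linalg.norm(w)

# Simulate a 7-element array receiving one signal with unequal gains plus noise.
n_elem, n_samp = 7, 2000
gains = rng.normal(1.0, 0.3, n_elem) * np.exp(1j * rng.uniform(0, 2 * np.pi, n_elem))
signal = (rng.normal(size=n_samp) + 1j * rng.normal(size=n_samp)) / np.sqrt(2)
noise = (rng.normal(size=(n_elem, n_samp)) + 1j * rng.normal(size=(n_elem, n_samp))) / np.sqrt(2)
x = np.outer(gains, signal) + 0.5 * noise

w = combining_weights(x)
print("single-element SNR ~", np.abs(gains[0]) ** 2 / 0.25)
print("combined SNR       ~", np.abs(w.conj() @ gains) ** 2 / 0.25)
```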
Establishing a virtual learning environment: a nursing experience.
Wood, Anya; McPhee, Carolyn
2011-11-01
The use of virtual worlds has exploded in popularity, but getting started may not be easy. In this article, the authors, members of the corporate nursing education team at University Health Network, outline their experience with incorporating virtual technology into their learning environment. Over a period of several months, a virtual hospital, including two nursing units, was created in Second Life®, allowing more than 500 nurses to role-play in a safe environment without the fear of making a mistake. This experience has provided valuable insight into the best ways to develop and learn in a virtual environment. The authors discuss the challenges of installing and building the Second Life® platform and provide guidelines for preparing users and suggestions for crafting educational activities. This article provides a starting point for organizations planning to incorporate virtual worlds into their learning environment. Copyright 2011, SLACK Incorporated.
Atkins, A.S.; Stroescu, I.; Spagnola, N.B.; Davis, V.G.; Patterson, T.D.; Narasimhan, M.; Harvey, P.D.; Keefe, R.S.E.
2015-01-01
Clinical trials for primary prevention and early intervention in preclinical AD require measures of functional capacity with improved sensitivity to deficits in healthier, non-demented individuals. To this end, the Virtual Reality Functional Capacity Assessment Tool (VRFCAT) was developed as a direct performance-based assessment of functional capacity that is sensitive to changes in function across multiple populations. Using a realistic virtual reality environment, the VRFCAT assesses a subject's ability to complete instrumental activities associated with a shopping trip. The present investigation represents an initial evaluation of the VRFCAT as a potential co-primary measure of functional capacity in healthy aging and preclinical MCI/AD by examining test-retest reliability and associations with cognitive performance in healthy young and older adults. The VRFCAT was compared and contrasted with the UPSA-2-VIM, a traditional performance-based assessment utilizing physical props. Results demonstrated strong age-related differences in performance on each VRFCAT outcome measure, including total completion time, total errors, and total forced progressions. VRFCAT performance showed strong correlations with cognitive performance across both age groups. VRFCAT Total Time demonstrated good test-retest reliability (ICC=.80 in young adults; ICC=.64 in older adults) and insignificant practice effects, indicating the measure is suitable for repeated testing in healthy populations. Taken together, these results provide preliminary support for the VRFCAT as a potential measure of functionally relevant change in primary prevention and preclinical AD/MCI trials. PMID:26618145
Using the Virtual World of Second Life in Veterinary Medicine: Student and Faculty Perceptions.
Mauldin Pereira, Mary; Artemiou, Elpida; McGonigle, Dee; Conan, Anne; Sithole, Fortune; Yvorchuk-St Jean, Kathleen
2018-01-01
Virtual worlds are emerging technologies that can enhance student learning by encouraging active participation through simulation in immersive environments. At Ross University School of Veterinary Medicine (RUSVM), the virtual world of Second Life was piloted as an educational platform for first-semester students to practice clinical reasoning in a simulated veterinary clinical setting. Under the supervision of one facilitator, four groups of nine students met three times to process a clinical case using Second Life. In addition, three groups of four clinical faculty observed one Second Life meeting. Questionnaires using a 4-point Likert scale (1=strongly disagree to 4=strongly agree) and open-ended questions were used to assess student and clinical faculty perceptions of the Second Life platform. Perception scores of students (M=2.7, SD=0.7) and clinical faculty (M=2.7, SD=0.5) indicate that Second Life provides authentic and realistic learning experiences. In fact, students (M=3.4, SD=0.6) and clinical faculty (M=2.9, SD=1.0) indicate that Second Life should be offered to future students. Moreover, content analyses of open-ended responses from students and faculty support the use of Second Life based on reported advantages indicating that Second Life offers a novel and effective instructional method. Ultimately, results indicate that students and clinical faculty had positive educational experiences using Second Life, suggesting the need for further investigation into its application within the curriculum.
Jones, Jake S.
1999-01-01
An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch.
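The interaction described in this record (gaze selects a virtual button, a confirming action issues the command) can be approximated by a small state machine: gaze dwell on a button changes its appearance, and the command fires only when a confirmation input arrives while the button is highlighted. The sketch below is an illustrative approximation, not the patented implementation; the dwell time and frame rate are arbitrary.

```python
class GazeButton:
    """Virtual button armed by gaze dwell and fired by a confirming action."""

    def __init__(self, name, dwell_time=0.5):
        self.name = name
        self.dwell_time = dwell_time   # seconds of gaze before highlighting
        self.gazed_for = 0.0
        self.highlighted = False

    def update(self, gazed_at, confirm_pressed, dt):
        """Advance one frame; returns the command name if it fires."""
        if gazed_at:
            self.gazed_for += dt
        else:
            self.gazed_for = 0.0
        self.highlighted = self.gazed_for >= self.dwell_time  # perceptible change
        if self.highlighted and confirm_pressed:
            return self.name                                   # issue command
        return None

# Simulated frames: (is the user looking at the button?, thumb switch pressed?)
button = GazeButton("open_hatch")
frames = [(True, False)] * 20 + [(True, True)]
for gaze, confirm in frames:
    cmd = button.update(gaze, confirm, dt=1 / 30)
    if cmd:
        print("command issued:", cmd)
```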
Forecasting the Depletion of Transboundary Groundwater Resources in Hyper-Arid Environments
NASA Astrophysics Data System (ADS)
Mazzoni, A.; Heggy, E.
2014-12-01
The increased awareness over recent decades of the overexploitation of transboundary groundwater resources in hyper-arid environments has highlighted the need to better map, monitor, and manage these resources. Climate change and economic and population growth are driving forces that put more pressure on these fragile but fundamental resources. The aim of our approach is to address the question of whether or not groundwater resources, especially non-renewable ones, could serve as a "backstop" water resource during the water shortage periods that are likely to affect the drylands in the upcoming 100 years. The high dependence of arid regions on these resources requires prudent management to preserve their fossil aquifers and exploit them in a more sustainable way. We use the NetLogo environment with the FAO Aquastat Database to evaluate whether the current trends of extraction, consumption, and use of non-renewable groundwater resources would remain feasible under future climate change impacts and population growth scenarios. Three case studies were selected: the Nubian Sandstone Aquifer System, shared between Egypt, Libya, Sudan and Chad; the North Western Sahara Aquifer System, shared by Algeria, Tunisia and Libya; and the central part of the Umm Radhuma Dammam Aquifer, shared between Saudi Arabia, Qatar and Bahrain. The reasons these three fossil aquifers were selected are manifold. First, they represent properly transboundary non-renewable groundwater resources, with all the implications that derive from this, i.e. the necessity of scientific and socio-political cooperation among riparians, the importance of monitoring the status of shared resources, and the need to elaborate a shared management policy. Furthermore, each country is characterized by hyper-arid climatic conditions, which will be exacerbated in the next century by climate change and will likely lead to severe water shortage periods. Together with climate change, population growth will reach unprecedented levels for these areas, causing the water demand of these nations to grow substantially. Our preliminary simulation results suggest that fossil aquifers cannot be used as a long-term solution for water shortage in hyper-arid environments. Aquifers in the Arabian Peninsula are forecasted to be depleted within decades.
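The scenario simulations described in this abstract can be roughed out with a far simpler bookkeeping model: non-renewable storage minus annual extraction, where extraction grows with population and climate-driven demand. The sketch below uses invented figures purely for illustration; it does not reproduce the authors' NetLogo model or the FAO Aquastat values.

```python
def years_to_depletion(storage_km3, extraction_km3_yr, demand_growth=0.02,
                       recharge_km3_yr=0.0, horizon=200):
    """Forecast how many years until a (mostly non-renewable) aquifer is exhausted.

    storage_km3: exploitable groundwater in storage.
    extraction_km3_yr: current annual extraction.
    demand_growth: annual growth of extraction (population + climate pressure).
    recharge_km3_yr: annual recharge (near zero for fossil aquifers).
    Returns the depletion year, or None if storage outlasts the horizon.
    """
    storage, extraction = storage_km3, extraction_km3_yr
    for year in range(1, horizon + 1):
        storage += recharge_km3_yr - extraction
        if storage <= 0:
            return year
        extraction *= 1 + demand_growth
    return None

# Purely illustrative numbers, not the study's data.
print(years_to_depletion(storage_km3=500.0, extraction_km3_yr=6.0, demand_growth=0.03))
```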
Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution
Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir
2016-01-01
Graphical virtual environments are currently far from accessible to blind users as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility but there is still a long way to go. Visual-to-audio Sensory-Substitution-Devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment and offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs virtually utilizes similar skills as when using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training and new insights into multisensory interaction and the visually-deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1: task 1) and surroundings (Experiment 1: task 2) and walk through them; these tasks were accomplished with a 95% and 97% success rate, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks–walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training and suggested many future environments they wished to experience. PMID:26882473
Demonstration of three gorges archaeological relics based on 3D-visualization technology
NASA Astrophysics Data System (ADS)
Xu, Wenli
2015-12-01
This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, including large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling, and integration. First, abundant archaeological information is classified according to its historical and geographical context. Second, a 3D-model library is built using digital image processing and 3D modeling. Third, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.
Ganier, Franck; Hoareau, Charlotte; Tisseau, Jacques
2014-01-01
Virtual reality opens new opportunities for operator training in complex tasks. It lowers costs and has fewer constraints than traditional training. The ultimate goal of virtual training is to transfer knowledge gained in a virtual environment to an actual real-world setting. This study tested whether a maintenance procedure could be learnt equally well by virtual-environment and conventional training. Forty-two adults were divided into three equally sized groups: virtual training (GVT® [generic virtual training]), conventional training (using a real tank suspension and preparation station) and control (no training). Participants then performed the procedure individually in the real environment. Both training types (conventional and virtual) produced similar levels of performance when the procedure was carried out in real conditions. Performance level for the two trained groups was better in terms of success and time taken to complete the task, time spent consulting job instructions and number of times the instructor provided guidance.
Using hyperspectral remote sensing techniques to monitor pine wood nematode disease: a preliminary study
NASA Astrophysics Data System (ADS)
Qin, Lin; Wang, Xianghong; Jiang, Jing; Yang, Xianchang; Ke, Daiyan; Li, Hongqun; Wang, Dingyi
2016-10-01
Pine wilt disease is a devastating disease of pine trees. In China, it was first discovered in 1982 at Dr. Sun Yat-sen's Mausoleum in Nanjing. By 2005 it affected an area of 77000 hm2, and more than 1540000 pine trees died in that year. Many districts of Chongqing in the Three Gorges Reservoir region show differing degrees of pine wilt disease occurrence, a serious threat to the ecological environment of the reservoir area. In this study, an unmanned airship carrying hyperspectral remote sensing instrumentation was used to investigate early diagnosis, early warning, and forecasting of pine wood nematode disease. Hyperspectral data and digital orthophoto map data of Yongsheng Forestry, Fuling District, were acquired in September 2015. Using digital image processing on the digital orthophoto map, the number of diseased trees and their distribution were identified automatically. The hyperspectral remote sensing data were processed with a spectrum comparison algorithm, which also yielded the number and distribution of diseased pine trees. Comparing the two results, the distribution areas of diseased pine trees are basically the same, indicating that low-altitude remote sensing can successfully monitor the distribution of pine wood nematode disease. The results also show that the hyperspectral analysis is more accurate and less affected by environmental factors than the digital orthophoto map analysis, and allows more environmental variables to be extracted, so hyperspectral data are the direction for future development.
Focus, locus, and sensus: the three dimensions of virtual experience.
Waterworth, E L; Waterworth, J A
2001-04-01
A model of virtual/physical experience is presented, which provides a three dimensional conceptual space for virtual and augmented reality (VR and AR) comprising the dimensions of focus, locus, and sensus. Focus is most closely related to what is generally termed presence in the VR literature. When in a virtual environment, presence is typically shared between the VR and the physical world. "Breaks in presence" are actually shifts of presence away from the VR and toward the external environment. But we can also have "breaks in presence" when attention moves toward absence--when an observer is not attending to stimuli present in the virtual environment, nor to stimuli present in the surrounding physical environment--when the observer is present in neither the virtual nor the physical world. We thus have two dimensions of presence: focus of attention (between presence and absence) and the locus of attention (the virtual vs. the physical world). A third dimension is the sensus of attention--the level of arousal determining whether the observer is highly conscious or relatively unconscious while interacting with the environment. After expanding on each of these three dimensions of experience in relation to VR, we present a couple of educational examples as illustrations, and also relate our model to a suggested spectrum of evaluation methods for virtual environments.
Virtual Environments in Scientific Visualization
NASA Technical Reports Server (NTRS)
Bryson, Steve; Lisinski, T. A. (Technical Monitor)
1994-01-01
Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conforms to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.
Digital Immersive Virtual Environments and Instructional Computing
ERIC Educational Resources Information Center
Blascovich, Jim; Beall, Andrew C.
2010-01-01
This article reviews theory and research relevant to the development of digital immersive virtual environment-based instructional computing systems. The review is organized within the context of a multidimensional model of social influence and interaction within virtual environments that models the interaction of four theoretical factors: theory…
Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality.
Han, Dustin T; Suhail, Mohamed; Ragan, Eric D
2018-04-01
Virtual reality often uses motion tracking to incorporate physical hand movements into interaction techniques for selection and manipulation of virtual objects. To increase realism and allow direct hand interaction, real-world physical objects can be aligned with virtual objects to provide tactile feedback and physical grasping. However, unless a physical space is custom configured to match a specific virtual reality experience, the ability to perfectly match the physical and virtual objects is limited. Our research addresses this challenge by studying methods that allow one physical object to be mapped to multiple virtual objects that can exist at different virtual locations in an egocentric reference frame. We study two such techniques: one that introduces a static translational offset between the virtual and physical hand before a reaching action, and one that dynamically interpolates the position of the virtual hand during a reaching motion. We conducted two experiments to assess how the two methods affect reaching effectiveness, comfort, and ability to adapt to the remapping techniques when reaching for objects with different types of mismatches between physical and virtual locations. We also present a case study to demonstrate how the hand remapping techniques could be used in an immersive game application to support realistic hand interaction while optimizing usability. Overall, the translational technique performed better than the interpolated reach technique and was more robust for situations with larger mismatches between virtual and physical objects.
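The two remapping techniques compared in this abstract can be expressed compactly: a constant offset applied to the virtual hand before the reach, versus an offset blended in over the course of the reach based on progress toward the target. The sketch below is a schematic reconstruction of that idea; the variable names and the linear progress measure are our own assumptions, not the authors' published implementation.

```python
import numpy as np

def translational_remap(physical_hand, offset):
    """Static technique: the whole reach is shifted by a fixed offset."""
    return physical_hand + offset

def interpolated_remap(physical_hand, start, physical_target, virtual_target):
    """Dynamic technique: the offset is blended in as the reach progresses.

    Progress is measured as the fraction of the start-to-physical-target
    distance already covered; at progress 1 the virtual hand lies on the
    virtual target while the physical hand touches the physical prop.
    """
    total = np.linalg.norm(physical_target - start)
    progress = np.clip(np.linalg.norm(physical_hand - start) / total, 0.0, 1.0)
    offset = virtual_target - physical_target
    return physical_hand + progress * offset

# One physical prop at (0.4, 0, 0.3); the virtual object sits 10 cm to the left.
start = np.array([0.0, 0.0, 0.0])
phys_target = np.array([0.4, 0.0, 0.3])
virt_target = np.array([0.3, 0.0, 0.3])

for t in np.linspace(0.0, 1.0, 5):
    hand = start + t * (phys_target - start)        # simulated reach
    print(t, interpolated_remap(hand, start, phys_target, virt_target))
```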
ERIC Educational Resources Information Center
Boyd, Pete; Smith, Caroline; Ilhan Beyaztas, Dilek
2015-01-01
Academic developers need to understand the situated workplaces of the academic tribes they are supporting. This study proposes the use of the expansive--restrictive workplace learning environment continuum as a tool for evaluation of academic workplaces. The tool is critically appraised through its application to the analysis of workplace…
Lino, Caroline A; Shibata, Caroline E R; Barreto-Chaves, Maria Luiza M
2014-03-01
Changes in perinatal environment can lead to physiological, morphological, or metabolic alterations in adult life. It is well known that thyroid hormones (TH) are critical for the development, growth, and maturation of organs and systems. In addition, TH interact with the renin-angiotensin system (RAS), and both play a critical role in adult cardiovascular function. The objective of this study was to evaluate the effect of maternal hyperthyroidism on cardiac RAS components in pups during development. From gestational day nine (GD9), pregnant Wistar rats received thyroxine (T4, 12 mg/l in tap water; Hyper group) or vehicle (control group). Dams and pups were killed on GD18 and GD20. Serum concentrations of triiodothyronine (T3) and T4 were higher in the Hyper group than in the control group dams. Cardiac hypertrophy was observed in Hyper pups on GD20. Cardiac angiotensin-converting enzyme (ACE) activity was significantly lower in Hyper pups on both GD18 and GD20, but there was no difference in Ang I/Ang II levels. Ang II receptors expression was higher in the Hyper pup heart on GD18. Maternal hyperthyroidism is associated with alterations in fetal development and altered pattern of expression in RAS components, which in addition to cardiac hypertrophy observed on GD20 may represent an important predisposing factor to cardiovascular diseases in adult life.
A TCP model for external beam treatment of intermediate-risk prostate cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Sean; Putten, Wil van der
2013-03-15
Purpose: Biological models offer the ability to predict clinical outcomes. The authors describe a model to predict the clinical response of intermediate-risk prostate cancer to external beam radiotherapy for a variety of fractionation regimes. Methods: A fully heterogeneous population averaged tumor control probability model was fit to clinical outcome data for hyper, standard, and hypofractionated treatments. The tumor control probability model was then employed to predict the clinical outcome of extreme hypofractionation regimes, as utilized in stereotactic body radiotherapy. Results: The tumor control probability model achieves an excellent level of fit, an R² value of 0.93 and a root mean squared error of 1.31%, to the clinical outcome data for hyper, standard, and hypofractionated treatments using realistic values for biological input parameters. Residuals ≤ 1.0% are produced by the tumor control probability model when compared to clinical outcome data for stereotactic body radiotherapy. Conclusions: The authors conclude that this tumor control probability model, used with the optimized radiosensitivity values obtained from the fit, is an appropriate mechanistic model for the analysis and evaluation of external beam RT plans with regard to tumor control for these clinical conditions.
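For orientation, a population-averaged Poisson tumor control probability with linear-quadratic cell kill, the general model family this abstract refers to, can be written as TCP = E[exp(-N0 exp(-n(αd + βd²)))], averaged over a distribution of radiosensitivity α. The sketch below implements that textbook form with invented parameter values; it is not the authors' fitted heterogeneous model or their optimized radiosensitivity estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_tcp(n_fractions, dose_per_fraction, alpha, beta, n_clonogens):
    """Poisson TCP with linear-quadratic cell survival, no repopulation."""
    log_sf_per_fraction = -(alpha * dose_per_fraction + beta * dose_per_fraction ** 2)
    surviving = n_clonogens * np.exp(n_fractions * log_sf_per_fraction)
    return np.exp(-surviving)

def population_tcp(n_fractions, dose_per_fraction, alpha_mean, alpha_sd,
                   alpha_beta=3.0, n_clonogens=1e6, n_patients=5000):
    """Average TCP over a population with normally distributed radiosensitivity."""
    alphas = np.clip(rng.normal(alpha_mean, alpha_sd, n_patients), 1e-3, None)
    betas = alphas / alpha_beta
    return np.mean(poisson_tcp(n_fractions, dose_per_fraction, alphas, betas, n_clonogens))

# Illustrative comparison of a standard and a hypofractionated schedule
# (parameter values are placeholders, not fitted to any clinical data).
print("39 x 2 Gy :", population_tcp(39, 2.0, alpha_mean=0.20, alpha_sd=0.04))
print(" 5 x 7 Gy :", population_tcp(5, 7.0, alpha_mean=0.20, alpha_sd=0.04))
```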
Lab4CE: A Remote Laboratory for Computer Education
ERIC Educational Resources Information Center
Broisin, Julien; Venant, Rémi; Vidal, Philippe
2017-01-01
Remote practical activities have been demonstrated to be efficient when learners come to acquire inquiry skills. In computer science education, virtualization technologies are gaining popularity as this technological advance enables instructors to implement realistic practical learning activities, and learners to engage in authentic and…
Jones, J.S.
1999-01-12
An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment are disclosed. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch. 4 figs.
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A multiple inputs-driven realistic facial animation system based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. A tri-phone model is used to reduce the computational consumption of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang
2018-01-25
Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such an invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic and real-time interactive simulator for training the colonoscopy procedure, including polypectomy simulation, is presented. Our approach models the colonoscope as a thick, flexible elastic rod with different resolutions that adapt dynamically to the curvature of the colon. Further material characteristics of this deformable material are integrated into our discrete model to realistically simulate the behavior of the colonoscope. In addition, we propose a set of key aspects of our simulator that give fast, high-fidelity feedback to trainees. We also conducted an initial validation of this colonoscopic simulator to determine its clinical utility and efficacy.
Villard, Caroline; Soler, Luc; Gangi, Afshin
2005-08-01
For radiofrequency ablation (RFA) of liver tumors, evaluation of vascular architecture, post-RFA necrosis prediction, and the choice of a suitable needle placement strategy using conventional radiological techniques remain difficult. In an attempt to enhance the safety of RFA, a 3D simulator, treatment planning, and training tool, which simulates the insertion of the needle and the necrosis of the treated area and proposes an optimal needle placement, has been developed. The 3D scenes are automatically reconstructed from enhanced spiral CT scans. The simulator takes into account the cooling effect of local vessels greater than 3 mm in diameter, making necrosis shapes more realistic. Optimal needle positioning can be automatically generated by the software to produce complete destruction of the tumor, with maximum respect of the healthy liver and of all major structures to avoid. We also studied how the use of virtual reality and haptic devices makes simulation and training more realistic and effective.
Virtual Learning Environment for Interactive Engagement with Advanced Quantum Mechanics
ERIC Educational Resources Information Center
Pedersen, Mads Kock; Skyum, Birk; Heck, Robert; Müller, Romain; Bason, Mark; Lieberoth, Andreas; Sherson, Jacob F.
2016-01-01
A virtual learning environment can engage university students in the learning process in ways that the traditional lectures and lab formats cannot. We present our virtual learning environment "StudentResearcher," which incorporates simulations, multiple-choice quizzes, video lectures, and gamification into a learning path for quantum…
ERIC Educational Resources Information Center
Hassani, Kaveh; Nahvi, Ali; Ahmadi, Ali
2016-01-01
In this paper, we present an intelligent architecture, called intelligent virtual environment for language learning, with embedded pedagogical agents for improving listening and speaking skills of non-native English language learners. The proposed architecture integrates virtual environments into the Intelligent Computer-Assisted Language…
Usability Evaluation of an Adaptive 3D Virtual Learning Environment
ERIC Educational Resources Information Center
Ewais, Ahmed; De Troyer, Olga
2013-01-01
Using 3D virtual environments for educational purposes is becoming attractive because of their rich presentation and interaction capabilities. Furthermore, dynamically adapting the 3D virtual environment to the personal preferences, prior knowledge, skills and competence, learning goals, and the personal or (social) context in which the learning…
Exploring Non-Traditional Learning Methods in Virtual and Real-World Environments
ERIC Educational Resources Information Center
Lukman, Rebeka; Krajnc, Majda
2012-01-01
This paper identifies the commonalities and differences within non-traditional learning methods regarding virtual and real-world environments. The non-traditional learning methods in real-world have been introduced within the following courses: Process Balances, Process Calculation, and Process Synthesis, and within the virtual environment through…
Learning Relative Motion Concepts in Immersive and Non-Immersive Virtual Environments
ERIC Educational Resources Information Center
Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria
2013-01-01
The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop…
ERIC Educational Resources Information Center
Katlianik, Ivan
2013-01-01
Enabling distant individuals to assemble in one virtual environment, synchronous distance learning appeals to researchers and practitioners alike because of its unique educational opportunities. One of the vital components of successful synchronous distance learning is interactivity. In virtual environments, interactivity is limited by the…
Nature and origins of virtual environments - A bibliographical essay
NASA Technical Reports Server (NTRS)
Ellis, S. R.
1991-01-01
Virtual environments presented via head-mounted, computer-driven displays provide a new media for communication. They may be analyzed by considering: (1) what may be meant by an environment; (2) what is meant by the process of virtualization; and (3) some aspects of human performance that constrain environmental design. Their origins are traced from previous work in vehicle simulation and multimedia research. Pointers are provided to key technical references, in the dispersed, archival literature, that are relevant to the development and evaluation of virtual-environment interface systems.
Ontological implications of being in immersive virtual environments
NASA Astrophysics Data System (ADS)
Morie, Jacquelyn F.
2008-02-01
The idea of Virtual Reality once conjured up visions of new territories to explore, and expectations of awaiting worlds of wonder. VR has matured to become a practical tool for therapy, medicine and commercial interests, yet artists, in particular, continue to expand the possibilities for the medium. Artistic virtual environments created over the past two decades probe the phenomenological nature of these virtual environments. When we inhabit a fully immersive virtual environment, we have entered into a new form of Being. Not only does our body continue to exist in the real, physical world, we are also embodied within the virtual by means of technology that translates our bodied actions into interactions with the virtual environment. Very few states in human existence allow this bifurcation of our Being, where we can exist simultaneously in two spaces at once, with the possible exception of meta-physical states such as shamanistic trance and out-of-body experiences. This paper discusses the nature of this simultaneous Being, how we enter the virtual space, what forms of persona we can don there, what forms of spaces we can inhabit, and what type of wondrous experiences we can both hope for and expect.
NASA Astrophysics Data System (ADS)
McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.
2017-12-01
Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen an impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a piece of virtual object is projected into the real world with which researchers could interact. There are several limitations to the purely VR or AR application when taken within the context of remote planetary exploration. For example, in a purely VR environment, contents of the planet surface (e.g. rocks, terrain, or other features) should be created off-line from a multitude of images using image processing techniques to generate 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real-time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames will lack 3D visual information -i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique will blend the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real-time into the virtual environment. Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
Metabolic adaptation to long term changes in gravity environment
NASA Astrophysics Data System (ADS)
Slenzka, K.; Appel, R.; Rahmann, H.
Biochemical analyses of the brains of cichlid fish larvae, exposed during their very early development for 7 days to an increased acceleration of 3g (hyper-gravity), revealed a decrease in brain nucleoside diphosphate kinase (NDPK) as well as creatine kinase (BB-CK) activity. Using high performance liquid chromatography (HPLC), the concentrations of adenine nucleotides (AMP, ADP, ATP), phosphocreatine (CP), and nicotinamide adenine dinucleotides (NAD, NADP) were analyzed in the brains of hyper-g exposed larvae vs. 1g controls. A slight reduction in the total adenine nucleotides (TAN) as well as the adenylate energy charge (AEC) was found. In parallel, a significant increase in the NAD concentration and a corresponding decrease in NADP concentration occurred in the brains of hyper-g exposed larvae vs. 1g controls. These results give further evidence for an influence of gravity at the cellular level and furthermore contribute to a clarification of the cellular signal-response chain for gravity perception.
NASA Astrophysics Data System (ADS)
Koyama, Yusei; Hayashi, Masao; Tanaka, Masayuki; Kodama, Tadayuki; Shimakawa, Rhythm; Yamamoto, Moegi; Nakata, Fumiaki; Tanaka, Ichi; Suzuki, Tomoko L.; Tadaki, Ken-ichi; Nishizawa, Atsushi J.; Yabe, Kiyoto; Toba, Yoshiki; Lin, Lihwai; Jian, Hung-Yu; Komiyama, Yutaka
2018-01-01
We present the environmental dependence of color, stellar mass, and star formation (SF) activity in Hα-selected galaxies along the large-scale structure at z = 0.4 hosting twin clusters in the DEEP2-3 field, discovered by the Subaru Strategic Program of Hyper Suprime-Cam (HSC SSP). By combining photo-z-selected galaxies and Hα emitters selected with broadband and narrowband (NB) data from the recent data release of HSC SSP (DR1), we confirm that galaxies in higher-density environments or galaxies in cluster central regions show redder colors. We find that there still remains a possible color-density and color-radius correlation even if we restrict the sample to Hα-selected galaxies, probably due to the presence of massive Hα emitters in denser regions. We also find a hint of increased star formation rates (SFR) amongst Hα emitters toward the highest-density environment, again primarily driven by the excess of red/massive Hα emitters in high-density environments, while their specific SFRs do not significantly change with environment. This work demonstrates the power of the HSC SSP NB data for studying SF galaxies across environments in the distant universe.
Dibble, Edward; Zivanovic, Aleksandar; Davies, Brian
2004-01-01
This paper presents the results of several early studies relating to human haptic perception sensitivity when probing a virtual object. A 1 degree of freedom (DoF) rotary haptic system, that was designed and built for this purpose, is also presented. The experiments were to assess the maximum forces applied in a minimally invasive surgery (MIS) procedure, quantify the compliance sensitivity threshold when probing virtual tissue and identify the haptic system loop rate necessary for haptic feedback to feel realistic.
A virtual reality browser for Space Station models
NASA Technical Reports Server (NTRS)
Goldsby, Michael; Pandya, Abhilash; Aldridge, Ann; Maida, James
1993-01-01
The Graphics Analysis Facility at NASA/JSC has created a visualization and learning tool by merging its database of detailed geometric models with a virtual reality system. The system allows an interactive walk-through of models of the Space Station and other structures, providing detailed realistic stereo images. The user can activate audio messages describing the function and connectivity of selected components within his field of view. This paper presents the issues and trade-offs involved in the implementation of the VR system and discusses its suitability for its intended purposes.
The virtual environment display system
NASA Technical Reports Server (NTRS)
Mcgreevy, Michael W.
1991-01-01
Virtual environment technology is a display and control technology that can surround a person in an interactive computer generated or computer mediated virtual environment. It has evolved at NASA-Ames since 1984 to serve NASA's missions and goals. The exciting potential of this technology, sometimes called Virtual Reality, Artificial Reality, or Cyberspace, has been recognized recently by the popular media, industry, academia, and government organizations. Much research and development will be necessary to bring it to fruition.
Hyper-X: Flight Validation of Hypersonic Airbreathing Technology
NASA Technical Reports Server (NTRS)
Rausch, Vincent L.; McClinton, Charles R.; Crawford, J. Larry
1997-01-01
This paper provides an overview of NASA's focused hypersonic technology program, i.e. the Hyper-X program. This program is designed to move hypersonic, air breathing vehicle technology from the laboratory environment to the flight environment, the last stage preceding prototype development. This paper presents some history leading to the flight test program, research objectives, approach, schedule and status. Substantial experimental data base and concept validation have been completed. The program is concentrating on Mach 7 vehicle development, verification and validation in preparation for wind tunnel testing in 1998 and flight testing in 1999. It is also concentrating on finalization of the Mach 5 and 10 vehicle designs. Detailed evaluation of the Mach 7 vehicle at the flight conditions is nearing completion, and will provide a data base for validation of design methods once flight test data are available.
Measurement Tools for the Immersive Visualization Environment: Steps Toward the Virtual Laboratory.
Hagedorn, John G; Dunkers, Joy P; Satterfield, Steven G; Peskin, Adele P; Kelso, John T; Terrill, Judith E
2007-01-01
This paper describes a set of tools for performing measurements of objects in a virtual reality based immersive visualization environment. These tools enable the use of the immersive environment as an instrument for extracting quantitative information from data representations that hitherto had been used solely for qualitative examination. We provide, within the virtual environment, ways for the user to analyze and interact with the quantitative data generated. We describe results generated by these methods to obtain dimensional descriptors of tissue engineered medical products. We regard this toolbox as our first step in the implementation of a virtual measurement laboratory within an immersive visualization environment.
Modeling Environmental Impacts on Cognitive Performance for Artificially Intelligent Entities
2017-06-01
...of the agent behavior model is presented in a military-relevant virtual game environment. We then outline a quantitative approach to test the agent behavior model within the virtual environment. Results show... [Figure: Game View of Hot Environment Condition Displaying Total "f" Cost for Each Searched Waypoint Node]
ERIC Educational Resources Information Center
Lim, Keol; Kim, Mi Hwa
2015-01-01
The use of virtual learning environments (VLEs) has become more common and educators recognized the potential of VLEs as educational environments. The learning community in VLEs can be a mixture of people from all over the world with different cultural backgrounds. However, despite many studies about the use of virtual environments for learning,…
Dura-Bernal, S.; Neymotin, S. A.; Kerr, C. C.; Sivagnanam, S.; Majumdar, A.; Francis, J. T.; Lytton, W. W.
2017-01-01
Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics. PMID:29200477
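The metaparameter search described in this abstract uses parallel evolutionary optimization with an island model: several subpopulations evolve independently and occasionally exchange their best individuals. The sketch below shows that structure on a toy objective, run serially and without the spiking model; the fitness function and all settings are placeholders, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(params):
    """Placeholder objective standing in for simulated reaching performance."""
    return -np.sum((params - np.array([0.5, -1.0, 2.0])) ** 2)

def evolve_island(pop, generations=10, sigma=0.2):
    """Simple (mu + lambda)-style evolution of one island."""
    for _ in range(generations):
        children = pop + rng.normal(0.0, sigma, pop.shape)       # mutation
        union = np.vstack([pop, children])
        scores = np.array([fitness(p) for p in union])
        pop = union[np.argsort(scores)[-len(pop):]]               # truncation selection
    return pop

def island_model(n_islands=4, pop_size=20, dim=3, epochs=5):
    islands = [rng.normal(0.0, 2.0, (pop_size, dim)) for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve_island(pop) for pop in islands]          # could run in parallel
        # Migration: each island's best individual replaces a random
        # member of the next island (ring topology).
        bests = [pop[np.argmax([fitness(p) for p in pop])] for pop in islands]
        for i, best in enumerate(bests):
            target = islands[(i + 1) % n_islands]
            target[rng.integers(len(target))] = best
    all_params = np.vstack(islands)
    return all_params[np.argmax([fitness(p) for p in all_params])]

print("best metaparameters found:", island_model())
```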
Exploring the Processes of Generating LOD (0-2) Citygml Models in Greater Municipality of Istanbul
NASA Astrophysics Data System (ADS)
Buyuksalih, I.; Isikdag, U.; Zlatanova, S.
2013-08-01
3D models of cities, visualised and explored in 3D virtual environments, have been available for several years. Currently, a large number of impressively realistic 3D models are regularly presented at scientific, professional, and commercial events. One of the most promising developments is the OGC standard CityGML. CityGML is an object-oriented model that supports 3D geometry and thematic semantics, attributes, and relationships, and offers advanced options for realistic visualization. One of the very attractive characteristics of the model is the support of five levels of detail (LOD), starting from a less accurate 2.5D model (LOD0) and ending with a very detailed indoor model (LOD4). Different local government offices and municipalities have different needs when utilizing CityGML models, and the process of model generation depends on local and domain-specific needs. Although the processes (i.e. the tasks and activities) for generating the models differ depending on their utilization purpose, there are also some common tasks (i.e. common denominator processes) in the generation of CityGML models. This paper focuses on defining the common tasks in the generation of LOD 0-2 CityGML models and representing them in a formal way with process modeling diagrams.
A Flight Training Simulator for Instructing the Helicopter Autorotation Maneuver (Enhanced Version)
NASA Technical Reports Server (NTRS)
Rogers, Steven P.; Asbury, Charles N.
2000-01-01
Autorotation is a maneuver that permits a safe helicopter landing when the engine loses power. A catastrophe may occur if the pilot's control inputs are incorrect, insufficient, excessive, or poorly timed. Due to the danger involved, full-touchdown autorotations are very rarely practiced. Because in-flight autorotation training is risky, time-consuming, and expensive, the objective of the project was to develop the first helicopter flight simulator expressly designed to train students in this critical maneuver. A central feature of the project was the inclusion of an enhanced version of the Pilot-Rotorcraft Intelligent Symbology Management Simulator (PRISMS), a virtual-reality system developed by Anacapa Sciences and Thought Wave. A task analysis was performed to identify the procedural steps in the autorotation, to inventory the information needed to support student task performance, to identify typical errors, and to structure the simulator's practice environment. The system provides immediate knowledge of results, extensive practice of perceptual-motor skills, part-task training, and augmented cueing in a realistic cockpit environment. Additional work, described in this report, extended the capabilities of the simulator in three areas: 1. Incorporation of visual training aids to assist the student in learning the proper appearance of the visual scene when the maneuver is being properly performed; 2. Introduction of the requirement to land at a particular spot, as opposed to the wide, flat open field initially used, and development of appropriate metrics of success; and 3. Inclusion of wind speed and wind direction settings (and random variability settings) to add a more realistic challenge in "hitting the spot."
Physically-Based Modelling and Real-Time Simulation of Fluids.
NASA Astrophysics Data System (ADS)
Chen, Jim Xiong
1995-01-01
Simulating physically realistic complex fluid behaviors presents an extremely challenging problem for computer graphics researchers. Such behaviors include the effects of driving boats through water, blending differently colored fluids, rain falling and flowing on a terrain, fluids interacting in a Distributed Interactive Simulation (DIS), etc. Such capabilities are useful in computer art, advertising, education, entertainment, and training. We present a new method for physically-based modeling and real-time simulation of fluids in computer graphics and dynamic virtual environments. By solving the 2D Navier -Stokes equations using a CFD method, we map the surface into 3D using the corresponding pressures in the fluid flow field. This achieves realistic real-time fluid surface behaviors by employing the physical governing laws of fluids but avoiding extensive 3D fluid dynamics computations. To complement the surface behaviors, we calculate fluid volume and external boundary changes separately to achieve full 3D general fluid flow. To simulate physical activities in a DIS, we introduce a mechanism which uses a uniform time scale proportional to the clock-time and variable time-slicing to synchronize physical models such as fluids in the networked environment. Our approach can simulate many different fluid behaviors by changing the internal or external boundary conditions. It can model different kinds of fluids by varying the Reynolds number. It can simulate objects moving or floating in fluids. It can also produce synchronized general fluid flows in a DIS. Our model can serve as a testbed to simulate many other fluid phenomena which have never been successfully modeled previously.
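The dissertation couples a genuine 2D Navier-Stokes solve to a displayable 3D surface; as a much cheaper stand-in that conveys the idea of driving a renderable surface from a 2D field, the sketch below updates a height-field with the damped 2D wave equation. This is an illustrative simplification, not the CFD method used in the work.

```python
import numpy as np

def step_heightfield(h, h_prev, c=0.3, damping=0.995):
    """One explicit step of the damped 2D wave equation on a height-field.

    h, h_prev: current and previous surface heights (2-D arrays).
    Returns the next height-field; boundaries are held fixed at zero.
    """
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4 * h)
    h_next = damping * (2 * h - h_prev + (c ** 2) * lap)
    h_next[0, :] = h_next[-1, :] = h_next[:, 0] = h_next[:, -1] = 0.0
    return h_next

# Drop a "disturbance" (e.g. a boat wake sample) into a 64x64 water surface
# and advance a few frames; in a renderer each frame would displace the mesh.
n = 64
h = np.zeros((n, n))
h_prev = np.zeros((n, n))
h[n // 2, n // 2] = 1.0

for frame in range(100):
    h, h_prev = step_heightfield(h, h_prev), h
print("max surface height after 100 frames:", h.max())
```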
Methods and systems relating to an augmented virtuality environment
Nielsen, Curtis W; Anderson, Matthew O; McKay, Mark D; Wadsworth, Derek C; Boyce, Jodie R; Hruska, Ryan C; Koudelka, John A; Whetten, Jonathan; Bruemmer, David J
2014-05-20
Systems and methods relating to an augmented virtuality system are disclosed. A method of operating an augmented virtuality system may comprise displaying imagery of a real-world environment in an operating picture. The method may further include displaying a plurality of virtual icons in the operating picture representing at least some assets of a plurality of assets positioned in the real-world environment. Additionally, the method may include displaying at least one virtual item in the operating picture representing data sensed by one or more of the assets of the plurality of assets and remotely controlling at least one asset of the plurality of assets by interacting with a virtual icon associated with the at least one asset.
Object Creation and Human Factors Evaluation for Virtual Environments
NASA Technical Reports Server (NTRS)
Lindsey, Patricia F.
1998-01-01
The main objective of this project is to provide test objects for simulated environments utilized by the recently established Army/NASA Virtual Innovations Lab (ANVIL) at Marshall Space Flight Center, Huntsville, AL. The objective of the ANVIL lab is to provide virtual reality (VR) models and environments and to provide visualization and manipulation methods for the purpose of training and testing. Visualization equipment used in the ANVIL lab includes head-mounted and boom-mounted immersive virtual reality display devices. Objects in the environment are manipulated using a data glove, hand controller, or mouse. These simulated objects are solid or surfaced three-dimensional models. They may be viewed or manipulated from any location within the environment and may be viewed on-screen or via immersive VR. The objects are created using various CAD modeling packages and are converted into the virtual environment using dVise. This enables the object or environment to be viewed from any angle or distance for training or testing purposes.
The Influence of Virtual Learning Environments in Students' Performance
ERIC Educational Resources Information Center
Alves, Paulo; Miranda, Luísa; Morais, Carlos
2017-01-01
This paper focuses mainly on the relation between the use of a virtual learning environment (VLE) and students' performance. Virtual learning environments are characterised, and a study is presented emphasising the frequency of access to a VLE and its relation to the performance of students from a public higher education institution…
Full Immersive Virtual Environment Cave[TM] in Chemistry Education
ERIC Educational Resources Information Center
Limniou, M.; Roberts, D.; Papadopoulos, N.
2008-01-01
By comparing two-dimensional (2D) chemical animations designed for the computer desktop with three-dimensional (3D) chemical animations designed for the fully immersive virtual reality environment CAVE[TM], we studied how virtual reality environments could raise students' interest and motivation for learning. By using 3ds max[TM], we can visualize…
Temporal Issues in the Design of Virtual Learning Environments.
ERIC Educational Resources Information Center
Bergeron, Bryan; Obeid, Jihad
1995-01-01
Describes design methods used to influence user perception of time in virtual learning environments. Examines the use of temporal cues in medical education and clinical competence testing. Finds that user perceptions of time affect user acceptance, ease of use, and the level of realism of a virtual learning environment. Contains 51 references.…
The Doubtful Guest? A Virtual Research Environment for Education
ERIC Educational Resources Information Center
Laterza, Vito; Carmichael, Patrick; Procter, Richard
2007-01-01
In this paper the authors describe a novel "Virtual Research Environment" (VRE) based on the Sakai Virtual Collaboration Environment and designed to support education research. This VRE has been used for the past two years by projects of the UK Economic and Social Research Council's Teaching and Learning Research Programme, 10 of which…
Using Virtual Reality to Help Students with Social Interaction Skills
ERIC Educational Resources Information Center
Beach, Jason; Wendt, Jeremy
2015-01-01
The purpose of this study was to determine if participants could improve their social interaction skills by participating in a virtual immersive environment. The participants used a developing virtual reality head-mounted display to engage themselves in a fully-immersive environment. While in the environment, participants had an opportunity to…
ERIC Educational Resources Information Center
Passarelli, Brasilina
2008-01-01
Introduction: The ToLigado Project--Your School Interactive Newspaper is an interactive virtual learning environment conceived, developed, implemented and supported by researchers at the School of the Future Research Laboratory of the University of Sao Paulo, Brazil. Method: This virtual learning environment aims to motivate trans-disciplinary…
Using SOLO to Evaluate an Educational Virtual Environment in a Technology Education Setting
ERIC Educational Resources Information Center
Padiotis, Ioannis; Mikropoulos, Tassos A.
2010-01-01
The present research investigates the contribution of an interactive educational virtual environment on milk pasteurization to the learning outcomes of 40 students in a technical secondary school using SOLO taxonomy. After the interaction with the virtual environment the majority of the students moved to higher hierarchical levels of understanding…
Virtual Environments Supporting Learning and Communication in Special Needs Education
ERIC Educational Resources Information Center
Cobb, Sue V. G.
2007-01-01
Virtual reality (VR) describes a set of technologies that allow users to explore and experience 3-dimensional computer-generated "worlds" or "environments." These virtual environments can contain representations of real or imaginary objects on a small or large scale (from modeling of molecular structures to buildings, streets, and scenery of a…
ERIC Educational Resources Information Center
Akdemir, Ömür; Vural, Ömer F.; Çolakoglu, Özgür M.
2015-01-01
Individuals act differently in virtual environments than in real life. The primary purpose of this study is to investigate prospective teachers' likelihood of performing unethical behaviors in real and virtual environments. Prospective teachers were surveyed online, and their perceptions were collected for various scenarios. Findings revealed…
Science Games and the Development of Scientific Possible Selves
ERIC Educational Resources Information Center
Beier, Margaret E.; Miller, Leslie M.; Wang, Shu
2012-01-01
Serious scientific games, especially those that include a virtual apprenticeship component, provide players with realistic experiences in science. This article discusses how science games can influence learning about science and the development of science-oriented possible selves through repeated practice in professional play and through social…
75 FR 35689 - System Personnel Training Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-23
... using realistic simulations (Reliability Standard PER-002-0). In Order No... development process to: (1) include formal training requirements for reliability coordinators similar to those... simulation technology such as a simulator, virtual technology, or other technology in their emergency...
NASA Technical Reports Server (NTRS)
Roscoe, Stanley N.
1989-01-01
For better or worse, virtual imaging displays are with us in the form of narrow-angle combining-glass presentations, head-up displays (HUD), and head-mounted projections of wide-angle sensor-generated or computer-animated imagery (HMD). All military and civil aviation services and a large number of aerospace companies are involved in one way or another in a frantic competition to develop the best virtual imaging display system. The success or failure of major weapon systems hangs in the balance, and billions of dollars in potential business are at stake. Because of the degree to which national defense is committed to the perfection of virtual imaging displays, a brief consideration of their status, an investigation and analysis of their problems, and a search for realistic alternatives are long overdue.
3D Modelling and Mapping for Virtual Exploration of Underwater Archaeology Assets
NASA Astrophysics Data System (ADS)
Liarokapis, F.; Kouřil, P.; Agrafiotis, P.; Demesticha, S.; Chmelík, J.; Skarlatos, D.
2017-02-01
This paper investigates immersive technologies to increase exploration time at an underwater archaeological site, both for the public and for researchers and scholars. The focus is on the Mazotos shipwreck site in Cyprus, which lies 44 meters underwater. The aim of this work is two-fold: (a) realistic modelling and mapping of the site, and (b) an immersive virtual reality visit. Optical data were used for the 3D modelling and mapping. The underwater exploration scene is composed of a variety of sea elements, including plants, fish, stones, and artefacts, which are randomly positioned. Users can experience an immersive virtual underwater visit to the Mazotos shipwreck site and get information about the shipwreck and its contents, raising their archaeological knowledge and cultural awareness.
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera and a lenticular lens array used to present parallax content from different angles, is proposed. Compared with previous implementations of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can obtain abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene have realistic and obvious stereo performance.
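As a rough illustration of the multi-view idea described above, the sketch below synthesizes intermediate viewpoints from one image of a stereo pair and a disparity map by naive forward warping, then stacks the 32 views that would be interleaved behind a lenticular lens array. This is only a hedged sketch under assumed inputs: the paper's calibration, hole filling, and panel-specific subpixel interleaving are omitted, and the function names are illustrative.

```python
import numpy as np

def synthesize_view(left_img, disparity, alpha):
    """Naive forward warp of the left image by a fraction `alpha` of the stereo disparity
    (alpha = 0 gives the left view, alpha = 1 approximates the right view).
    Disocclusion holes are left as zeros; a real pipeline would inpaint them."""
    h, w = disparity.shape
    out = np.zeros_like(left_img)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.rint(xs - alpha * disparity).astype(int), 0, w - 1)
    out[ys, xt] = left_img[ys, xs]
    return out

def multiview_stack(left_img, disparity, n_views=32):
    """Generate the n_views virtual viewpoints (32 in the paper) to be interleaved
    behind the lenticular lens array; the interleaving pattern itself is panel-specific."""
    return [synthesize_view(left_img, disparity, a) for a in np.linspace(0.0, 1.0, n_views)]
```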
Simultaneous Deployment and Tracking Multi-Robot Strategies with Connectivity Maintenance
Tardós, Javier; Aragues, Rosario; Sagüés, Carlos; Rubio, Carlos
2018-01-01
Multi-robot teams composed of ground and aerial vehicles have gained attention during the last few years. We present a scenario where both types of robots must monitor the same area from different viewpoints. In this paper, we propose two Lloyd-based tracking strategies to allow the ground robots (agents) to follow the aerial ones (targets) while keeping connectivity between the agents. The first strategy establishes density functions on the environment so that the targets acquire more importance than other zones, while the second iteratively modifies the virtual limits of the working area depending on the positions of the targets. We consider connectivity maintenance because coverage tasks tend to spread the agents as far apart as possible; this is addressed by restricting their motions so that they keep the links of a minimum spanning tree of the communication graph. We provide a thorough parametric study of the performance of the proposed strategies under several simulated scenarios. In addition, the methods are implemented and tested using realistic robotic simulation environments and real experiments. PMID:29558446
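To make the first, density-based strategy concrete, the following sketch implements one Lloyd iteration in which workspace samples are weighted by Gaussian densities centered on the aerial targets, so the ground agents drift toward them. It is a simplified illustration under assumed parameters; the connectivity constraint via the minimum spanning tree of the communication graph, which the paper enforces on top of this update, is omitted.

```python
import numpy as np

def density(points, targets, sigma=1.0):
    """Density peaked at the aerial targets so ground agents are drawn toward them (Gaussian kernel assumed)."""
    d = np.zeros(len(points))
    for t in targets:
        d += np.exp(-np.sum((points - t) ** 2, axis=1) / (2 * sigma ** 2))
    return d + 1e-6  # keep strictly positive everywhere

def lloyd_step(agents, targets, workspace_samples, gain=0.5):
    """One Lloyd iteration: assign workspace samples to the nearest agent (a Monte-Carlo
    Voronoi partition) and move each agent part-way toward the density-weighted centroid of its cell."""
    w = density(workspace_samples, targets)
    owner = np.argmin(
        np.linalg.norm(workspace_samples[:, None, :] - agents[None, :, :], axis=2), axis=1)
    new_agents = agents.copy()
    for i in range(len(agents)):
        mask = owner == i
        if mask.any():
            centroid = np.average(workspace_samples[mask], axis=0, weights=w[mask])
            new_agents[i] += gain * (centroid - agents[i])
    return new_agents
```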
Automatic Perceptual Color Map Generation for Realistic Volume Visualization
Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor
2008-01-01
Advances in computed tomography imaging technology and inexpensive high-performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation, and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples shown that were created by the described algorithm, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume-rendered patient data. PMID:18430609
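One possible reading of the auto-generation idea, natural hue with grayscale-like perceptual ordering, is sketched below: scalar intensities pick a hue from a small anatomy-like palette and are then rescaled so that the resulting luminance tracks the grayscale value. The palette, luminance weights, and interpolation are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

# Hypothetical "natural" anchor colors (normalized RGB) ordered by increasing intensity:
# air/fat -> soft tissue -> muscle -> bone.
PALETTE = np.array([
    [0.10, 0.05, 0.05],
    [0.80, 0.45, 0.35],
    [0.70, 0.20, 0.20],
    [0.95, 0.92, 0.85],
])

def luminance(rgb):
    """Relative luminance (Rec. 709 weights)."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def perceptual_colormap(values):
    """Colorize normalized scalar values so that hue follows the anatomical palette
    while luminance tracks the grayscale value itself (the perceptual ordering)."""
    values = np.clip(values, 0.0, 1.0)
    pos = values * (len(PALETTE) - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, len(PALETTE) - 1)
    frac = (pos - lo)[:, None]
    base = (1 - frac) * PALETTE[lo] + frac * PALETTE[hi]   # hue interpolation
    scale = values / np.maximum(luminance(base), 1e-6)     # match luminance to the scalar
    return np.clip(base * scale[:, None], 0.0, 1.0)
```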
A Virtual Aluminum Reduction Cell
NASA Astrophysics Data System (ADS)
Zhang, Hongliang; Zhou, Chenn Q.; Wu, Bing; Li, Jie
2013-11-01
The most important component in the aluminum industry is the aluminum reduction cell; it has received considerable interest and research resources aimed at improving its productivity and energy efficiency. The current study focused on the integration of numerical simulation data and virtual reality technology to create a scientifically and practically realistic virtual aluminum reduction cell by presenting complex cell structures and physical-chemical phenomena. The multiphysical field simulation models were first built and solved in ANSYS software (ANSYS Inc., Canonsburg, PA, USA). Then, the methodology of combining the simulation results with virtual reality was introduced, and a virtual aluminum reduction cell was created. The demonstration showed that a computer-based world can be created in which people who are not analysis experts can see the detailed cell structure in a context they can easily understand. With the virtual aluminum reduction cell, even people who are familiar with aluminum reduction cell operations can gain insights that make it possible to understand the root causes of observed problems and to plan design changes in much less time.
Virtual planning for craniomaxillofacial surgery--7 years of experience.
Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo
2014-07-01
Contemporary computer-assisted surgery systems increasingly allow virtual simulation of even complex surgical procedures with ever more realistic predictions. Preoperative workflows are established and various commercial software solutions are available. The potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool were assessed retrospectively by comparing predictions with surgical results. Since 2006, virtual simulation has been performed in selected patients affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient-specific 3D models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical result, and soft tissue simulation proved to be helpful. In combination with classic 3D models showing the underlying skeletal pathology, virtual simulation improved the planning and transfer of craniomaxillofacial corrections. The additional work and expense may be justified by the increased possibilities for visualisation, information, instruction and documentation in selected craniomaxillofacial procedures. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Virtual environments for scene of crime reconstruction and analysis
NASA Astrophysics Data System (ADS)
Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon
2000-02-01
This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the Law Enforcement and Forensic communities.
Emerging CAE technologies and their role in Future Ambient Intelligence Environments
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.
2011-03-01
Dramatic improvements are on the horizon in Computer Aided Engineering (CAE) and various simulation technologies. The improvements are due, in part, to developments in a number of leading-edge technologies and their synergistic combination and convergence. The technologies include ubiquitous, cloud, and petascale computing; ultra-high-bandwidth networks; pervasive wireless communication; knowledge-based engineering; networked immersive virtual environments and virtual worlds; novel human-computer interfaces; and powerful game engines and facilities. This paper describes the frontiers and emerging simulation technologies and their role in future virtual product creation and learning/training environments. The environments will be ambient intelligence environments, incorporating a synergistic combination of novel agent-supported visual simulations (with cognitive learning and understanding abilities); immersive 3D virtual world facilities; development chain management systems and facilities (incorporating a synergistic combination of intelligent engineering and management tools); nontraditional methods; intelligent, multimodal and human-like interfaces; and mobile wireless devices. The virtual product creation environment will significantly enhance productivity and stimulate creativity and innovation in future global virtual collaborative enterprises. The facilities in the learning/training environment will provide timely, engaging, personalized/collaborative and tailored visual learning.
The Problem Patron and the Academic Library Web Site as Virtual Reference Desk.
ERIC Educational Resources Information Center
Taylor, Daniel; Porter, George S.
2002-01-01
Considers problem library patrons in a virtual environment based on experiences at California Institute of Technology's Web site and its use for virtual reference. Discusses the virtual reference desk concept; global visibility and access to the World Wide Web; problematic email; and advantages in the electronic environment. (LRW)
NASA Astrophysics Data System (ADS)
Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan
2016-10-01
Combat training is very important for the army, and simulation of the real battlefield environment is of great significance. Two-dimensional information can no longer meet current demands. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment is possible. In the simulation of the 3D battlefield environment, in addition to the terrain, combat personnel, and combat tools, the simulation of explosions, fire, smoke, and other effects is also very important, since these effects enhance the sense of realism and immersion in the 3D scene. However, these special effects are irregular objects, which makes them difficult to simulate with general geometry. Therefore, the simulation of irregular objects has always been a challenging research topic in computer graphics. Here, a particle system algorithm is used to simulate irregular objects. We design simulations of explosions, fire, and smoke based on the particle system and apply them to the battlefield 3D scene. In addition, the battlefield 3D scene is presented on a glasses-free 3D display using an algorithm based on a GPU 4K super-multiview 3D video real-time transformation method. Together with human-computer interaction, we ultimately realize a glasses-free 3D display of a more realistic and immersive simulated 3D battlefield environment.
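The particle-system approach mentioned above can be illustrated with a minimal sketch: particles are emitted with random velocities, rise under buoyancy, slow under drag, and fade over their lifetimes, which is the basic recipe behind explosion, fire, and smoke effects. The class below is a hedged, CPU-only toy with assumed parameters; the GPU-based 4K super-multiview transformation and the actual rendering are not shown.

```python
import numpy as np

class ParticleSystem:
    """Minimal particle system of the kind used for explosions, fire, and smoke:
    particles are emitted with random velocities, rise under buoyancy, and fade out over their lifetime."""

    def __init__(self, n, origin=(0.0, 0.0, 0.0), speed=3.0, life=2.0):
        rng = np.random.default_rng(0)
        self.pos = np.tile(np.asarray(origin, float), (n, 1))
        d = rng.normal(size=(n, 3))
        self.vel = speed * d / np.linalg.norm(d, axis=1, keepdims=True)  # random emission directions
        self.age = np.zeros(n)
        self.life = life * (0.5 + rng.random(n))                          # randomized lifetimes

    def step(self, dt, buoyancy=1.5, drag=0.4):
        self.pos += self.vel * dt
        self.vel += np.array([0.0, 0.0, buoyancy]) * dt   # hot gas rises
        self.vel *= max(0.0, 1.0 - drag * dt)             # air resistance
        self.age += dt

    def alpha(self):
        """Per-particle opacity used when billboarding the particles."""
        return np.clip(1.0 - self.age / self.life, 0.0, 1.0)
```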
Brundage, Shelley B; Brinton, James M; Hancock, Adrienne B
2016-12-01
Virtual reality environments (VREs) allow for immersion in speaking environments that mimic real-life interactions while maintaining researcher control. VREs have been used successfully to engender arousal in other disorders. The purpose of this study was to investigate the utility of virtual reality environments for examining physiological reactivity and subjective ratings of distress in persons who stutter (PWS). Subjective and objective measures of arousal were collected from 10 PWS during four-minute speeches to a virtual audience and to a virtual empty room. Stuttering frequency and physiological measures (skin conductance level and heart rate) did not differ across speaking conditions, but subjective ratings of distress were significantly higher in the virtual audience condition than in the virtual empty room. VREs have utility in elevating subjective ratings of distress in PWS and the potential to be useful tools for practicing treatment targets in a safe, controlled, and systematic manner. Copyright © 2016 Elsevier Inc. All rights reserved.
A Proposed Framework for Collaborative Design in a Virtual Environment
NASA Astrophysics Data System (ADS)
Breland, Jason S.; Shiratuddin, Mohd Fairuz
This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will have, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, and (3) co-existence within the same virtual space, etc. This paper also discusses proposed testing to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and presents results from a pilot test.
Toet, Alexander; van Schaik, Martin; Theunissen, Nicolet C. M.
2013-01-01
Background: Desktop virtual environments (VEs) are increasingly deployed to study the effects of environmental qualities and interventions on human behavior and safety related concerns in built environments. For these applications it is essential that users appraise the affective qualities of the VE similar to those of its real world counterpart. Previous studies have shown that factors like simulated lighting, sound and dynamic elements all contribute to the affective appraisal of a desktop VE. Since ambient odor is known to affect the affective appraisal of real environments, and has been shown to increase the sense of presence in immersive VEs, it may also be an effective tool to tune the affective appraisal of desktop VEs. This study investigated if exposure to ambient odor can modulate the affective appraisal of a desktop VE with signs of public disorder. Method: Participants explored a desktop VE representing a suburban neighborhood with signs of public disorder (neglect, vandalism and crime), while being exposed to either room air or subliminal levels of unpleasant (tar) or pleasant (cut grass) ambient odor. Whenever they encountered signs of disorder they reported their safety related concerns and associated affective feelings. Results: Signs of crime in the desktop VE were associated with negative affective feelings and concerns for personal safety and personal property. However, there was no significant difference between reported safety related concerns and affective connotations in the control (no-odor) and in each of the two ambient odor conditions. Conclusion: Ambient odor did not affect safety related concerns and affective connotations associated with signs of disorder in the desktop VE. Thus, semantic congruency between ambient odor and a desktop VE may not be sufficient to influence its affective appraisal, and a more realistic simulation in which simulated objects appear to emit scents may be required to achieve this goal. PMID:24250810
Information Visualization in Virtual Environments
NASA Technical Reports Server (NTRS)
Bryson, Steve; Kwak, Dochan (Technical Monitor)
2001-01-01
Virtual environments provide a natural setting for a wide range of information visualization applications, particularly when the information to be visualized is defined on a three-dimensional domain (Bryson, 1996). This chapter provides an overview of the issues that arise when designing and implementing an information visualization application in a virtual environment. Many of the design issues that arise, such as display and user tracking, are common to any application of virtual environments. In this chapter we focus on the issues that are specific to information visualization applications, as issues of wider concern are addressed elsewhere in this book.
ERIC Educational Resources Information Center
Hirsh, Sandra; Simmons, Michelle Holschuh; Christensen, Paul; Sellar, Melanie; Stenström, Cheryl; Hagar, Christine; Bernier, Anthony; Faires, Debbie; Fisher, Jane; Alman, Susan
2015-01-01
The IFLA Trend Report identified five trends that will impact the information environment (IFLA, 2015), such as access to information with new technologies, online education for global learning, hyper-connected communities, and the global information environment. The faculty at San José State University (SJSU) School of Information (iSchool) is…
Strategic Plans in Higher Education: Planning to Survive and Prosper in the New Economy
ERIC Educational Resources Information Center
Luu, Hung Nguyen Quoc
2006-01-01
In the era of globalization, many schools have recognized strategic planning as the key factor to enhance their organizational performance. To survive and prosper in this hyper-competitive environment, institutional leaders are to implement strategic planning to help match all activities of the school to its environment and to its resource…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiarashi, Nooshin; Nolte, Adam C.; Sturgeon, Gregory M.
Purpose: Physical phantoms are essential for the development, optimization, and evaluation of x-ray breast imaging systems. Recognizing the major effect of anatomy on image quality and clinical performance, such phantoms should ideally reflect the three-dimensional structure of the human breast. Currently, there is no commercially available three-dimensional physical breast phantom that is anthropomorphic. The authors present the development of a new suite of physical breast phantoms based on human data. Methods: The phantoms were designed to match the extended cardiac-torso virtual breast phantoms that were based on dedicated breast computed tomography images of human subjects. The phantoms were fabricated by high-resolution multimaterial additive manufacturing (3D printing) technology. The glandular equivalency of the photopolymer materials was measured relative to breast tissue-equivalent plastic materials. Based on the current state-of-the-art in the technology and available materials, two variations were fabricated. The first was a dual-material phantom, the Doublet. Fibroglandular tissue and skin were represented by the most radiographically dense material available; adipose tissue was represented by the least radiographically dense material. The second variation, the Singlet, was fabricated with a single material to represent fibroglandular tissue and skin. It was subsequently filled with adipose-equivalent materials including oil, beeswax, and permanent urethane-based polymer. Simulated microcalcification clusters were further included in the phantoms via crushed eggshells. The phantoms were imaged and characterized visually and quantitatively. Results: The mammographic projections and tomosynthesis reconstructed images of the fabricated phantoms yielded realistic breast background. The mammograms of the phantoms demonstrated close correlation with simulated mammographic projection images of the corresponding virtual phantoms. Furthermore, power-law descriptions of the phantom images were in general agreement with real human images. The Singlet approach offered more realistic contrast as compared to the Doublet approach, but at the expense of air bubbles and air pockets that formed during the filling process. Conclusions: The presented physical breast phantoms and their matching virtual breast phantoms offer realistic breast anatomy, patient variability, and ease of use, making them a potential candidate for performing both system quality control testing and virtual clinical trials.
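The power-law comparison mentioned at the end of this abstract is commonly performed on the radially averaged power spectrum of the images. The sketch below estimates the exponent beta of P(f) ~ 1/f^beta for a 2D image; it is a generic illustration of that kind of analysis, not the authors' exact procedure, and the frequency range used for the fit is an assumption.

```python
import numpy as np

def power_law_beta(image):
    """Estimate the exponent beta of the radially averaged power spectrum P(f) ~ 1/f^beta,
    the kind of power-law description used to compare phantom and patient mammograms."""
    img = image - image.mean()
    ps = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2      # 2D power spectrum
    h, w = ps.shape
    y, x = np.indices(ps.shape)
    r = np.hypot(x - w / 2, y - h / 2).astype(int)           # integer radial frequency bins
    radial = np.bincount(r.ravel(), ps.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
    f = np.arange(1, min(h, w) // 2)                          # skip DC and the corner frequencies
    slope, _ = np.polyfit(np.log(f), np.log(radial[f]), 1)    # linear fit in log-log space
    return -slope
```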
Kibria, Muhammad Golam; Ali, Sajjad; Jarwar, Muhammad Aslam; Kumar, Sunil; Chong, Ilyoung
2017-09-22
Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects across heterogeneous ontologies. In the case of similar service requests occurring at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched and matched. Their reuse avoids duplication under similar circumstances and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those for deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, and discusses similarity matching of virtual objects and composite virtual objects. This article proposes a logistic model that supports service modularity to promote reusability in the Web Objects-enabled IoT environment. The necessary functional components and a flowchart of an algorithm for reusing composite virtual objects are discussed. In addition, to realize the service modularity, a use case scenario is studied and implemented.
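One simple way to picture the similarity matching and reuse of virtual objects described here is a score over functional attributes plus a location check, with reuse when the score exceeds a threshold. The sketch below assumes a toy descriptor and a Jaccard-style score; the article's actual ontology-based matching, functional components, and algorithm flowchart are not reproduced, and all field names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """Simplified virtual-object descriptor; field names are illustrative, not the article's schema."""
    vo_id: str
    functions: set = field(default_factory=set)   # e.g. {"temperature", "humidity"}
    location: str = ""

def similarity(req_functions, req_location, vo):
    """Jaccard similarity of requested vs. offered functions, with a small bonus for a matching location."""
    inter = len(req_functions & vo.functions)
    union = len(req_functions | vo.functions) or 1
    loc_bonus = 0.2 if req_location == vo.location else 0.0
    return inter / union + loc_bonus

def find_reusable(req_functions, req_location, repository, threshold=0.8):
    """Return the best-matching existing virtual object if it is similar enough to reuse,
    instead of instantiating a new one for the service request."""
    best = max(repository, key=lambda vo: similarity(req_functions, req_location, vo), default=None)
    if best and similarity(req_functions, req_location, best) >= threshold:
        return best
    return None
```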