Sample records for real-time interactive virtual

  1. LVC interaction within a mixed-reality training system

    NASA Astrophysics Data System (ADS)

    Pollock, Brice; Winer, Eliot; Gilbert, Stephen; de la Cruz, Julio

    2012-03-01

    The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainees to interact as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to enable LVC interaction in a reconfigurable, mixed reality environment. The system was developed and tested in the Veldt, an immersive, reconfigurable, mixed reality LVC training system for the dismounted warfighter at Iowa State University (ISU), both to overcome LVC interaction challenges and to serve as a test bed for cutting-edge technology meeting future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and in-house game engines. Evaluation involving military-trained personnel found the system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real time. This system achieves rare LVC interaction within multiple physical and virtual immersive environments for training in real time across many distributed systems.
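    The record does not describe the server's wire protocol, so the following is only a minimal sketch of the kind of central relay the abstract mentions: one server object fans each live or virtual entity update out to every registered simulator. All names and fields (EntityState, CentralServer, etc.) are illustrative assumptions, not the authors' API.

      import json
      import time
      from dataclasses import dataclass, asdict

      @dataclass
      class EntityState:
          entity_id: str   # unique across live and virtual simulators (assumed)
          kind: str        # "live" or "virtual"
          position: tuple  # (x, y, z) in a shared world frame
          heading: float   # radians
          timestamp: float

      class CentralServer:
          """Relays every entity update to all registered simulators."""
          def __init__(self):
              self.subscribers = []

          def subscribe(self, callback):
              self.subscribers.append(callback)

          def publish(self, state):
              message = json.dumps(asdict(state))
              for deliver in self.subscribers:
                  deliver(message)

      server = CentralServer()
      server.subscribe(lambda msg: print("game engine received:", msg))
      server.publish(EntityState("trainee-1", "live", (3.0, 0.0, 7.5), 1.57, time.time()))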

  2. Generalized interactions using virtual tools within the spring framework: probing, piercing, cauterizing and ablating

    NASA Technical Reports Server (NTRS)

    Montgomery, Kevin; Bruyns, Cynthia D.

    2002-01-01

    We present schemes for real-time generalized interactions such as probing, piercing, cauterizing and ablating virtual tissues. These methods have been implemented in a robust, real-time (haptic rate) surgical simulation environment allowing us to model procedures including animal dissection, microsurgery, hysteroscopy, and cleft lip repair.

  3. Virtual interactive presence and augmented reality (VIPAR) for remote surgical assistance.

    PubMed

    Shenai, Mahesh B; Dillavou, Marcus; Shum, Corey; Ross, Douglas; Tubbs, Richard S; Shih, Alan; Guthrie, Barton L

    2011-03-01

    Surgery is a highly technical field that combines continuous decision-making with the coordination of spatiovisual tasks. We designed a virtual interactive presence and augmented reality (VIPAR) platform that allows a remote surgeon to deliver real-time virtual assistance to a local surgeon, over a standard Internet connection. The VIPAR system consisted of a "local" station situated over a surgical field and a "remote" station situated over a blue screen. Each station was equipped with a digital viewpiece, composed of 2 cameras for stereoscopic capture, and a high-definition viewer displaying a virtual field. The virtual field was created by digitally compositing selected elements within the remote field into the local field. The viewpieces were controlled by workstations mutually connected by the Internet, allowing virtual remote interaction in real time. Digital renderings derived from volumetric MRI were added to the virtual field to augment the surgeon's reality. For demonstration, a formalin-fixed cadaver head and neck were obtained, and a carotid endarterectomy (CEA) and pterional craniotomy were performed under the VIPAR system. The VIPAR system allowed for real-time, virtual interaction between a local (resident) and remote (attending) surgeon. In both carotid and pterional dissections, major anatomic structures were visualized and identified. Virtual interaction permitted remote instruction for the local surgeon, and MRI augmentation provided spatial guidance to both surgeons. Camera resolution, color contrast, time lag, and depth perception were identified as technical issues requiring further optimization. Virtual interactive presence and augmented reality provide a novel platform for remote surgical assistance, with multiple applications in surgical training and remote expert assistance.
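    The compositing step, selecting elements of the remote field seen against the blue screen and overlaying them onto the local field, can be illustrated with a simple chroma-key mask. This is a hedged NumPy sketch, not the VIPAR implementation; the blue-dominance threshold rule is an assumption.

      import numpy as np

      def composite_remote_into_local(local_rgb, remote_rgb, blue_threshold=1.5):
          """Keep remote pixels that are NOT blue-screen background and
          overlay them onto the local surgical field."""
          r = remote_rgb[..., 0].astype(float)
          g = remote_rgb[..., 1].astype(float)
          b = remote_rgb[..., 2].astype(float)
          # A pixel is background if blue strongly dominates red and green.
          background = b > blue_threshold * np.maximum(r, g)
          out = local_rgb.copy()
          out[~background] = remote_rgb[~background]
          return out

      local = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
      remote = np.zeros_like(local); remote[..., 2] = 255   # all blue screen
      remote[100:120, 200:220] = (200, 180, 150)            # a "hand" patch
      merged = composite_remote_into_local(local, remote)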

  4. The VIRI (Virtual, Interactive, Real-Time, Instructor-Led) Classroom: The Impact of Blended Synchronous Online Courses on Student Performance, Engagement, and Satisfaction

    ERIC Educational Resources Information Center

    Francescucci, Anthony; Foster, Mary

    2013-01-01

    Previous research on blended course offerings focuses on the addition of asynchronous online content to an existing course. While some explore synchronous communication, few control for differences between treatment groups. This study investigates the impact of teaching a blended course, using a virtual, interactive, real-time, instructor-led…

  5. Phase Transition of a Dynamical System with a Bi-Directional, Instantaneous Coupling to a Virtual System

    NASA Astrophysics Data System (ADS)

    Gintautas, Vadas; Hubler, Alfred

    2006-03-01

    As worldwide computer resources increase in power and decrease in cost, real-time simulations of physical systems are becoming increasingly prevalent, from laboratory models to stock market projections and entire "virtual worlds" in computer games. Often, these systems are meticulously designed to match real-world systems as closely as possible. We study the limiting behavior of a virtual horizontally driven pendulum coupled to its real-world counterpart, where the interaction occurs on a time scale that is much shorter than the time scale of the dynamical system. We find that if the physical parameters of the virtual system match those of the real system within a certain tolerance, there is a qualitative change in the behavior of the two-pendulum system as the strength of the coupling is increased. Applications include a new method to measure the physical parameters of a real system and the use of resonance spectroscopy to refine a computer model. As virtual systems better approximate real ones, even very weak interactions may produce unexpected and dramatic behavior. This research is supported by National Science Foundation Grants No. PHY 01-40179, DMS 03-25939 ITR, and DGE 03-38215.
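    As a rough numerical illustration of the setup (not the authors' code), one can integrate two horizontally driven pendulums with slightly mismatched parameters and a bi-directional, spring-like coupling of strength k, then sweep k to look for the qualitative change reported above. The equations and all parameter values below are assumptions for demonstration.

      import numpy as np

      def step(theta, omega, t, theta_other, params, k, dt=1e-3):
          """One Euler step of a horizontally driven pendulum, with an
          instantaneous spring-like coupling to the other pendulum's angle."""
          g_over_l, damping, amp, freq = params
          accel = (-g_over_l * np.sin(theta) - damping * omega
                   + amp * np.cos(freq * t) * np.cos(theta)
                   + k * (theta_other - theta))
          return theta + omega * dt, omega + accel * dt

      real = dict(params=(9.81 / 0.25, 0.05, 2.0, 6.0), theta=0.1, omega=0.0)
      virt = dict(params=(9.81 / 0.26, 0.05, 2.0, 6.0), theta=0.0, omega=0.0)  # ~4% mismatch
      k, t = 0.5, 0.0                     # sweep k to probe the transition
      for _ in range(50000):
          r_th, r_om = step(real["theta"], real["omega"], t, virt["theta"], real["params"], k)
          v_th, v_om = step(virt["theta"], virt["omega"], t, real["theta"], virt["params"], k)
          real["theta"], real["omega"] = r_th, r_om
          virt["theta"], virt["omega"] = v_th, v_om
          t += 1e-3
      print("final angle difference:", real["theta"] - virt["theta"])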

  6. Shared virtual environments for telerehabilitation.

    PubMed

    Popescu, George V; Burdea, Grigore; Boian, Rares

    2002-01-01

    Current VR telerehabilitation systems use offline remote monitoring from the clinic and patient-therapist videoconferencing. Such "store and forward" and video-based systems cannot implement medical services involving direct patient-therapist interaction. Real-time telerehabilitation applications (including remote therapy) can be developed using a shared Virtual Environment (VE) architecture. We developed a two-user shared VE for hand telerehabilitation. Each site has a telerehabilitation workstation with a video camera and a Rutgers Master II (RMII) force feedback glove. Each user can control a virtual hand and interact haptically with virtual objects. Simulated physical interactions between therapist and patient are implemented using hand force feedback. The therapist's graphic interface contains several virtual panels, which allow control over the rehabilitation process. These controls start a videoconferencing session, collect patient data, or apply therapy. Several experimental telerehabilitation scenarios were successfully tested on a LAN. A Web-based approach to "real-time" patient telemonitoring--the monitoring portal for hand telerehabilitation--was also developed. The therapist interface is implemented as a Java3D applet that monitors patient hand movement. The monitoring portal achieves real-time performance on off-the-shelf desktop workstations.

  7. Parallel-distributed mobile robot simulator

    NASA Astrophysics Data System (ADS)

    Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo

    1996-06-01

    The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function we propose is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation be time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.

  8. Computer-Assisted Culture Learning in an Online Augmented Reality Environment Based on Free-Hand Gesture Interaction

    ERIC Educational Resources Information Center

    Yang, Mau-Tsuen; Liao, Wan-Che

    2014-01-01

    The physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can be used to seamlessly merge virtual objects with real-world images to realize immersions. Additionally, computer vision (CV) technology can recognize free-hand gestures from live images to enable…

  9. Innovative application of virtual display technique in virtual museum

    NASA Astrophysics Data System (ADS)

    Zhang, Jiankang

    2017-09-01

    A virtual museum displays and simulates the functions of a real museum on the Internet in the form of 3D virtual reality through interactive programs. Based on the Virtual Reality Modeling Language (VRML), building a virtual museum and achieving effective interaction with the offline museum depend on making full use of 3D panorama, virtual reality and augmented reality techniques, and on innovatively applying dynamic environment modeling, real-time 3D graphics generation, system integration and other key virtual reality techniques in the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography or virtual reality, is based on static images of reality. The virtual reality technique is a computer simulation system that creates an interactive 3D dynamic visual world that can be experienced. Augmented reality, also known as mixed reality, simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience in reality. Together these technologies make the virtual museum possible. It will not only bring better experience and convenience to the public, but will also help improve the influence and cultural functions of the real museum.

  10. Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.

    PubMed

    Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz

    2015-01-01

    This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray-casting-based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this end, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
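    The translation of the haptic device position into reference-CT space can be pictured as a displacement-field lookup followed by a subtraction. This is a minimal sketch under the assumptions of a regular voxel grid and nearest-voxel sampling; the paper's motion models are more sophisticated.

      import numpy as np

      def to_reference_space(device_pos_mm, displacement_field, voxel_mm):
          """Map a haptic device position in the animated (breathing) anatomy
          back into the static reference CT by subtracting the local
          displacement vector, looked up at the nearest voxel."""
          idx = np.round(np.asarray(device_pos_mm) / voxel_mm).astype(int)
          idx = np.clip(idx, 0, np.array(displacement_field.shape[:3]) - 1)
          return np.asarray(device_pos_mm) - displacement_field[tuple(idx)]

      # Toy field: 64^3 voxels, 2 mm spacing, everything shifted 5 mm in z.
      field = np.zeros((64, 64, 64, 3))
      field[..., 2] = 5.0
      print(to_reference_space((30.0, 40.0, 50.0), field, voxel_mm=2.0))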

  11. CPU-GPU mixed implementation of virtual node method for real-time interactive cutting of deformable objects using OpenCL.

    PubMed

    Jia, Shiyu; Zhang, Weizhong; Yu, Xiaokang; Pan, Zhenkuan

    2015-09-01

    Surgical simulators need to simulate interactive cutting of deformable objects in real time. The goal of this work was to design an interactive cutting algorithm that eliminates traditional cutting state classification and can work simultaneously with real-time GPU-accelerated deformation without affecting its numerical stability. A modified virtual node method for cutting is proposed. The deformable object is modeled as a real tetrahedral mesh embedded in a virtual tetrahedral mesh; the former is used for graphics rendering and collision, while the latter is used for deformation. The cutting algorithm first subdivides real tetrahedrons to eliminate all face and edge intersections, then splits faces, edges and vertices along the cutting tool trajectory to form cut surfaces. Next, virtual tetrahedrons containing more than one connected real tetrahedral fragment are duplicated, and connectivity between virtual tetrahedrons is updated. Finally, the embedding relationship between the real and virtual tetrahedral meshes is updated. The co-rotational linear finite element method is used for deformation. Cutting and collision are processed by the CPU, while deformation is carried out by the GPU using OpenCL. Efficiency of the GPU-accelerated deformation algorithm was tested using block models with varying numbers of tetrahedrons. Effectiveness of our cutting algorithm under multiple cuts and self-intersecting cuts was tested using a block model and a cylinder model. Cutting of a more complex liver model was performed, and detailed performance characteristics of cutting, deformation and collision were measured and analyzed. Our cutting algorithm can produce continuous cut surfaces where the traditional minimal element creation algorithm fails. Our GPU-accelerated deformation algorithm remains stable with constant time step under multiple arbitrary cuts and works on both NVIDIA and AMD GPUs. The GPU-CPU speed ratio can be as high as 10 for models with 80,000 tetrahedrons. Forty to sixty percent real-time performance and a 100-200 Hz simulation rate are achieved for the liver model with 3,101 tetrahedrons. The major bottlenecks for simulation efficiency are cutting, collision processing and CPU-GPU data transfer. Future work needs to improve in these areas.
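    The embedding relationship between the real (render/collision) mesh and the virtual (simulation) mesh rests on barycentric coordinates: each real-mesh vertex stores its weights inside a virtual tetrahedron and moves with it. A minimal sketch of that relationship (illustrative, not the paper's code):

      import numpy as np

      def barycentric(p, tet):
          """Barycentric coordinates of point p inside tetrahedron tet (4x3)."""
          a, b, c, d = tet
          T = np.column_stack((b - a, c - a, d - a))
          w = np.linalg.solve(T, p - a)
          return np.array([1.0 - w.sum(), w[0], w[1], w[2]])

      def follow_deformation(bary, deformed_tet):
          """A real-mesh vertex moves with the virtual tetrahedron it is
          embedded in: position = barycentric weights times deformed corners."""
          return bary @ deformed_tet

      rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
      bary = barycentric(np.array([0.25, 0.25, 0.25]), rest)
      moved = rest + np.array([0.0, 0.0, 0.1])       # rigid shift of the virtual tet
      print(follow_deformation(bary, moved))          # -> [0.25, 0.25, 0.35]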

  12. Direct manipulation of virtual objects

    NASA Astrophysics Data System (ADS)

    Nguyen, Long K.

    Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.

  13. Distribution Locational Real-Time Pricing Based Smart Building Control and Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Jun; Dai, Xiaoxiao; Zhang, Yingchen

    This paper proposes a real-virtual parallel computing scheme for smart building operations aiming at augmenting overall social welfare. The University of Denver's campus power grid and Ritchie fitness center are used to demonstrate the proposed approach. An artificial virtual system is built in parallel to the real physical system to evaluate the overall social cost of building operation, based on a social-science-based working productivity model, a numerical-experiment-based building energy consumption model and a power-system-based real-time pricing mechanism. Through interactive feedback exchanged between the real and virtual systems, enlarged social welfare, including monetary cost reduction and energy saving as well as working productivity improvements, can be achieved.

  14. A 3D character animation engine for multimodal interaction on mobile devices

    NASA Astrophysics Data System (ADS)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

    Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques make it possible to overcome these issues, guaranteeing smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted at the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive storytellers), navigation aids (virtual guides to web sites and mobile services), newscasting (virtual newscasters) and education (interactive virtual teachers).

  15. Real-time functional magnetic imaging-brain-computer interface and virtual reality: promising tools for the treatment of pedophilia.

    PubMed

    Renaud, Patrice; Joyal, Christian; Stoleru, Serge; Goyette, Mathieu; Weiskopf, Nikolaus; Birbaumer, Niels

    2011-01-01

    This chapter proposes a prospective view on using a real-time functional magnetic resonance imaging (rt-fMRI) brain-computer interface (BCI) application as a new treatment for pedophilia. Neurofeedback mediated by interactive virtual stimuli is presented as the key process in this new BCI application. Results on the diagnostic discriminant power of virtual characters depicting sexual stimuli relevant to pedophilia are given. Finally, practical and ethical implications are briefly addressed.

  16. Tangible display systems: bringing virtual surfaces into the real world

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2012-03-01

    We are developing tangible display systems that enable natural interaction with virtual surfaces. Tangible display systems are based on modern mobile devices that incorporate electronic image displays, graphics hardware, tracking systems, and digital cameras. Custom software allows the orientation of a device and the position of the observer to be tracked in real-time. Using this information, realistic images of surfaces with complex textures and material properties, illuminated by environment-mapped lighting, can be rendered to the screen at interactive rates. Tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. In this way, tangible displays allow virtual surfaces to be observed and manipulated as naturally as real ones, with the added benefit that surface geometry and material properties can be modified in real-time. We demonstrate the utility of tangible display systems in four application areas: material appearance research; computer-aided appearance design; enhanced access to digital library and museum collections; and new tools for digital artists.
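    The view-dependent rendering can be pictured as re-shading every frame from the tracked device orientation (surface normal) and observer position (view direction). A toy Blinn-Phong shade, standing in for the paper's environment-mapped renderer, is sketched below; all parameter values are illustrative assumptions.

      import numpy as np

      def shade(normal, view_dir, light_dir, albedo, shininess=32.0):
          """Blinn-Phong shade for one surface point, re-evaluated every frame
          as the device tilts (normal) and the observer moves (view_dir)."""
          n = np.asarray(normal, float); n /= np.linalg.norm(n)
          v = np.asarray(view_dir, float); v /= np.linalg.norm(v)
          l = np.asarray(light_dir, float); l /= np.linalg.norm(l)
          h = l + v; h /= np.linalg.norm(h)
          diffuse = max(0.0, float(n @ l))
          specular = max(0.0, float(n @ h)) ** shininess
          return tuple(min(1.0, a * diffuse + specular) for a in albedo)

      # Tilting the device changes the normal and thus the rendered colour.
      print(shade((0, 0, 1), (0, 0, 1), (0, 0.3, 1), (0.6, 0.4, 0.3)))
      print(shade((0, 0.4, 1), (0, 0, 1), (0, 0.3, 1), (0.6, 0.4, 0.3)))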

  17. Web GIS in practice V: 3-D interactive and real-time mapping in Second Life

    PubMed Central

    Boulos, Maged N Kamel; Burden, David

    2007-01-01

    This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy. PMID:18042275
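    Plotting a geographic feed onto an in-world globe reduces to converting latitude/longitude to a point on a sphere in region coordinates. A minimal sketch of that conversion; the globe radius and center are assumed values, not Daden's:

      import math

      def latlon_to_globe(lat_deg, lon_deg, radius=1.0, center=(128.0, 128.0, 30.0)):
          """Map a geographic point onto a sphere ('globe') in region coordinates."""
          lat, lon = math.radians(lat_deg), math.radians(lon_deg)
          x = radius * math.cos(lat) * math.cos(lon)
          y = radius * math.cos(lat) * math.sin(lon)
          z = radius * math.sin(lat)
          return (center[0] + x, center[1] + y, center[2] + z)

      print(latlon_to_globe(51.5, -0.13))    # a London pin on the in-world globe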

  18. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    DTIC Science & Technology

    2017-08-01

    Keywords: visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing. Only fragments of the abstract survive in this record: they mention user-friendly software and hardware setup, the need for scientists to be able to perform their usual tasks, and the largely different research priorities of the VR and scientific visualization communities, with the VR community emphasizing support for real-time user interaction.

  19. Realistic Real-Time Outdoor Rendering in Augmented Reality

    PubMed Central

    Kolivand, Hoshang; Sunar, Mohd Shahrizal

    2014-01-01

    Realistic rendering for outdoor Augmented Reality (AR) has been an attractive topic for the last two decades, as the sizeable number of publications in computer graphics attests. Realistic virtual objects in outdoor AR systems require sophisticated effects such as shadows, daylight and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which do not run in real time, so the problem remains, especially in outdoor rendering. This paper proposes a new technique to achieve realistic real-time outdoor rendering, taking into account the interaction between sky colours and objects in AR systems with respect to shadows at any specific location, date and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, sky colour is generated with respect to the position of the sun. Second, shadows are generated with the Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps) algorithm. Last, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus solving the problem of realistic AR systems. PMID:25268480
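    As a hedged illustration of the first phase, sky colour driven by sun position, one can compute a crude solar elevation for a given hour, latitude and declination and blend horizon and zenith colours with it. The formulas below are the standard solar-elevation approximation and a simple linear colour ramp, not the paper's (more sophisticated) model.

      import math

      def sun_elevation(hour, latitude_deg, declination_deg):
          """Crude solar elevation angle from hour angle, latitude, declination."""
          H = math.radians(15.0 * (hour - 12.0))            # hour angle
          lat = math.radians(latitude_deg)
          dec = math.radians(declination_deg)
          sin_e = (math.sin(lat) * math.sin(dec)
                   + math.cos(lat) * math.cos(dec) * math.cos(H))
          return math.degrees(math.asin(sin_e))

      def sky_color(elevation_deg):
          """Blend horizon orange into zenith blue as the sun climbs."""
          t = max(0.0, min(1.0, elevation_deg / 60.0))
          horizon, zenith = (0.95, 0.55, 0.25), (0.25, 0.45, 0.85)
          return tuple((1 - t) * h + t * z for h, z in zip(horizon, zenith))

      print(sky_color(sun_elevation(15.0, 40.0, 10.0)))   # mid-afternoon sky tint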

  1. Evaluation of Wearable Haptic Systems for the Fingers in Augmented Reality Applications.

    PubMed

    Maisto, Maurizio; Pacchierotti, Claudio; Chinello, Francesco; Salvietti, Gionata; De Luca, Alessandro; Prattichizzo, Domenico

    2017-01-01

    Although Augmented Reality (AR) has been around for almost five decades, only recently have we witnessed AR systems and applications entering our everyday lives. Representative examples of this technological revolution are the smartphone games "Pokémon GO" and "Ingress" and the Google Translate real-time sign interpretation app. Even if AR applications are already quite compelling and widespread, users are still not able to physically interact with the computer-generated reality. In this respect, wearable haptics can provide the compelling illusion of touching the superimposed virtual objects without constraining the motion or the workspace of the user. In this paper, we present the experimental evaluation of two wearable haptic interfaces for the fingers in three AR scenarios, enrolling 38 participants. In the first experiment, subjects were requested to write on a virtual board using a real chalk. The haptic devices provided the interaction forces between the chalk and the board. In the second experiment, subjects were asked to pick and place virtual and real objects. The haptic devices provided the interaction forces due to the weight of the virtual objects. In the third experiment, subjects were asked to balance a virtual sphere on a real cardboard. The haptic devices provided the interaction forces due to the weight of the virtual sphere rolling on the cardboard. Providing haptic feedback through the considered wearable devices significantly improved performance in all the considered tasks. Moreover, subjects significantly preferred conditions providing wearable haptic feedback.

  2. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen an impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment researchers are presented with the visual data in a virtual environment, whereas in a purely AR application a virtual object is projected into the real world, with which researchers can interact. There are several limitations to purely VR or AR applications when taken within the context of remote planetary exploration. For example, in a purely VR environment, contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images, using image processing techniques to generate the 3D mesh data that populates the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames lack 3D visual information, i.e. depth. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real time into the virtual environment. Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
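    Preserving depth from the stereoscopic feed relies on the standard relation between disparity and metric depth for a rectified camera pair, Z = f * B / d. A minimal sketch with assumed camera parameters (not the mission hardware's):

      def depth_from_disparity(disparity_px, focal_px, baseline_m):
          """Metric depth from a rectified stereo pair: Z = f * B / d."""
          if disparity_px <= 0:
              raise ValueError("disparity must be positive for a visible point")
          return focal_px * baseline_m / disparity_px

      # 6 cm baseline, 700 px focal length, 20 px disparity -> 2.1 m away.
      print(depth_from_disparity(20.0, 700.0, 0.06))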

  3. A virtual reality-based system integrated with fmri to study neural mechanisms of action observation-execution: A proof of concept study

    PubMed Central

    Adamovich, S.V.; August, K.; Merians, A.; Tunik, E.

    2017-01-01

    Purpose: Emerging evidence shows that interactive virtual environments (VEs) may be a promising tool for studying sensorimotor processes and for rehabilitation. However, the potential of VEs to recruit action observation-execution neural networks is largely unknown. For the first time, a functional MRI-compatible virtual reality system (VR) has been developed to provide a window into studying brain-behavior interactions. This system is capable of measuring the complex span of hand-finger movements and simultaneously streaming this kinematic data to control the motion of representations of human hands in virtual reality. Methods: In a blocked fMRI design, thirteen healthy subjects observed, with the intent to imitate (OTI), finger sequences performed by the virtual hand avatar seen in 1st person perspective and animated by pre-recorded kinematic data. Following this, subjects imitated the observed sequence while viewing the virtual hand avatar animated by their own movement in real-time. These blocks were interleaved with rest periods during which subjects viewed static virtual hand avatars and control trials in which the avatars were replaced with moving non-anthropomorphic objects. Results: We show three main findings. First, both observation with intent to imitate and imitation with real-time virtual avatar feedback were associated with activation in a distributed frontoparietal network typically recruited for observation and execution of real-world actions. Second, we noted a time-variant increase in activation in the left insular cortex for observation with intent to imitate actions performed by the virtual avatar. Third, imitation with virtual avatar feedback (relative to the control condition) was associated with a localized recruitment of the angular gyrus, precuneus, and extrastriate body area, regions which are (along with the insular cortex) associated with the sense of agency. Conclusions: Our data suggest that the virtual hand avatars may have served as disembodied training tools in the observation condition and as embodied "extensions" of the subject's own body (pseudo-tools) in the imitation condition. These data advance our understanding of brain-behavior interactions when performing actions in VEs and have implications for the development of observation- and imitation-based VR rehabilitation paradigms. PMID:19531876

  4. Real Time Computer Graphics From Body Motion

    NASA Astrophysics Data System (ADS)

    Fisher, Scott; Marion, Ann

    1983-10-01

    This paper focuses on the recent emergence and development of real-time, computer-aided body tracking technologies and their use in combination with various computer graphics imaging techniques. The convergence of these technologies in our research results in an interactive display environment in which multiple representations of a given body motion can be displayed in real time. Specific reference to entertainment applications is described in the development of a real-time, interactive stage set in which dancers can 'draw' with their bodies as they move through the space of the stage or manipulate virtual elements of the set with their gestures.

  5. Real-time, interactive, visually updated simulator system for telepresence

    NASA Technical Reports Server (NTRS)

    Schebor, Frederick S.; Turney, Jerry L.; Marzwell, Neville I.

    1991-01-01

    Time delays and limited sensory feedback of remote telerobotic systems tend to disorient teleoperators and dramatically decrease the operator's performance. To remove the effects of time delays, key components of a prototype forward simulation subsystem, the Global-Local Environment Telerobotic Simulator (GLETS), which buffers the operator from the remote task, were designed and developed. GLETS totally immerses an operator in a real-time, interactive, simulated, visually updated artificial environment of the remote telerobotic site. Using GLETS, the operator will, in effect, enter into a telerobotic virtual reality and can easily form a gestalt of the virtual 'local site' that matches the operator's normal interactions with the remote site. In addition to its use in space-based telerobotics, GLETS, due to its extendable architecture, can also be used in other teleoperational environments such as toxic material handling, construction, and undersea exploration.

  6. Interactions with Virtual People: Do Avatars Dream of Digital Sheep?. Chapter 6

    NASA Technical Reports Server (NTRS)

    Slater, Mel; Sanchez-Vives, Maria V.

    2007-01-01

    This paper explores another form of artificial entity, one without physical embodiment. We use "virtual characters" as the name for a type of interactive object that has become familiar in computer games and within virtual reality applications, and we refer to these as avatars: three-dimensional graphical objects in more-or-less human form which can interact with humans. Sometimes such avatars are representations of real humans who are interacting together within a shared networked virtual environment; other times they are entirely computer-generated characters. Unlike other authors, who reserve the term agent for entirely computer-generated characters and avatar for virtual embodiments of real people, we use the same term for both. This is because avatars and agents are on a continuum. The question is: where does their behaviour originate? At the extremes the behaviour is either completely computer generated or comes only from tracking of a real person. However, not every aspect of a real person can be tracked: every eyebrow move, every blink, every breath. Rather, real tracking data would be supplemented by inferred behaviours, programmed on the basis of the available information as to what the real human is doing and her/his underlying emotional and psychological state. Hence there is always some programmed behaviour; it is only a matter of how much. In any case the same underlying problem remains: how can the human character be portrayed in such a manner that its actions are believable and have an impact on the real people with whom it interacts? This paper has three main parts. In the first part we review some evidence suggesting that humans react with appropriate affect in their interactions with virtual human characters, or with other humans who are represented as avatars, in spite of the fact that the representational fidelity is relatively low. Our evidence comes from the realm of psychotherapy, where virtual social situations are created that test whether people react appropriately within these situations. We also consider some experiments on face-to-face virtual communications between people in the same shared virtual environments. The second part tries to give some clues about why this might happen, taking into account modern theories of perception from neuroscience. The third part includes some speculations about the future development of the relationship between people and virtual people. We suggest that a more likely scenario than the world becoming populated by physically embodied virtual people (robots, androids) is that in the relatively near future we will interact more and more in our everyday lives with virtual people: bank managers, shop assistants, instructors, and so on. What is happening in the movies with computer-graphics-generated individuals and entire crowds may move into the space of everyday life.

  7. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments

    NASA Astrophysics Data System (ADS)

    Portalés, Cristina; Lerma, José Luis; Navarro, Santiago

    2010-01-01

    Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction, far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated into real (physical) urban worlds. The augmented environment presented herein requires a video see-through head mounted display (HMD) for visualization, whereas the user's movement through the real world is tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There remain, however, some software and complexity issues, which are discussed in the paper.

  8. Teaching Basic Field Skills Using Screen-Based Virtual Reality Landscapes

    NASA Astrophysics Data System (ADS)

    Houghton, J.; Robinson, A.; Gordon, C.; Lloyd, G. E. E.; Morgan, D. J.

    2016-12-01

    We are using screen-based virtual reality landscapes, created using the Unity 3D game engine, to augment the training geoscience students receive in preparation for fieldwork. Students explore these landscapes as they would real ones, interacting with virtual outcrops to collect data, determine location, and map the geology. Skills for conducting field geological surveys - collecting, plotting and interpreting data; time management and decision making - are introduced interactively and intuitively. As with real landscapes, the virtual landscapes are open-ended terrains with embedded data. The game does not structure student interaction with the information; it is through experience that students learn the best methods to work successfully and efficiently. These virtual landscapes are not replacements for geological fieldwork, but rather virtual spaces between classroom and field in which to train and reinforce essential skills. Importantly, these virtual landscapes offer accessible parallel provision for students unable to visit, or fully partake in visiting, the field. The project has received positive feedback from both staff and students. Results show students find it easier to focus on learning these basic field skills in a classroom rather than a field setting, and they make the same mistakes as when learning in the field, validating the realistic nature of the virtual experience and providing the opportunity to learn from these mistakes. The approach also saves time, and therefore resources, in the field, as basic skills are already embedded. 70% of students report increased confidence in how to map boundaries and 80% have found the virtual training a useful experience. We are also developing landscapes based on real places with 3D photogrammetric outcrops, and a virtual urban landscape in which Engineering Geology students can conduct a site investigation. This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all our virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.

  9. The human dynamic clamp as a paradigm for social interaction.

    PubMed

    Dumas, Guillaume; de Guzman, Gonzalo C; Tognoli, Emmanuelle; Kelso, J A Scott

    2014-09-02

    Social neuroscience has called for new experimental paradigms aimed toward real-time interactions. A distinctive feature of interactions is mutual information exchange: One member of a pair changes in response to the other while simultaneously producing actions that alter the other. Combining mathematical and neurophysiological methods, we introduce a paradigm called the human dynamic clamp (HDC), to directly manipulate the interaction or coupling between a human and a surrogate constructed to behave like a human. Inspired by the dynamic clamp used so productively in cellular neuroscience, the HDC allows a person to interact in real time with a virtual partner itself driven by well-established models of coordination dynamics. People coordinate hand movements with the visually observed movements of a virtual hand, the parameters of which depend on input from the subject's own movements. We demonstrate that HDC can be extended to cover a broad repertoire of human behavior, including rhythmic and discrete movements, adaptation to changes of pacing, and behavioral skill learning as specified by a virtual "teacher." We propose HDC as a general paradigm, best implemented when empirically verified theoretical or mathematical models have been developed in a particular scientific field. The HDC paradigm is powerful because it provides an opportunity to explore parameter ranges and perturbations that are not easily accessible in ordinary human interactions. The HDC not only enables testing the veracity of theoretical models, it also illuminates features that are not always apparent in real-time human social interactions and the brain correlates thereof.
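    The "well-established models of coordination dynamics" behind such virtual partners are exemplified by the Haken-Kelso-Bunz (HKB) relative-phase equation; the sketch below integrates it to show the in-phase attractor that underlies coordination. Parameter values are illustrative assumptions, and this is not the authors' implementation.

      import numpy as np

      def simulate_hkb(phi0=1.0, delta_omega=0.0, a=1.0, b=0.5, dt=0.01, steps=2000):
          """Euler-integrate the HKB relative-phase equation:
          dphi/dt = delta_omega - a*sin(phi) - 2*b*sin(2*phi)."""
          phi = np.empty(steps)
          phi[0] = phi0
          for i in range(1, steps):
              dphi = (delta_omega - a * np.sin(phi[i - 1])
                      - 2 * b * np.sin(2 * phi[i - 1]))
              phi[i] = phi[i - 1] + dphi * dt
          return phi

      trajectory = simulate_hkb()
      print("relative phase settles near:", trajectory[-1])   # ~0: in-phase attractor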

  10. Advanced Technology for Portable Personal Visualization.

    DTIC Science & Technology

    1992-06-01

    Only fragments of this progress report (January-June 1992) survive in the source record. Recoverable topics include interactive radiosity, virtual-environment ultrasound, and planned work items: extending the system with support for textures, model partitioning, more complex radiosity emitters, and the replacement of model parts with objects from the project's model libraries; adding real-time, interactive radiosity to the display program on Pixel-Planes 5; and moving the real-time model mesh-generation to...

  11. Virtual Teams and International Business Teaching and Learning: The Case of the Global Enterprise Experience (GEE)

    ERIC Educational Resources Information Center

    Gonzalez-Perez, Maria Alejandra; Velez-Calle, Andres; Cathro, Virginia; Caprar, Dan V.; Taras, Vasyl

    2014-01-01

    The increasing importance of global virtual teams in business is reflected in the classroom by the increased adoption of activities that facilitate real-time cross-cultural interaction. This article documents the experience of students from two Colombian universities who participated in a collaborative international project using virtual teams as…

  12. Virtual reality welder training

    NASA Astrophysics Data System (ADS)

    White, Steven A.; Reiners, Dirk; Prachyabrued, Mores; Borst, Christoph W.; Chambers, Terrence L.

    2010-01-01

    This document describes the Virtual Reality Simulated MIG Lab (sMIG), a system for Virtual Reality welder training. It is designed to reproduce the experience of metal inert gas (MIG) welding faithfully enough to be used as a teaching tool for beginning welding students. To make the experience as realistic as possible it employs physically accurate and tracked input devices, a real-time welding simulation, real-time sound generation and a 3D display for output. Thanks to being a fully digital system it can go beyond providing just a realistic welding experience by giving interactive and immediate feedback to the student to avoid learning wrong movements from day 1.

  13. HVS: an image-based approach for constructing virtual environments

    NASA Astrophysics Data System (ADS)

    Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao

    1998-09-01

    Virtual Reality systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by a camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) imagery and SPOT HRV imagery. They are digitally warped on-the-fly to simulate walking forward/backward, turning left/right and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in supporting walking forward/backward.
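    The forward/backward walking effect can be approximated by a centre-crop-and-rescale warp of the current frame; this is presumably far simpler than HVS's actual warping, but it conveys the idea. A sketch with nearest-neighbour resampling:

      import numpy as np

      def warp_forward(frame, step=1.1):
          """Approximate one step forward by centre-cropping the frame and
          rescaling it back to full size (nearest-neighbour resampling)."""
          h, w = frame.shape[:2]
          ch, cw = int(h / step), int(w / step)
          y0, x0 = (h - ch) // 2, (w - cw) // 2
          crop = frame[y0:y0 + ch, x0:x0 + cw]
          ys = (np.arange(h) * ch / h).astype(int)
          xs = (np.arange(w) * cw / w).astype(int)
          return crop[ys][:, xs]

      frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
      closer = warp_forward(frame)          # same size, scene appears nearer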

  14. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    PubMed Central

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
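    The occlusion-handling step itself amounts to a masked copy: every pixel inside the tracked object's contour is redrawn from the camera frame over the augmented image. A minimal sketch, with the mask (i.e. the tracking result) assumed given:

      import numpy as np

      def handle_occlusion(camera_frame, augmented_frame, object_mask):
          """Redraw the tracked real object's pixels on top of the augmented
          image so the real object correctly occludes the virtual one."""
          out = augmented_frame.copy()
          out[object_mask] = camera_frame[object_mask]
          return out

      camera = np.zeros((240, 320, 3), np.uint8)
      augmented = camera.copy(); augmented[50:200, 50:300] = (0, 255, 0)  # virtual quad
      mask = np.zeros((240, 320), bool); mask[100:150, 100:150] = True    # tracked contour
      result = handle_occlusion(camera, augmented, mask)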

  15. Coupled auralization and virtual video for immersive multimedia displays

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
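    The synthesis of direct sound plus early reflections can be sketched as a bank of delayed, attenuated copies of the dry signal (an image-source-style model). The delays and gains below are made-up values, not the installation's measured ones:

      import numpy as np

      def render_early_reflections(dry, fs, reflections):
          """Sum the direct sound with delayed, attenuated copies, one per
          early reflection given as (delay_seconds, gain)."""
          max_delay = max(d for d, _ in reflections)
          out = np.zeros(len(dry) + int(fs * max_delay) + 1)
          out[:len(dry)] += dry                       # direct path
          for delay, gain in reflections:
              n = int(fs * delay)
              out[n:n + len(dry)] += gain * dry
          return out

      fs = 44100
      dry = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)      # 1 s test tone
      wet = render_early_reflections(dry, fs, [(0.012, 0.6), (0.023, 0.4)])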

  16. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only represent real-world objects naturally, realistically and vividly, but can also expand the real campus in time and space, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land and other objects. Dynamic interactive functions are then realized by programming the object models from 3ds Max with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on optimization strategies for a variety of real-time processing techniques in the scene design process. The approach preserves texture map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  17. Emerging Conceptual Understanding of Complex Astronomical Phenomena by Using a Virtual Solar System

    ERIC Educational Resources Information Center

    Gazit, Elhanan; Yair, Yoav; Chen, David

    2005-01-01

    This study describes high school students' conceptual development of the basic astronomical phenomena during real-time interactions with a Virtual Solar System (VSS). The VSS is a non-immersive virtual environment which has a dynamic frame of reference that can be altered by the user. Ten 10th grade students were given tasks containing a set of…

  19. Hybrid Reality Lab Capabilities - Video 2

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco J.; Noyes, Matthew

    2016-01-01

    Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA and during teleoperation of remote assets (arms, rovers, robots, etc.), among other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), the information used to create the virtual scenes can be old (i.e. visualized long after physical objects moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g. camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, by creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, individuals begin to interact with all the objects in the scene as if they were real-life objects. The ability to physically touch and interact with digitally created objects that have the same shape, size, and location as their physical counterparts in the virtual reality environment can be a game changer when it comes to training, planning, engineering analysis, science, entertainment, etc. Our project is developing such capabilities for various types of environments. The video accompanying this abstract is a representation of an ISS Hybrid Reality experience. In the video you can see various Hybrid Reality elements that provide immersion beyond standard Virtual Reality or Augmented Reality.
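
    The video abstract does not detail how virtual objects and their physical counterparts are placed "in the same coordinate system"; one standard way to register a tracker frame to a virtual scene frame, sketched here as an assumption with NumPy, is a least-squares rigid (Kabsch) alignment over corresponding calibration points:

        import numpy as np

        def rigid_align(P, Q):
            # Least-squares rotation R and translation t with R @ P[i] + t ~= Q[i].
            # P: Nx3 points in the physical tracker frame.
            # Q: Nx3 corresponding points in the virtual scene frame.
            cp, cq = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cp).T @ (Q - cq)                  # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                         # proper rotation, det = +1
            return R, cq - R @ cp

        # Hypothetical calibration: markers touched in both frames.
        P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
        Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
        Q = P @ Rz.T + [2.0, 0.0, 0.0]
        R, t = rigid_align(P, Q)
        print(np.allclose(R @ P[1] + t, Q[1]))         # True: frames registered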

  20. EduMOOs: Virtual Learning Centers.

    ERIC Educational Resources Information Center

    Woods, Judy C.

    1998-01-01

    Multi-user Object Oriented Internet activities (MOOs) permit real time interaction in a text-based virtual reality via the Internet. This article explains EduMOOs (educational MOOs) and provides brief descriptions, World Wide Web addresses, and telnet addresses for selected EduMOOs. Instructions for connecting to a MOO and a list of related Web…

  1. Utilization of virtual reality for endotracheal intubation training.

    PubMed

    Mayrose, James; Kesavadas, T; Chugh, Kevin; Joshi, Dhananjay; Ellis, David G

    2003-10-01

    Tracheal intubation is performed for urgent airway control in injured patients. Current methods of training include working on cadavers and manikins, which lack the realism of a living human being. Work in this field has been limited due to the complex nature of simulating, in real time, the interactive forces and deformations which occur during an actual patient intubation. This study addressed the issue of intubation training in an attempt to bridge the gap between actual and virtual patient scenarios. The haptic device, along with the real-time performance of the simulator, gives it both visual and physical realism. The three-dimensional viewing and interaction available through virtual reality make it possible for physicians, pre-hospital personnel and students to practice many endotracheal intubations without ever touching a patient. The ability of a medical professional to practice a procedure multiple times prior to performing it on a patient enhances the skill of the individual while reducing the risk to the patient.
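
    The paper's force model is not given in the abstract; purely as an illustration of how a haptic servo loop of this kind is commonly structured (a penalty-based spring force updated near 1 kHz; the device and tissue APIs here are entirely hypothetical):

        import time

        STIFFNESS = 400.0   # N/m, illustrative penalty spring for tissue contact
        RATE_HZ = 1000.0    # servo rate typically needed for stable stiff contact

        def haptic_loop(device, tissue):
            period = 1.0 / RATE_HZ
            while device.running():                      # hypothetical device API
                tip = device.tip_position()              # tool tip position, meters
                depth, normal = tissue.penetration(tip)  # hypothetical collision query
                if depth > 0.0:
                    force = [STIFFNESS * depth * n for n in normal]
                else:
                    force = [0.0, 0.0, 0.0]
                device.set_force(force)                  # render the reaction force
                time.sleep(period)                       # real drivers use a RT clock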

  2. Multiresolution Algorithms for Processing Giga-Models: Real-time Visualization, Reasoning, and Interaction

    DTIC Science & Technology

    2012-04-23

    Interactive Virtual Hair Salon, Presence (05 2007): 237. Theodore Kim, Jason Sewall, Avneesh Sud, Ming Lin. Fast... in Games, Utrecht, Netherlands, Nov. 2009. Keynote Speaker, IADIS International Conference on Computer Graphics and Visualization, Portugal, June 2009... Keynote Speaker, ACM Symposium on Virtual Reality Software and Technology, Bordeaux, France, October 2008. Invited Speaker, Motion in Games, Utrecht

  3. The Virtual Schoolhouse.

    ERIC Educational Resources Information Center

    Leddo, John; Kolodziej, James

    Significant changes in military training are resulting from pressures to cut costs and move training from the schoolhouse to the field so it can be delivered "just in time" and be more responsive to individual unit training needs. Distributed Interactive Simulation (DIS) allows multiple trainees to interact in real time on a common…

  4. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. The interior simulator is developed as an example of an AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in a living room, so that they can easily design the living room interior without placing real furniture and articles, viewing it from many different locations and orientations in real time. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame of the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera while tracking non-metrically measured feature points for overlaying a virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of the scene of a living room nearly at video rate (20 frames per second).
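
    A rough sketch of this style of image-based registration, assuming OpenCV (the ORB feature choice and file names are our assumptions, not the authors' method): feature points are matched between a base image and the current frame, a homography is estimated from the matches, and the virtual object's base-view projection is warped into the live frame:

        import cv2
        import numpy as np

        orb = cv2.ORB_create()
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        base = cv2.imread("base_view.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
        frame = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)
        overlay = cv2.imread("virtual_furniture.png")              # CG rendering

        kp1, des1 = orb.detectAndCompute(base, None)
        kp2, des2 = orb.detectAndCompute(frame, None)
        matches = bf.match(des1, des2)

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outliers

        # Warp the virtual object's base-view projection into the current view.
        h, w = frame.shape
        warped = cv2.warpPerspective(overlay, H, (w, h))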

  5. Design of a Gaze-Sensitive Virtual Social Interactive System for Children With Autism

    PubMed Central

    Lahiri, Uttama; Warren, Zachary; Sarkar, Nilanjan

    2013-01-01

    Impairments in social communication skills are thought to be core deficits in children with autism spectrum disorder (ASD). In recent years, several assistive technologies, particularly Virtual Reality (VR), have been investigated to promote social interactions in this population. It is well known that children with ASD demonstrate atypical viewing patterns during social interactions and thus monitoring eye-gaze can be valuable to design intervention strategies. While several studies have used eye-tracking technology to monitor eye-gaze for offline analysis, there exists no real-time system that can monitor eye-gaze dynamically and provide individualized feedback. Given the promise of VR-based social interaction and the usefulness of monitoring eye-gaze in real-time, a novel VR-based dynamic eye-tracking system is developed in this work. This system, called Virtual Interactive system with Gaze-sensitive Adaptive Response Technology (VIGART), is capable of delivering individualized feedback based on a child’s dynamic gaze patterns during VR-based interaction. Results are presented from a usability study with six adolescents with ASD that examined the acceptability and usefulness of VIGART. The results, in terms of improvement in behavioral viewing and changes in relevant eye physiological indexes of participants while interacting with VIGART, indicate the potential of this novel technology. PMID:21609889
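
    VIGART's actual feedback rules are not given in the abstract; purely as an illustration of gaze-contingent adaptation, a dwell-time trigger could look like this (all names and thresholds are hypothetical):

        DWELL_THRESHOLD_S = 2.0   # hypothetical: sustained gaze needed for feedback

        def gaze_feedback(gaze_samples):
            # gaze_samples: iterable of (timestamp_s, region) pairs, where region
            # is e.g. 'face', 'body', or 'background' from the eye tracker.
            dwell_start, current = None, None
            for t, region in gaze_samples:
                if region != current:
                    dwell_start, current = t, region
                elif region == "face" and t - dwell_start >= DWELL_THRESHOLD_S:
                    yield (t, "praise")      # individualized feedback event
                    dwell_start = t          # re-arm the trigger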

  6. Design of a gaze-sensitive virtual social interactive system for children with autism.

    PubMed

    Lahiri, Uttama; Warren, Zachary; Sarkar, Nilanjan

    2011-08-01

    Impairments in social communication skills are thought to be core deficits in children with autism spectrum disorder (ASD). In recent years, several assistive technologies, particularly Virtual Reality (VR), have been investigated to promote social interactions in this population. It is well known that children with ASD demonstrate atypical viewing patterns during social interactions and thus monitoring eye-gaze can be valuable to design intervention strategies. While several studies have used eye-tracking technology to monitor eye-gaze for offline analysis, there exists no real-time system that can monitor eye-gaze dynamically and provide individualized feedback. Given the promise of VR-based social interaction and the usefulness of monitoring eye-gaze in real-time, a novel VR-based dynamic eye-tracking system is developed in this work. This system, called Virtual Interactive system with Gaze-sensitive Adaptive Response Technology (VIGART), is capable of delivering individualized feedback based on a child's dynamic gaze patterns during VR-based interaction. Results are presented from a usability study with six adolescents with ASD that examined the acceptability and usefulness of VIGART. The results, in terms of improvement in behavioral viewing and changes in relevant eye physiological indexes of participants while interacting with VIGART, indicate the potential of this novel technology. © 2011 IEEE

  7. Virtual endoscopy using spherical QuickTime-VR panorama views.

    PubMed

    Tiede, Ulf; von Sternberg-Gospos, Norman; Steiner, Paul; Höhne, Karl Heinz

    2002-01-01

    Virtual endoscopy needs some precomputation of the data (segmentation, path finding) before the diagnostic process can take place. We propose a method that precomputes multinode spherical panorama movies using QuickTime VR. This technique allows almost the same navigation and visualization capabilities as a real endoscopic procedure, achieves a significant reduction in interaction input, and yields a movie that serves as a document of the procedure.

  8. Guiding Exploration through Three-Dimensional Virtual Environments: A Cognitive Load Reduction Approach

    ERIC Educational Resources Information Center

    Chen, Chwen Jen; Fauzy Wan Ismail, Wan Mohd

    2008-01-01

    The real-time interactive nature of three-dimensional virtual environments (VEs) makes this technology very appropriate for exploratory learning purposes. However, many studies have shown that the exploration process may cause cognitive overload that affects the learning of domain knowledge. This article reports a quasi-experimental study that…

  9. "The Virtual Patient"--Development, Implementation and Evaluation of an Innovative Computer Simulation for Postgraduate Nursing Students

    ERIC Educational Resources Information Center

    Kiegaldie, Debra; White, Geoff

    2006-01-01

    The Virtual Patient, an interactive multimedia learning resource using a critical care clinical scenario for postgraduate nursing students, was developed to enhance flexible access to learning experiences and improve learning outcomes in the management of critically ill patients. Using real-time physiological animations, authentic content design…

  10. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    PubMed

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
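
    The framework's exact fusion rule is not given in the abstract; the gist of combining a bottom-up object saliency map with top-down context, sketched with NumPy under an assumed linear weighting:

        import numpy as np

        def attended_object(bottom_up, top_down, w_bu=0.5, w_td=0.5):
            # bottom_up: per-object stimulus-driven saliency scores.
            # top_down: per-object goal-directed scores inferred from the user's
            # spatial and temporal behavior (e.g., proximity, gaze-path history).
            score = w_bu * np.asarray(bottom_up) + w_td * np.asarray(top_down)
            return int(np.argmax(score))     # most plausibly attended object

        # Three candidates: the second is visually dull but behaviorally relevant.
        print(attended_object([0.9, 0.3, 0.4], [0.1, 0.9, 0.3]))  # -> 1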

  11. Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment

    ERIC Educational Resources Information Center

    He, Aiguo

    2011-01-01

    Computer systems are useful for improving real-time, interactive distance education activities, especially when a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions in the virtual classroom. The problem is that within…

  12. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  13. Virtual interactive presence for real-time, long-distance surgical collaboration during complex microsurgical procedures.

    PubMed

    Shenai, Mahesh B; Tubbs, R Shane; Guthrie, Barton L; Cohen-Gadol, Aaron A

    2014-08-01

    The shortage of surgeons compels the development of novel technologies that geographically extend the capabilities of individual surgeons and enhance surgical skills. The authors have developed "Virtual Interactive Presence" (VIP), a platform that allows remote participants to simultaneously view each other's visual field, creating a shared field of view for real-time surgical telecollaboration. The authors demonstrate the capability of VIP to facilitate long-distance telecollaboration during cadaveric dissection. Virtual Interactive Presence consists of local and remote workstations with integrated video capture devices and video displays. Each workstation mutually connects via commercial teleconferencing devices, allowing worldwide point-to-point communication. Software composites the local and remote video feeds, displaying a hybrid perspective to each participant. For demonstration, local and remote VIP stations were situated in Indianapolis, Indiana, and Birmingham, Alabama, respectively. A suboccipital craniotomy and microsurgical dissection of the pineal region was performed in a cadaveric specimen using VIP. Task and system performance were subjectively evaluated, while additional video analysis was used for objective assessment of delay and resolution. Participants at both stations were able to visually and verbally interact while identifying anatomical structures, guiding surgical maneuvers, and discussing overall surgical strategy. Video analysis of 3 separate video clips yielded a mean compositing delay of 760 ± 606 msec (when compared with the audio signal). Image resolution was adequate to visualize complex intracranial anatomy and provide interactive guidance. Virtual Interactive Presence is a feasible paradigm for real-time, long-distance surgical telecollaboration. Delay, resolution, scaling, and registration are parameters that require further optimization, but are within the realm of current technology. The paradigm potentially enables remotely located experts to mentor less experienced personnel located at the surgical site, with applications in surgical training programs, remote proctoring for proficiency, and expert support for rural settings and across different countries.

  14. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    NASA Technical Reports Server (NTRS)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), real-time haptic computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real-time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
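
    A minimal NumPy sketch of the Jacobi-preconditioned conjugate gradient iteration the authors propose to parallelize (the finite element system itself is assumed: K is the reduced stiffness matrix, f the applied haptic load):

        import numpy as np

        def jacobi_pcg(K, f, tol=1e-8, max_iter=200):
            # Solve K u = f by conjugate gradients, preconditioned with
            # M = diag(K) (Jacobi), whose row-wise structure suits parallel
            # hardware such as FPGAs.
            u = np.zeros_like(f)
            r = f - K @ u
            Minv = 1.0 / np.diag(K)
            z = Minv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Kp = K @ p
                alpha = rz / (p @ Kp)
                u += alpha * p
                r -= alpha * Kp
                if np.linalg.norm(r) < tol:
                    break
                z = Minv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return u

        K = np.array([[4.0, 1.0], [1.0, 3.0]])      # toy SPD "stiffness" matrix
        print(jacobi_pcg(K, np.array([1.0, 2.0])))  # ~[0.0909, 0.6364]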

  15. A Nationwide Experimental Multi-Gigabit Network

    DTIC Science & Technology

    2003-03-01

    television and cinema, and to real-time interactive teleconferencing. There is another variable which affects this happy growth in network bandwidth and... render large scientific data sets with interactive frame rates on the desktop or in an immersive virtual reality (VR) environment. In our design, we

  16. The Design and Implementation of Virtual Roaming in Yunnan Diqing Tibetan traditional Villages

    NASA Astrophysics Data System (ADS)

    Cao, Lucheng; Xu, Wu; Li, Ke; Jin, Chunjie; Su, Ying; He, Jin

    2018-06-01

    Traditional dwellings are a continuation of intangible cultural heritage and the native soil for its development. At present, the protection and inheritance of traditional villages are being impacted by the process of modernization, and the phenomenon of assimilation is severe. This article takes these problems as its starting point, analyzes why and how virtual reality technology can better address them, and explores the Tibetan traditional dwellings of Diqing, Yunnan as a concrete example. First, using VR technology with real images and sound, we simulate a near-real virtual world. Second, we collect a large amount of real image information and build visualization models of the buildings on the 3DMAX software platform, using UV mapping and rendering optimization. Finally, the Vizard virtual reality development platform is used to establish the roaming system and realize virtual interaction. The roaming system was published online, overcoming the drawbacks of unintuitive presentation and low interaction capability, and these new ideas can give a whole new meaning to projects for the protection of cultural relic buildings. At the same time, visitors can enjoy the "Dian-style" architectural style and cultural connotations of the dwelling houses of Diqing, Yunnan.
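
    The paper's Vizard code is not reproduced in the abstract; since Vizard scripting is Python, a minimal roaming-system sketch under that assumption (the model file name is hypothetical, and the calls follow the WorldViz Vizard API as we understand it) might look like:

        import viz
        import vizcam

        viz.setMultiSample(4)      # multisample anti-aliasing for smoother edges
        viz.go()                   # open the rendering window

        village = viz.addChild('diqing_village.osgb')  # hypothetical exported model
        vizcam.WalkNavigate(moveScale=2.0)             # keyboard/mouse walkthrough
        viz.MainView.setPosition([0, 1.8, -10])        # eye height of about 1.8 m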

  17. Validation of a method for real time foot position and orientation tracking with Microsoft Kinect technology for use in virtual reality and treadmill based gait training programs.

    PubMed

    Paolini, Gabriele; Peruzzi, Agnese; Mirelman, Anat; Cereatti, Andrea; Gaukrodger, Stephen; Hausdorff, Jeffrey M; Della Croce, Ugo

    2014-09-01

    The use of virtual reality for the provision of motor-cognitive gait training has been shown to be effective for a variety of patient populations. The interaction between the user and the virtual environment is achieved by tracking the motion of the body parts and replicating it in the virtual environment in real time. In this paper, we present the validation of a novel method for tracking foot position and orientation in real time, based on the Microsoft Kinect technology, to be used for gait training combined with virtual reality. The validation of the motion tracking method was performed by comparing the tracking performance of the new system against a stereo-photogrammetric system used as gold standard. Foot position errors were in the order of a few millimeters (average RMSD from 4.9 to 12.1 mm in the medio-lateral and vertical directions, from 19.4 to 26.5 mm in the anterior-posterior direction); the foot orientation errors were also small (average %RMSD from 5.6% to 8.8% in the medio-lateral and vertical directions, from 15.5% to 18.6% in the anterior-posterior direction). The results suggest that the proposed method can be effectively used to track feet motion in virtual reality and treadmill-based gait training programs.
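
    The RMSD figures above come from a comparison of this general form, sketched in NumPy (time alignment and resampling, which the real validation requires, are omitted):

        import numpy as np

        def rmsd(kinect_xyz, gold_xyz):
            # Per-axis root-mean-square deviation between the Kinect-based foot
            # trajectory and the stereo-photogrammetric gold standard.
            # Both inputs: (n_samples, 3) arrays, already time-aligned, in mm.
            err = np.asarray(kinect_xyz) - np.asarray(gold_xyz)
            return np.sqrt((err ** 2).mean(axis=0))  # [ML, vertical, AP]

        kinect = np.array([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
        gold = np.array([[1.0, 2.5, 2.0], [2.0, 3.5, 3.0]])
        print(rmsd(kinect, gold))                    # -> [0.  0.5 1. ]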

  18. Shared virtual environments for aerospace training

    NASA Technical Reports Server (NTRS)

    Loftin, R. Bowen; Voss, Mark

    1994-01-01

    Virtual environments have the potential to significantly enhance the training of NASA astronauts and ground-based personnel for a variety of activities. A critical requirement is the need to share virtual environments, in real or near real time, between remote sites. It has been hypothesized that the training of international astronaut crews could be done more cheaply and effectively by utilizing such shared virtual environments in the early stages of mission preparation. The Software Technology Branch at NASA's Johnson Space Center has developed the capability for multiple users to simultaneously share the same virtual environment. Each user generates the graphics needed to create the virtual environment. All changes of object position and state are communicated to all users so that each virtual environment maintains its 'currency.' Examples of these shared environments will be discussed and plans for the utilization of the Department of Defense's Distributed Interactive Simulation (DIS) protocols for shared virtual environments will be presented. Finally, the impact of this technology on training and education in general will be explored.
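
    The abstract describes the synchronization scheme only at a high level; a toy sketch of the pattern (the message format and UDP transport are our assumptions, not the DIS protocol):

        import json
        import socket

        def broadcast_update(sock, entity_id, position, state, port=9999):
            # Each user's simulator announces object changes to all peers so
            # every copy of the shared virtual environment stays current.
            msg = json.dumps({"id": entity_id, "pos": position, "state": state})
            sock.sendto(msg.encode(), ("255.255.255.255", port))

        def apply_update(world, raw):
            upd = json.loads(raw)        # peers replay the same change locally
            world[upd["id"]] = {"pos": upd["pos"], "state": upd["state"]}

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        broadcast_update(sock, "hatch_door", [1.2, 0.0, 3.4], "open")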

  19. A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation

    NASA Astrophysics Data System (ADS)

    da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille

    2012-03-01

    Computational fluid dynamics in simulation has become an important field not only for physics and engineering areas but also for simulation, computer graphics, virtual reality and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time results when executed. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system which uses some successful algorithms already proposed, as well as a data structure organisation based on a heterogeneous architecture using CPUs and GPUs, in order to process the simulation of the interaction of fluids and rigid bodies. This successfully results in a two-way interaction between them and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different paradigms of hardware architecture for this specific kind of problem. Since our method achieves real-time results, it is suitable for virtual reality, simulation and video game fluid simulation problems.

  20. Modeling and performance analysis using extended fuzzy-timing Petri nets for networked virtual environments.

    PubMed

    Zhou, Y; Murata, T; Defanti, T A

    2000-01-01

    Despite their attractive properties, networked virtual environments (net-VEs) are notoriously difficult to design, implement, and test due to the concurrency, real-time and networking features in these systems. Net-VEs place high quality-of-service (QoS) demands on the network to maintain natural and real-time interactions among users. The current practice for net-VE design is basically trial and error, empirical, and totally lacking in formal methods. This paper proposes to apply a Petri net formal modeling technique to a net-VE, NICE (narrative immersive constructionist/collaborative environment), to predict the net-VE performance based on simulation, and to improve the net-VE performance. NICE is essentially a network of collaborative virtual reality systems called CAVEs (CAVE automatic virtual environment). First, we introduce extended fuzzy-timing Petri net (EFTN) modeling and analysis techniques. Then, we present EFTN models of the CAVE, NICE, and the transport layer protocol used in NICE: the transmission control protocol (TCP). We show a possibility analysis based on the EFTN model of the CAVE. Then, using these models and Design/CPN as the simulation tool, we conducted various simulations to study the real-time behavior, network effects and performance (latencies and jitters) of NICE. Our simulation results are consistent with experimental data.

  1. The Development of Interactive Distance Learning in Taiwan: Challenges and Prospects.

    ERIC Educational Resources Information Center

    Chu, Clarence T.

    1999-01-01

    Describes three types of interactive distance-education systems under development in Taiwan: real-time multicast systems; virtual-classroom systems; and curriculum-on-demand systems. Discusses the use of telecommunications and computer technology in higher education, problems and challenges, and future prospects. (Author/LRW)

  2. An interactive physics-based unmanned ground vehicle simulator leveraging open source gaming technology: progress in the development and application of the virtual autonomous navigation environment (VANE) desktop

    NASA Astrophysics Data System (ADS)

    Rohde, Mitchell M.; Crawford, Justin; Toschlog, Matthew; Iagnemma, Karl D.; Kewlani, Guarav; Cummins, Christopher L.; Jones, Randolph A.; Horner, David A.

    2009-05-01

    It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles (UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by US Army Engineer Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing (HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE HPC research is a real-time desktop simulation application under development by the authors that provides a portal into the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations. ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf (COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several initial applications of the system.

  3. A computer-based training system combining virtual reality and multimedia

    NASA Technical Reports Server (NTRS)

    Stansfield, Sharon A.

    1993-01-01

    Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.

  4. Virtual Social Environments as a Tool for Psychological Assessment: Dynamics of Interaction with a Virtual Spouse

    ERIC Educational Resources Information Center

    Schonbrodt, Felix D.; Asendorpf, Jens B.

    2011-01-01

    Computer games are advocated as a promising tool bridging the gap between the controllability of a lab experiment and the mundane realism of a field experiment. At the same time, many authors stress the importance of observing real behavior instead of asking participants about possible or intended behaviors. In this article, the authors introduce…

  5. Exploring Gigabyte Datasets in Real Time: Architectures, Interfaces and Time-Critical Design

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    Architectures and Interfaces: The implications of real-time interaction on software architecture design: decoupling of interaction/graphics and computation into asynchronous processes. The performance requirements of graphics and computation for interaction. Time management in such an architecture. Examples of how visualization algorithms must be modified for high performance. A brief survey of interaction techniques and design, including direct manipulation and manipulation via widgets. The talk discusses how human factors considerations drove the design and implementation of the virtual wind tunnel. Time-Critical Design: A survey of time-critical techniques for both computation and rendering. Emphasis on the assignment of a time budget to both the overall visualization environment and to each individual visualization technique in the environment. The estimation of the benefit and cost of an individual technique. Examples of the modification of visualization algorithms to allow time-critical control.
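
    The budget-assignment scheme is only summarized above; a greedy sketch of the idea (the per-technique benefit and cost estimates are assumed inputs, not the talk's actual estimator):

        def assign_budget(techniques, frame_budget_ms):
            # Pick the visualization techniques to run this frame so estimated
            # total cost stays within the time budget, favoring benefit per cost.
            # techniques: list of (name, est_benefit, est_cost_ms) tuples.
            chosen, spent = [], 0.0
            for name, benefit, cost in sorted(
                    techniques, key=lambda t: t[1] / t[2], reverse=True):
                if spent + cost <= frame_budget_ms:
                    chosen.append(name)
                    spent += cost
            return chosen

        vis = [("streamlines", 8.0, 10.0), ("isosurface", 5.0, 12.0),
               ("particles", 3.0, 2.0)]
        print(assign_budget(vis, 16.7))  # 60 Hz frame: ['particles', 'streamlines']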

  6. Time Series Data Visualization in World Wide Telescope

    NASA Astrophysics Data System (ADS)

    Fay, J.

    WorldWide Telescope provides a rich set of time series visualizations for both archival and real-time data. WWT consists of both desktop tools for immersive interactive visualization and HTML5 web-based controls that can be utilized in customized web pages. WWT supports a range of display options including full dome, power walls, stereo and virtual reality headsets.

  7. Synchronous Writing Environments: Real-Time Interaction in Cyberspace (Technology Tidbits).

    ERIC Educational Resources Information Center

    Anderson-Inman, Lynne; And Others

    1996-01-01

    Discusses three types of synchronous writing environments, each offering teachers and students a vehicle for using electronic text to promote literacy-based learning communities: classroom collaboration, networked notetaking, and virtual communities. (SR)

  8. Collaborative voxel-based surgical virtual environments.

    PubMed

    Acosta, Eric; Muniz, Gilbert; Armonda, Rocco; Bowyer, Mark; Liu, Alan

    2008-01-01

    Virtual Reality-based surgical simulators can utilize Collaborative Virtual Environments (C-VEs) to provide team-based training. To support real-time interactions, C-VEs are typically replicated on each user's local computer and a synchronization method helps keep all local copies consistent. This approach does not work well for voxel-based C-VEs since large and frequent volumetric updates make synchronization difficult. This paper describes a method that allows multiple users to interact within a voxel-based C-VE for a craniotomy simulator being developed. Our C-VE method requires smaller update sizes and provides faster synchronization update rates than volumetric-based methods. Additionally, we address network bandwidth/latency issues to simulate networked haptic and bone drilling tool interactions with a voxel-based skull C-VE.
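
    The paper's update encoding is not spelled out in the abstract; the general idea of shipping only changed voxels rather than the whole volume, sketched here (the encoding itself is an assumption):

        def voxel_diff(volume, removed):
            # Encode a drilling step as the list of voxel indices it cleared,
            # instead of retransmitting the full skull volume.
            # removed: iterable of (x, y, z) voxels carved out this tick.
            update = []
            for x, y, z in removed:
                if volume[x][y][z] != 0:
                    volume[x][y][z] = 0          # apply locally
                    update.append((x, y, z))     # ship only the delta to peers
            return update

        def apply_diff(volume, update):
            for x, y, z in update:               # peers replay the same delta
                volume[x][y][z] = 0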

  9. Design of virtual three-dimensional instruments for sound control

    NASA Astrophysics Data System (ADS)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and an acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors and an SGI Onyx, and by extending a real-time visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface with the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize such adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.

  10. Telearch - Integrated visual simulation environment for collaborative virtual archaeology.

    NASA Astrophysics Data System (ADS)

    Kurillo, Gregorij; Forte, Maurizio

    Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for remote collaboration of geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D and 3D video streaming technology to facilitate the remote presence of users. In this paper, we present several experimental case studies to demonstrate the integration of, and interaction with, 3D models and geographical information system (GIS) data in this collaborative environment.

  11. Virtual faces expressing emotions: an initial concomitant and construct validity study.

    PubMed

    Joyal, Christian C; Jacob, Laurence; Cigna, Marie-Hélène; Guay, Jean-Pierre; Renaud, Patrice

    2014-01-01

    Facial expressions of emotions represent classic stimuli for the study of social cognition. Developing virtual dynamic facial expressions of emotions, however, would open up possibilities for both fundamental and clinical research. For instance, virtual faces allow real-time human-computer feedback loops between physiological measures and the virtual agent. The goal of this study was to initially assess the concomitant and construct validity of a newly developed set of virtual faces expressing six fundamental emotions (happiness, surprise, anger, sadness, fear, and disgust). Recognition rates, facial electromyography (zygomatic major and corrugator supercilii muscles), and regional gaze fixation latencies (eyes and mouth regions) were compared in 41 adult volunteers (20 ♂, 21 ♀) during the presentation of video clips depicting real vs. virtual adults expressing emotions. Emotions expressed by each set of stimuli were similarly recognized, both by men and women. Accordingly, both sets of stimuli elicited similar activation of facial muscles and similar ocular fixation times in eye regions from male and female participants. Further validation studies can be performed with these virtual faces among clinical populations known to present social cognition difficulties. Brain-Computer Interface studies with feedback-feedforward interactions based on facial emotion expressions can also be conducted with these stimuli.

  12. The RoboCup Mixed Reality League - A Case Study

    NASA Astrophysics Data System (ADS)

    Gerndt, Reinhard; Bohnen, Matthias; da Silva Guerra, Rodrigo; Asada, Minoru

    In typical mixed reality systems there is only a one-way interaction from real to virtual. A human user or the physics of a real object may influence the behavior of virtual objects, but real objects usually cannot be influenced by the virtual world. By introducing real robots into the mixed reality system, we allow a true two-way interaction between virtual and real worlds. Our system has been used since 2007 to implement the RoboCup mixed reality soccer games and other applications for research and edutainment. Our framework system is freely programmable to generate any virtual environment, which may then be further supplemented with virtual and real objects. The system allows for control of any real object based on differential drive robots. The robots may be adapted for different applications, e.g., with markers for identification or with covers to change shape and appearance. They may also be “equipped” with virtual tools. In this chapter we present the hardware and software architecture of our system and some applications. The authors believe this can be seen as a first implementation of Ivan Sutherland’s 1965 idea of the ultimate display: “The ultimate display would, of course, be a room within which the computer can control the existence of matter …” (Sutherland, 1965, Proceedings of IFIPS Congress 2:506-508).

  13. Augmenting the thermal flux experiment: A mixed reality approach with the HoloLens

    NASA Astrophysics Data System (ADS)

    Strzys, M. P.; Kapp, S.; Thees, M.; Kuhn, J.; Lukowicz, P.; Knierim, P.; Schmidt, A.

    2017-09-01

    In the fields of Virtual Reality (VR) and Augmented Reality (AR), technologies have made huge progress in recent years and have also reached the field of education. The virtuality continuum, ranging from pure virtuality on one side to the real world on the other, has been successfully covered by the use of immersive technologies like head-mounted displays, which allow one to embed virtual objects into the real surroundings, leading to a Mixed Reality (MR) experience. In such an environment, digital and real objects not only coexist, but are also able to interact with each other in real time. These concepts can be used to merge human perception of reality with digitally visualized sensor data, thereby making the invisible visible. As a first example, in this paper we introduce, alongside the basic idea of this column, an MR experiment in thermodynamics for a laboratory course for freshman students in physics or other science and engineering subjects that uses physical data from mobile devices for analyzing and displaying physical phenomena to students.

  14. Interaction Management Strategies on IRC and Virtual Chat Rooms.

    ERIC Educational Resources Information Center

    Altun, Arif

    Internet Relay Chat (IRC) is an electronic medium that combines orthographic form with real time, synchronous transmission in an unregulated global multi-user environment. The orthographic letters mediate the interaction in that users can only access the IRC session through reading and writing; they have no access to any visual representations at…

  15. Non-Native Speaker Interaction Management Strategies in a Network-Based Virtual Environment

    ERIC Educational Resources Information Center

    Peterson, Mark

    2008-01-01

    This article investigates the dyad-based communication of two groups of non-native speakers (NNSs) of English involved in real time interaction in a type of text-based computer-mediated communication (CMC) tool known as a MOO. The object of this semester long study was to examine the ways in which the subjects managed their L2 interaction during…

  16. An interactive three-dimensional virtual body structures system for anatomical training over the internet.

    PubMed

    Temkin, Bharti; Acosta, Eric; Malvankar, Ameya; Vaidyanath, Sreeram

    2006-04-01

    The Visible Human digital datasets make it possible to develop computer-based anatomical training systems that use virtual anatomical models (virtual body structures-VBS). Medical schools are combining these virtual training systems and classical anatomy teaching methods that use labeled images and cadaver dissection. In this paper we present a customizable web-based three-dimensional anatomy training system, W3D-VBS. W3D-VBS uses the National Library of Medicine's (NLM) Visible Human Male datasets to interactively locate, explore, select, extract, highlight, label, and visualize realistic 2D (using axial, coronal, and sagittal views) and 3D virtual structures. A real-time self-guided virtual tour of the entire body is designed to provide detailed anatomical information about structures, substructures, and proximal structures. The system thus facilitates learning of visuospatial relationships at a level of detail that may not be possible by any other means. The use of volumetric structures allows for repeated real-time virtual dissections, from any angle, at the convenience of the user. Volumetric (3D) virtual dissections are performed by adding, removing, highlighting, and labeling individual structures (and/or entire anatomical systems). The resultant virtual explorations (consisting of anatomical 2D/3D illustrations and animations), with user-selected highlighting colors and label positions, can be saved and used for generating lesson plans and evaluation systems. Tracking users' progress using the evaluation system helps customize the curriculum, making W3D-VBS a powerful learning tool. Our plan is to incorporate other Visible Human segmented datasets, especially datasets with higher resolutions, that make it possible to include finer anatomical structures such as nerves and small vessels. (c) 2006 Wiley-Liss, Inc.

  17. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases.

    PubMed

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-04-15

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii with its nunchuk and balance board peripherals, head-mounted displays and joysticks allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of the visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes into virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients' brains, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies.

  18. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases

    PubMed Central

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-01-01

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii with its nunchuk and balance board peripherals, head-mounted displays and joysticks allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of the visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes into virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients’ brains, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies. PMID:25206907

  19. Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.

    PubMed

    Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A

    2013-01-01

    Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
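
    The first of the four steps named above, color-based segmentation of the hand, commonly reduces to a threshold in hue-saturation space; a sketch with OpenCV (the skin-tone bounds are illustrative, not the paper's values):

        import cv2
        import numpy as np

        def segment_hand(frame_bgr):
            # Classify pixels as "hand" by thresholding in HSV space so the
            # real hand can later be composited over the virtual tool.
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            lower = np.array([0, 40, 60], np.uint8)    # illustrative skin bounds
            upper = np.array([25, 180, 255], np.uint8)
            mask = cv2.inRange(hsv, lower, upper)
            # Morphological opening removes speckle so the compositing step
            # receives a coherent hand silhouette.
            kernel = np.ones((5, 5), np.uint8)
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)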

  20. Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.

    PubMed

    Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J

    2011-11-01

    To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Exploring Learning through Audience Interaction in Virtual Reality Dome Theaters

    NASA Astrophysics Data System (ADS)

    Apostolellis, Panagiotis; Daradoumis, Thanasis

    Informal learning in public spaces like museums, science centers and planetariums has become increasingly popular in recent years. Recent advancements in large-scale displays have allowed contemporary technology-enhanced museums to be equipped with digital domes, some with real-time capabilities like Virtual Reality systems. By conducting an extensive literature review we have come to the conclusion that little to no research has been carried out on the learning outcomes that the combination of VR and audience interaction can provide in the immersive environments of dome theaters. Thus, we propose that audience collaboration in immersive virtual reality environments presents a promising approach to supporting effective learning in groups of school-aged children.

  2. Virtually supportive: a feasibility pilot study of an online support group for dementia caregivers in a 3D virtual environment.

    PubMed

    O'Connor, Mary-Frances; Arizmendi, Brian J; Kaszniak, Alfred W

    2014-08-01

    Caregiver support groups effectively reduce the stress of caring for someone with dementia, yet these same demands can prevent participation in a group. The present feasibility study investigated a virtual online caregiver support group to bring the support group into the home. While online groups have been shown to be helpful, submissions to a message board (vs. live conversation) can feel impersonal. By using avatars, participants interacted via real-time chat in a virtual environment in an 8-week support group. Data indicated lower levels of perceived stress, depression and loneliness across participants. Importantly, satisfaction reports also indicate that caregivers overcame the barriers to participation and had a strong sense of the group's presence. This study provides the framework for an accessible and low-cost online support group for dementia caregivers, and demonstrates the feasibility of an interactive group in a virtual environment for engaging members in meaningful interaction. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Time multiplexing for increased FOV and resolution in virtual reality

    NASA Astrophysics Data System (ADS)

    Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj

    2017-06-01

    We introduce a time multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement of the pixel density, the field of view (FOV), or both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the virtual image. Each partial real image uses the full set of physical pixels available in the display. The partial real images are successively formed and combine spatially and temporally to form a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slots. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or at the wrong time slot. This time multiplexing strategy requires the real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed. Several optical designs for achieving this time multiplexing scheme in a compact format are shown. This time multiplexing scheme allows increasing the resolution/FOV of the virtual image not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.
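
    As a worked check of the frame-rate figure above (the two-slot split at a standard 60 Hz virtual frame rate is our illustrative assumption, not stated in the abstract): with N time slots multiplexed onto one panel, the virtual image gains up to N times the panel's physical pixel count, while the panel must refresh N times per virtual frame:

        f_{\text{panel}} = N \cdot f_{\text{virtual}}, \qquad N = 2,\ f_{\text{virtual}} = 60\,\text{Hz} \;\Rightarrow\; f_{\text{panel}} = 120\,\text{fps},

    consistent with the >120 fps requirement quoted above.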

  4. Generalized interactions using virtual tools within the spring framework: cutting

    NASA Technical Reports Server (NTRS)

    Montgomery, Kevin; Bruyns, Cynthia D.

    2002-01-01

    We present schemes for real-time generalized mesh cutting. Starting with a basic example, we describe the details of implementing cutting on single and multiple surface objects as well as hybrid and volumetric meshes, using virtual tools with single and multiple cutting surfaces. These methods have been implemented in a robust surgical simulation environment allowing us to model procedures ranging from animal dissection to cleft lip correction.

  5. The perception of spatial layout in real and virtual worlds.

    PubMed

    Arthur, E J; Hancock, P A; Chrysler, S T

    1997-01-01

    As human-machine interfaces grow more immersive and graphically oriented, virtual environment systems become more prominent as the medium for human-machine communication. Often, virtual environments (VEs) are built to provide exact metrical representations of existing or proposed physical spaces. However, it is not known how individuals develop representational models of the spaces in which they are immersed, or how those models may be distorted with respect to both the virtual and real-world equivalents. To evaluate the process of model development, the present experiment examined participants' ability to reproduce a complex spatial layout of objects, having previously experienced the objects under different viewing conditions. The layout consisted of nine common objects arranged on a flat plane. These objects could be viewed in a free binocular virtual condition, a free binocular real-world condition, or a static monocular view of the real world. The first two allowed active exploration of the environment, while the latter condition allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing condition was a between-subjects variable, with 10 participants randomly assigned to each condition. Performance was assessed using mapping accuracy and triadic comparisons of relative inter-object distances. Mapping results showed a significant effect of viewing condition where, interestingly, the static monocular condition was superior to both the active virtual and real binocular conditions. Results for the triadic comparisons showed a significant gender-by-viewing-condition interaction in which males were more accurate than females. These results suggest that the situation model resulting from interaction with a virtual environment was indistinguishable from that resulting from interaction with real objects, at least within the constraints of the present procedure.

  6. Handling Massive Models: Representation, Real-Time Display and Interaction

    DTIC Science & Technology

    2008-09-16

    Indexed publication list (garbled in extraction; truncations preserved): K. Ward, N. Galoppo, and M. Lin, "Interactive Virtual Hair Salon," Presence (2007); K. Ward, F. Bertails, T.-Y. …; "…Detection for Deformable Models using Representative-Triangles," Symposium on Interactive 3D Graphics and Games (2008); further entries in Interactive 3D Graphics and Games (I3D, 2008), including Brandon Lloyd, Naga K. Govindaraju, Cory Quammen, Steven E. Molnar, Dinesh …

  7. [A new concept in digestive surgery: the computer assisted surgical procedure, from virtual reality to telemanipulation].

    PubMed

    Marescaux, J; Clément, J M; Nord, M; Russier, Y; Tassetti, V; Mutter, D; Cotin, S; Ayache, N

    1997-11-01

    Surgical simulation increasingly appears to be an essential aspect of tomorrow's surgery. The development of a hepatic surgery simulator is an advanced concept calling for a new medium that will transform the medical world: virtual reality. Virtual reality extends the perception of our five senses by representing more than the real state of things by means of computer science and robotics. It consists of three concepts: immersion, navigation, and interaction. Three reasons have led us to develop this simulator: the first is to provide the surgeon with a comprehensive visualization of the organ; the second is to allow planning and surgical simulation comparable to the detailed flight plan of a commercial jet pilot; the third lies in the fact that virtual reality is an integral part of the concept of the computer-assisted surgical procedure. The project consists of a sophisticated simulator that must meet five requirements: visual fidelity, interactivity, physical properties, physiological properties, and sensory input and output. In this report we describe how to obtain a realistic 3D model of the liver from two-dimensional (2D) medical images for anatomical and surgical training. The introduction of a tumor and the consequent planning and virtual resection are also described, as are force feedback and real-time interaction.

  8. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time-series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous responses to user queries, nor do they support the rapid data analysis and visualization needed for interactive experiments. In large-scale experiments, this behaviour slows research discovery and discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web-based virtual laboratory that can manage large volumes of biological time-series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  9. Evaluation of procedural learning transfer from a virtual environment to a real situation: a case study on tank maintenance training.

    PubMed

    Ganier, Franck; Hoareau, Charlotte; Tisseau, Jacques

    2014-01-01

    Virtual reality opens new opportunities for operator training in complex tasks: it lowers costs and imposes fewer constraints than traditional training. The ultimate goal of virtual training is to transfer knowledge gained in a virtual environment to an actual real-world setting. This study tested whether a maintenance procedure could be learnt equally well by virtual-environment and conventional training. Forty-two adults were divided into three equally sized groups: virtual training (GVT® [generic virtual training]), conventional training (using a real tank suspension and preparation station), and control (no training). Participants then performed the procedure individually in the real environment. Both training types (conventional and virtual) produced similar levels of performance when the procedure was carried out in real conditions, and both trained groups outperformed the control group in terms of task success, time taken to complete the task, time spent consulting job instructions, and number of times the instructor provided guidance.

  10. Learning Protein Structure with Peers in an AR-Enhanced Learning Environment

    ERIC Educational Resources Information Center

    Chen, Yu-Chien

    2013-01-01

    Augmented reality (AR) is an interactive system that allows users to interact with virtual objects and the real world at the same time. The purpose of this dissertation was to explore how AR, as a new visualization tool that can demonstrate spatial relationships by representing three-dimensional objects and animations, facilitates students to…

  11. Effects of Pedagogical Agent Gestures on Social Acceptance and Learning: Virtual Real Relationships in an Elementary Foreign Language Classroom

    ERIC Educational Resources Information Center

    Davis, Robert; Antonenko, Pavlo

    2017-01-01

    Pedagogical agents (PAs) are lifelike characters in virtual environments that help facilitate learning through social interactions and the virtual real relationships with the learners. This study explored whether and how PA gesture design impacts learning and agent social acceptance when used with elementary students learning foreign language…

  12. 3-D surface reconstruction of patient specific anatomic data using a pre-specified number of polygons.

    PubMed

    Aharon, S; Robb, R A

    1997-01-01

    Virtual reality environments provide highly interactive, natural control of the visualization process, significantly enhancing the scientific value of the data produced by medical imaging systems. Due to the computational and real-time display update requirements of virtual reality interfaces, however, the complexity of organ and tissue surfaces that can be displayed is limited. In this paper, we present a new algorithm for producing a polygonal surface containing a pre-specified number of polygons from patient- or subject-specific volumetric image data. The advantage of this new algorithm is that it effectively tiles complex structures with a specified number of polygons, selected to optimize the trade-off between surface detail and real-time display rates.
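    The sketch below illustrates the general idea of meeting a polygon budget, assuming a naive shortest-edge collapse loop; the paper's algorithm is considerably more sophisticated about where detail is preserved.

```python
# Naive sketch of reducing a triangle mesh to a pre-specified polygon
# budget by repeatedly collapsing the shortest edge (illustrative only).
import numpy as np

def decimate(vertices, triangles, target_count):
    verts = [np.asarray(v, float) for v in vertices]
    tris = {tuple(t) for t in triangles}
    while len(tris) > target_count:
        # collect all edges of the remaining triangles
        edges = {tuple(sorted((t[i], t[(i + 1) % 3])))
                 for t in tris for i in range(3)}
        # pick the shortest edge and merge its endpoints at the midpoint
        a, b = min(edges,
                   key=lambda e: np.linalg.norm(verts[e[0]] - verts[e[1]]))
        verts[a] = (verts[a] + verts[b]) / 2
        remapped = set()
        for t in tris:
            t = tuple(a if v == b else v for v in t)
            if len(set(t)) == 3:          # drop degenerate triangles
                remapped.add(t)
        tris = remapped
    return verts, list(tris)
```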

  13. Virtual probing system for medical volume data

    NASA Astrophysics Data System (ADS)

    Xiao, Yongfei; Fu, Yili; Wang, Shuguo

    2007-12-01

    Because 3D medical data visualization is computationally demanding, interactively exploring the interior of a dataset has long been a problem to be resolved. In this paper, we present a novel approach to exploring a 3D medical dataset in real time by utilizing a 3D widget to manipulate the scanning plane. With the help of the 3D texture capability of modern graphics cards, a virtual scanning probe is used to explore an oblique clipping plane of the medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. This will be a valuable tool in anatomy education and in the interpretation of medical images in research.
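    A minimal CPU-side sketch of what such a probe computes, assuming nearest-neighbour sampling with NumPy; the paper instead lets the graphics card's 3D texture hardware interpolate the oblique plane in real time.

```python
# Sketch of extracting an oblique slice from a medical volume (illustrative).
import numpy as np

def oblique_slice(volume, center, u, v, size=64, step=1.0):
    """Sample a size x size slice spanned by vectors u, v around center."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    out = np.zeros((size, size), volume.dtype)
    half = size / 2.0
    for i in range(size):
        for j in range(size):
            p = center + ((i - half) * u + (j - half) * v) * step
            idx = np.round(p).astype(int)   # nearest-neighbour sample
            if all(0 <= idx[k] < volume.shape[k] for k in range(3)):
                out[i, j] = volume[tuple(idx)]
    return out

vol = np.random.rand(64, 64, 64)            # stand-in for CT/MRI data
sl = oblique_slice(vol, center=np.array([32.0, 32, 32]),
                   u=np.array([1.0, 0, 0]), v=np.array([0, 0.7, 0.7]))
```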

  14. Attentional Demand of a Virtual Reality-Based Reaching Task in Nondisabled Older Adults.

    PubMed

    Chen, Yi-An; Chung, Yu-Chen; Proffitt, Rachel; Wade, Eric; Winstein, Carolee

    2015-12-01

    Attention during exercise is known to affect performance; however, the attentional demand inherent to virtual reality (VR)-based exercise is not well understood. We used a dual-task paradigm to compare the attentional demands of VR-based and non-VR-based (conventional, real-world) exercise: 22 non-disabled older adults performed a primary reaching task to virtual and real targets in a counterbalanced block order while verbally responding to an unanticipated auditory tone in one third of the trials. The attentional demand of the primary reaching task was inferred from the voice response time (VRT) to the auditory tone. Participants' engagement level and task experience were also obtained using questionnaires. The virtual target condition was more attention demanding (significantly longer VRT) than the real target condition. Secondary analyses revealed a significant interaction between engagement level and target condition on attentional demand. For participants who were highly engaged, attentional demand was high and independent of target condition. However, for those who were less engaged, attentional demand was low and depended on target condition (i.e., virtual > real). These findings add important knowledge to the growing body of research pertaining to the development and application of technology-enhanced exercise for elders and for rehabilitation purposes.

  15. Attentional Demand of a Virtual Reality-Based Reaching Task in Nondisabled Older Adults

    PubMed Central

    Chen, Yi-An; Chung, Yu-Chen; Proffitt, Rachel; Wade, Eric; Winstein, Carolee

    2015-01-01

    Attention during exercise is known to affect performance; however, the attentional demand inherent to virtual reality (VR)-based exercise is not well understood. We used a dual-task paradigm to compare the attentional demands of VR-based and non-VR-based (conventional, real-world) exercise: 22 non-disabled older adults performed a primary reaching task to virtual and real targets in a counterbalanced block order while verbally responding to an unanticipated auditory tone in one third of the trials. The attentional demand of the primary reaching task was inferred from the voice response time (VRT) to the auditory tone. Participants' engagement level and task experience were also obtained using questionnaires. The virtual target condition was more attention demanding (significantly longer VRT) than the real target condition. Secondary analyses revealed a significant interaction between engagement level and target condition on attentional demand. For participants who were highly engaged, attentional demand was high and independent of target condition. However, for those who were less engaged, attentional demand was low and depended on target condition (i.e., virtual > real). These findings add important knowledge to the growing body of research pertaining to the development and application of technology-enhanced exercise for elders and for rehabilitation purposes. PMID:27004233

  16. [Virtual + 1] * Reality

    NASA Astrophysics Data System (ADS)

    Beckhaus, Steffi

    Virtual reality aims at creating an artificial environment that can be perceived as a substitute for a real setting. Much effort in research and development goes into the creation of virtual environments, most of which are perceivable only by the eyes and hands. The multisensory nature of our perception, however, allows and, arguably, also expects more than that. As long as we are not able to simulate and deliver a fully sensory, believable virtual environment to a user, we can make use of the fully sensory, multi-modal nature of real objects to fill in for this deficiency. The idea is to purposefully integrate real artifacts into the application and interaction, instead of dismissing anything real as hindering the virtual experience. The term virtual reality - denoting the goal, not the technology - shifts from a core virtual reality to an “enriched” reality, technologically encompassing both computer-generated and real, physical artifacts. Together, either simultaneously or in a hybrid way, real and virtual jointly provide stimuli that are perceived by users through their senses and later formed into an experience by the user's mind.

  17. VERSE - Virtual Equivalent Real-time Simulation

    NASA Technical Reports Server (NTRS)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher-fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint, together with use of the same API, allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.
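    The core idea, an event-driven simulator whose clock jumps to the next deadline instead of waiting in real time, can be sketched as follows; this illustrates the concept only and is not VERSE's RTAI-level implementation.

```python
# Minimal sketch of an event-driven virtual-time scheduler (illustrative).
import heapq
import itertools

class VirtualClock:
    """Scheduler running in virtual (simulated) time."""
    def __init__(self):
        self.now = 0.0
        self._seq = itertools.count()   # tie-breaker for equal deadlines
        self._queue = []

    def schedule(self, delay, task):
        heapq.heappush(self._queue, (self.now + delay, next(self._seq), task))

    def run(self):
        while self._queue:
            deadline, _, task = heapq.heappop(self._queue)
            self.now = deadline         # jump the clock; no real waiting
            task(self)

def periodic_tick(clock):
    # stand-in for a 500 Hz periodic flight-software task
    if clock.now < 0.01:
        clock.schedule(0.002, periodic_tick)

clock = VirtualClock()
clock.schedule(0.002, periodic_tick)
clock.run()    # completes immediately, regardless of wall-clock time
print(f"virtual time elapsed: {clock.now:.3f} s")
```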

  18. Cybertherapy 2004: Using Interactive Media in Training and Therapeutic Interventions

    DTIC Science & Technology

    2005-03-01

    Indexed excerpt (two-column source garbled in extraction; reconstructed conservatively, truncations preserved): participants wore headphones that delivered a soundscape updated in real time according to their movement in the virtual town; the sounds were produced through tracked…; participants were debriefed after each session but were not informed about the content of the fol…; a cybersickness scale was used to assess the level of discomfort after exposure; sounds added to the soundscape used Ambisonics, a 4-channel audio format that embodies…

  19. Enhancing Navigation Skills through Audio Gaming.

    PubMed

    Sánchez, Jaime; Sáenz, Mauricio; Pascual-Leone, Alvaro; Merabet, Lotfi

    2010-01-01

    We present the design, development and initial cognitive evaluation of an Audio-based Environment Simulator (AbES). This software allows a blind user to navigate through a virtual representation of a real space for the purposes of training orientation and mobility skills. Our findings indicate that users feel satisfied and self-confident when interacting with the audio-based interface, and the embedded sounds allow them to correctly orient themselves and navigate within the virtual world. Furthermore, users are able to transfer spatial information acquired through virtual interactions into real world navigation and problem solving tasks.

  20. Enhancing Navigation Skills through Audio Gaming

    PubMed Central

    Sánchez, Jaime; Sáenz, Mauricio; Pascual-Leone, Alvaro; Merabet, Lotfi

    2014-01-01

    We present the design, development and initial cognitive evaluation of an Audio-based Environment Simulator (AbES). This software allows a blind user to navigate through a virtual representation of a real space for the purposes of training orientation and mobility skills. Our findings indicate that users feel satisfied and self-confident when interacting with the audio-based interface, and the embedded sounds allow them to correctly orient themselves and navigate within the virtual world. Furthermore, users are able to transfer spatial information acquired through virtual interactions into real world navigation and problem solving tasks. PMID:25505796

  1. Evaluation of two 3D virtual computer reconstructions for comparison of cleft lip and palate to normal fetal microanatomy.

    PubMed

    Landes, Constantin A; Weichert, Frank; Geis, Philipp; Helga, Fritsch; Wagner, Mathias

    2006-03-01

    Cleft lip and palate reconstructive surgery requires thorough knowledge of normal and pathological labial, palatal, and velopharyngeal anatomy. This study compared two software algorithms and their 3D virtual anatomical reconstructions, because exact 3D micromorphological reconstruction may improve learning, reveal spatial relationships, and provide data for mathematical modeling. Transverse and frontal serial sections of the midface of 18 fetal specimens (11th to 32nd gestational week) were used for two manual segmentation approaches. The first approach used bitmap images and the commercial SURFdriver software (Windows- or Mac-based), which allowed manual contour matching, surface generation with average slice thickness, 3D triangulation, and real-time interactive viewing of the virtual 3D reconstruction. The second approach used tagged image format files and the platform-independent prototypical SeViSe software developed by one of the authors (F.W.). Distended or compressed structures were dynamically transformed. Registration was automatic but allowed manual correction, and the software supported individual section thickness, surface generation, and interactive virtual 3D real-time viewing. SURFdriver permitted intuitive segmentation and easy manual offset correction, and its reconstructions showed complex spatial relationships in real time. However, frequent software crashes and erroneous landmarks appearing "out of the blue," requiring manual correction, were tedious; individual section thickness, defined smoothing, and an unlimited number of structures could not be integrated; and the reconstruction remained underdimensioned and insufficiently accurate for this study's reconstruction problem. SeViSe permitted an unlimited number of structures, late addition of extra sections, quantified smoothing, and individual slice thickness; it required a more elaborate work-up than SURFdriver but produced detailed and exact 3D reconstructions.

  2. Virtual Diagnostics Interface: Real Time Comparison of Experimental Data and CFD Predictions for a NASA Ares I-Like Vehicle

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2007-01-01

    Virtual Diagnostics Interface (ViDI) technology is a suite of techniques utilizing image processing, data handling, and three-dimensional computer graphics. These techniques aid in the design, implementation, and analysis of complex aerospace experiments. LiveView3D is a software component of ViDI used to display experimental wind tunnel data in real time within an interactive, three-dimensional virtual environment. The LiveView3D software application was under development at NASA Langley Research Center (LaRC) for nearly three years and was recently upgraded to perform real-time (as well as post-test) comparisons of experimental data with pre-computed Computational Fluid Dynamics (CFD) predictions. This capability was utilized to compare experimental measurements with CFD predictions of the surface pressure distribution of the NASA Ares I Crew Launch Vehicle (CLV)-like vehicle tested in the NASA LaRC Unitary Plan Wind Tunnel (UPWT) in the December 2006 to January 2007 timeframe. The wind tunnel tests were conducted to develop a database of experimentally measured aerodynamic performance of the CLV-like configuration for validation of CFD predictive codes.

  3. Uterus models for use in virtual reality hysteroscopy simulators.

    PubMed

    Niederer, Peter; Weiss, Stephan; Caduff, Rosmarie; Bajka, Michael; Szekély, Gabor; Harders, Matthias

    2009-05-01

    Virtual reality models of human organs are needed in surgery simulators developed for educational and training purposes. A simulation can only be useful, however, if the mechanical performance of the system, in terms of force feedback for the user as well as the visual representation, is realistic. We therefore aim at developing a mechanical computer model of the organ in question that yields realistic force-deformation behavior under virtual instrument-tissue interactions and, in particular, runs in real time. The modeling of the human uterus is described as it is to be implemented in a simulator for minimally invasive gynecological procedures. To this end, anatomical information obtained from specially designed computed tomography and magnetic resonance imaging procedures, as well as constitutive tissue properties recorded from mechanical testing, were used. In order to achieve real-time performance, the combination of mechanically realistic numerical uterus models of various levels of complexity with a statistical deformation approach is suggested. With regard to the mechanical accuracy of such models, anatomical characteristics including the fiber architecture along with the mechanical deformation properties are outlined. In addition, an approach to make this numerical representation usable in an interactive simulation is discussed. The numerical simulation of hydrometra is shown in this communication, and the results were validated experimentally. In order to meet the real-time requirements and to accommodate the large biological variability associated with the uterus, a statistical modeling approach is demonstrated to be useful.

  4. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions

    PubMed Central

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884

  5. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    PubMed

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
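    A condensed sketch of this tracking-and-features pipeline, assuming OpenCV's pyramidal Lucas-Kanade optical flow and using the marker centroid as a stand-in for the face center; the seeding of the eight virtual markers and the KNN/PNN classification stage are omitted.

```python
# Sketch of virtual-marker tracking with Lucas-Kanade optical flow and
# the three summary statistics named above (illustrative).
import cv2
import numpy as np

def track_markers(frames, markers):
    """frames: list of 8-bit grayscale images; markers: (N,1,2) float32."""
    dists = []                        # marker-to-center distance per frame
    prev, pts = frames[0], markers.copy()
    for frame in frames[1:]:
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        # status flags could be used to drop lost markers (omitted here)
        p = pts.reshape(-1, 2)
        center = p.mean(axis=0)       # centroid as face-center proxy
        dists.append(np.linalg.norm(p - center, axis=1))
        prev = frame
    d = np.array(dists)               # shape: (n_frames - 1, n_markers)
    # mean, variance, and RMS of marker distance, per marker
    return np.concatenate([d.mean(0), d.var(0), np.sqrt((d ** 2).mean(0))])
```

The resulting feature vector would then be fed to a classifier such as K-nearest neighbor or a probabilistic neural network, as the abstract describes.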

  6. Foreign Language Vocabulary Development through Activities in an Online 3D Environment

    ERIC Educational Resources Information Center

    Milton, James; Jonsen, Sunniva; Hirst, Steven; Lindenburn, Sharn

    2012-01-01

    On-line virtual 3D worlds offer the opportunity for users to interact in real time with native speakers of the language they are learning. In principle, this ought to be of great benefit to learners, mimicking the opportunity for immersion that real-life travel to a foreign country offers. We have very little research to show whether this is…

  7. An efficient and scalable deformable model for virtual reality-based medical applications.

    PubMed

    Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann

    2004-09-01

    Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations, and considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for VR-based medical applications. It treats deformation as a localized force-transmittal process governed by algorithms based on breadth-first search (BFS). The computational cost is scalable, to facilitate real-time interaction, by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters using reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated, and the model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environments.
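    The following sketch shows the flavor of BFS-bounded force transmittal: the contact displacement is attenuated ring by ring over the mesh graph, and propagation stops at a configurable depth, which is what makes the cost adjustable. The geometric-falloff law here is an assumption for illustration.

```python
# Minimal sketch of BFS-bounded force transmittal on a mesh graph.
from collections import deque

def propagate(adjacency, contact, displacement, max_depth, falloff=0.5):
    """adjacency: dict node -> neighbor list; returns node -> displacement."""
    result = {contact: displacement}
    queue = deque([(contact, 0)])
    visited = {contact}
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:        # penetration depth caps the cost
            continue
        for nb in adjacency[node]:
            if nb not in visited:
                visited.add(nb)
                result[nb] = result[node] * falloff   # attenuate per ring
                queue.append((nb, depth + 1))
    return result

mesh = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(propagate(mesh, contact=0, displacement=1.0, max_depth=2))
# {0: 1.0, 1: 0.5, 2: 0.5, 3: 0.25}
```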

  8. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
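    For reference, the classic dynamic time-warping distance at the heart of such gesture recognition can be sketched in a few lines; the paper's automatic segmentation and real-time optimizations are not shown.

```python
# Classic O(n*m) dynamic time-warping distance between two sequences.
import numpy as np

def dtw_distance(a, b):
    """a, b: (n,d) and (m,d) sequences of sensor feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(sequence, templates):
    """templates: dict gesture_name -> recorded reference sequence."""
    return min(templates, key=lambda g: dtw_distance(sequence, templates[g]))
```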

  9. Interactive Virtual and Physical Manipulatives for Improving Students' Spatial Skills

    ERIC Educational Resources Information Center

    Ha, Oai; Fang, Ning

    2018-01-01

    An innovative educational technology called interactive virtual and physical manipulatives (VPM) is developed to improve students' spatial skills. With VPM technology, not only can students touch and play with real-world physical manipulatives in their hands, but they can also see how the corresponding virtual manipulatives (i.e., computer…

  10. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module, and an interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform that can reproduce the drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field are considered in this simulation. On this platform the oil-spilling process can be abstracted as the movement of abundant oil particles. The results show that the oil particles blend well with the water and that the platform meets the requirements for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
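    The particle abstraction can be sketched as a simple advection-diffusion update per time step; the coefficients below are illustrative placeholders rather than VV-Ocean's calibrated values.

```python
# Sketch of oil-spill drift as particle advection plus diffusion.
import numpy as np

def step_particles(pos, current, wind, dt=1.0, wind_factor=0.03, diff=0.5):
    """pos: (N,2) positions; current, wind: (N,2) local velocities [m/s]."""
    drift = (current + wind_factor * wind) * dt          # advection
    diffusion = np.random.normal(scale=np.sqrt(2 * diff * dt),
                                 size=pos.shape)         # random walk
    return pos + drift + diffusion

pos = np.zeros((10000, 2))                     # release point at origin
for _ in range(100):                           # 100 time steps
    cur = np.tile([0.2, 0.05], (len(pos), 1))  # uniform current field
    wnd = np.tile([5.0, 0.0], (len(pos), 1))   # uniform wind field
    pos = step_particles(pos, cur, wnd)
```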

  11. Comparative study on collaborative interaction in non-immersive and immersive systems

    NASA Astrophysics Data System (ADS)

    Shahab, Qonita M.; Kwon, Yong-Moo; Ko, Heedong; Mayangsari, Maria N.; Yamasaki, Shoko; Nishino, Hiroaki

    2007-09-01

    This research studies virtual reality simulation for collaborative interaction, so that different people in different places can interact with one object concurrently. Our focus is the real-time handling of inputs from multiple users, where an object's behavior is determined by the combination of the multiple inputs. The issues addressed in this research are: 1) the effects of using haptics on collaborative interaction, and 2) the possibilities of collaboration between users in different environments. We conducted user tests on our system in several cases: 1) comparison between non-haptic and haptic collaborative interaction over a LAN, 2) comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) analysis of collaborative interaction between non-immersive and immersive display environments. Two case studies are considered: collaborative authoring of a 3D model by two users, and collaborative haptic interaction by multiple users. In Virtual Dollhouse, users can observe the laws of physics while constructing a dollhouse from existing building blocks under gravity effects. In Virtual Stretcher, multiple users can collaborate on moving a stretcher together while feeling each other's haptic motions.

  12. Human-scale interaction for virtual model displays: a clear case for real tools

    NASA Astrophysics Data System (ADS)

    Williams, George C.; McDowall, Ian E.; Bolas, Mark T.

    1998-04-01

    We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch- sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.

  13. a Low-Cost and Lightweight 3d Interactive Real Estate-Purposed Indoor Virtual Reality Application

    NASA Astrophysics Data System (ADS)

    Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.

    2017-11-01

    Interactive 3D architectural indoor design has become more popular as it benefits from virtual reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments that they can directly modify. This opportunity enables buyers to purchase a property off the plan more cheaply through virtual models: instead of showing the property through 2D plans or renders, the visualized interior architecture of an unbuilt property on sale is demonstrated beforehand, so that investors have an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require a specialist to create such environments. In this study, we have created a real-estate-purposed, low-cost, high-quality, fully interactive VR application that provides a realistic interior architecture of the property by using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real-estate-purposed VR application and that it satisfied the expectations of property buyers.

  14. An experimental study on CHVE's performance evaluation.

    PubMed

    Paiva, Paulo V F; Machado, Liliane S; Oliveira, Jauvane C

    2012-01-01

    Virtual reality-based training simulators with collaborative capabilities are known to improve the way users interact with one another while learning or improving skills in a given medical procedure. Performance evaluation of Collaborative Haptic Virtual Environments (CHVEs) allows us to understand how such systems can work over the Internet, as well as the requirements for multisensorial and real-time data. This work discloses new performance evaluation results for the collaborative module of the CyberMed VR framework.

  15. The Role of Semantics in Next-Generation Online Virtual World-Based Retail Store

    NASA Astrophysics Data System (ADS)

    Sharma, Geetika; Anantaram, C.; Ghosh, Hiranmay

    Online virtual environments are increasingly popular for entrepreneurship. While interactions are primarily between avatars, some interactions can occur through intelligent chatbots. Such interactions require connecting to backend business applications to obtain information, carry out real-world transactions, and so on. In this paper, we focus on integrating business application systems with virtual worlds. We discuss the probable features of a next-generation online virtual world-based retail store and the technologies involved in realizing them. In particular, we examine the role of semantics in integrating popular virtual worlds with business applications to provide natural language-based interactions.

  16. The Adaptive Effects Of Virtual Interfaces: Vestibulo-Ocular Reflex and Simulator Sickness.

    DTIC Science & Technology

    1998-08-07

    Indexed glossary excerpt (truncations preserved): rearrangement: a pattern of stimulation differing from that existing as a result of normal interactions with the real world; stimulus rearrangements can… …is immersive and interactive. virtual interface: a system of transducers, signal processors, computer hardware and software that creates an interactive medium through which: 1) information is transmitted to the senses in the form of two- and three-dimensional virtual images, and 2) psychomotor…

  17. Virtual Environments Using Video Capture for Social Phobia with Psychosis

    PubMed Central

    White, Richard; Clarke, Timothy; Turner, Ruth; Fowler, David

    2013-01-01

    A novel virtual environment (VE) system was developed and used as an adjunct to cognitive behavior therapy (CBT) with six socially anxious patients recovering from psychosis. The novel aspect of the VE system is that it uses video capture, so the patients can see a life-size projection of themselves interacting with a specially scripted and digitally edited filmed environment played in real time on a screen in front of them. Within-session process outcomes (subjective units of distress and belief ratings on individual behavioral experiments), as well as patient feedback, generated the hypothesis that this type of virtual environment can potentially add value to CBT by helping patients understand the role of avoidance and safety behaviors in the maintenance of social anxiety and paranoia and by boosting their confidence to carry out “real-life” behavioral experiments. PMID:23659722

  18. Real Time Bicycle Simulation Study of Bicyclists’ Behaviors and their Implication on Safety

    DOT National Transportation Integrated Search

    2017-06-30

    The main goal of this study was to build a bicycle simulator and study the interaction between cyclists and other roadway users. The simulator developed was used in conjunction with Oculus Rift goggles to create a virtual cycling environment. The vir...

  19. A 3D virtual reality simulator for training of minimally invasive surgery.

    PubMed

    Mi, Shao-Hua; Hou, Zeng-Gunag; Yang, Fan; Xie, Xiao-Liang; Bian, Gui-Bin

    2014-01-01

    Over the last decade, remarkable progress has been made in the treatment of cardiovascular disease. However, these complex medical procedures require a combination of rich experience and technical skill. In this paper, a 3D virtual reality simulator for core skills training in minimally invasive surgery is presented. The system can generate realistic 3D vascular models segmented from patient datasets, including a beating heart, and provides real-time force computation and a force-feedback module for surgical simulation. Instruments such as a catheter or guide wire are represented by a multi-body mass-spring model. In addition, a realistic user interface with multiple windows and real-time 3D views has been developed. The simulator also provides a human-machine interaction module that gives doctors the sense of touch during surgical training and enables them to control the motion of a virtual catheter/guide wire inside a complex vascular model. Experimental results show that the simulator is suitable for minimally invasive surgery training.
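    A toy version of such an instrument model, a chain of point masses joined by linear springs integrated with explicit Euler, is sketched below; the simulator's multi-body model and haptic coupling are, of course, more elaborate.

```python
# Toy mass-spring chain standing in for a guide wire (illustrative).
import numpy as np

N, REST, K, DAMP, DT = 20, 1.0, 200.0, 2.0, 0.001
pos = np.cumsum(np.tile([REST, 0.0, 0.0], (N, 1)), axis=0)  # straight chain
vel = np.zeros_like(pos)

def step(pos, vel, tip_target):
    force = np.zeros_like(pos)
    for i in range(N - 1):                    # spring between nodes i, i+1
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = K * (length - REST) * d / length  # Hooke's law along the edge
        force[i] += f
        force[i + 1] -= f
    force -= DAMP * vel                       # viscous damping
    vel = vel + force * DT                    # explicit Euler integration
    pos = pos + vel * DT
    pos[0] = tip_target                       # proximal end follows the tool
    return pos, vel

for t in range(1000):
    target = np.array([0.0, 0.5 * np.sin(t * DT), 0.0])
    pos, vel = step(pos, vel, target)
```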

  20. A Proposed Treatment for Visual Field Loss caused by Traumatic Brain Injury using Interactive Visuotactile Virtual Environment

    NASA Astrophysics Data System (ADS)

    Farkas, Attila J.; Hajnal, Alen; Shiratuddin, Mohd F.; Szatmary, Gabriella

    In this paper, we propose a novel approach of using interactive virtual environment technology in vision restoration therapy for visual field loss caused by traumatic brain injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before any modest improvement can be seen in patients. A highly immersive and interactive virtual environment allows the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation: increased stimulation of the visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye, and body movements and coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.

  1. Interreality: A New Paradigm for E-health.

    PubMed

    Riva, Giuseppe

    2009-01-01

    "Interreality" is a personalized immersive e-therapy whose main novelty is a hybrid, closed-loop empowering experience bridging physical and virtual worlds. The main feature of interreality is a twofold link between the virtual and the real world: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through: (1) 3D Shared Virtual Worlds: role-playing experiences in which one or more users interact with one another within a 3D world; (2) Bio and Activity Sensors (From the Real to the Virtual World): They are used to track the emotional/health/activity status of the user and to influence his/her experience in the virtual world (aspect, activity and access); (3) Mobile Internet Appliances (From the Virtual to the Real One): In interreality, the social and individual user activity in the virtual world has a direct link with the users' life through a mobile phone/digital assistant. The different technologies that are involved in the interreality vision and its clinical rationale are addressed and discussed.

  2. Evaluation of the cognitive effects of travel technique in complex real and virtual environments.

    PubMed

    Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F

    2010-01-01

    We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.

  3. Virtual Reality for Enhanced Ecological Validity and Experimental Control in the Clinical, Affective and Social Neurosciences

    PubMed Central

    Parsons, Thomas D.

    2015-01-01

    An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869

  4. Virtual Reality for Enhanced Ecological Validity and Experimental Control in the Clinical, Affective and Social Neurosciences.

    PubMed

    Parsons, Thomas D

    2015-01-01

    An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target's internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences.

  5. IMMERSE: Interactive Mentoring for Multimodal Experiences in Realistic Social Encounters

    DTIC Science & Technology

    2015-08-28

    Indexed excerpt from the report's table of contents: 9. Interaction with Virtual Characters; 9.1 Player Locomotion; 9.2 Interacting with Real and Virtual Objects; 9.3 Animation Combinations and Stage Management; 10. Recommendations on the Way Ahead.

  6. Virtual Titrator: A Student-Oriented Instrument.

    ERIC Educational Resources Information Center

    Ritter, David; Johnson, Michael

    1997-01-01

    Describes a titrator system, constructed from a computer-interfaced pH-meter, that was designed to increase student involvement in the process. Combines automatic data collection with real-time graphical display and interactive controls to focus attention on the process rather than on bits of data. Improves understanding of concepts and…

  7. Importance of Matching Physical Friction, Hardness, and Texture in Creating Realistic Haptic Virtual Surfaces.

    PubMed

    Culbertson, Heather; Kuchenbecker, Katherine J

    2017-01-01

    Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
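    A schematic of how the three model components might combine into a single rendered force, with Coulomb-like friction, an exponentially decaying tapping transient, and a velocity-scaled texture term; all constants are illustrative assumptions, not the recorded model parameters.

```python
# Schematic combination of friction, tapping, and texture terms.
import numpy as np

def haptic_force(normal_force, tangential_vel, t_since_impact, texture_wave,
                 mu=0.3, transient_amp=2.0, transient_decay=80.0):
    # Coulomb-like kinetic friction opposing tangential motion
    friction = -mu * normal_force * np.sign(tangential_vel)
    # decaying sinusoid fired at impact, scaled by contact force
    tapping = (transient_amp * normal_force
               * np.exp(-transient_decay * t_since_impact)
               * np.cos(2 * np.pi * 300 * t_since_impact))  # ~300 Hz ring
    # texture vibration grows with sliding speed
    texture = texture_wave(t_since_impact) * abs(tangential_vel)
    return friction + tapping + texture

ripple = lambda t: 0.05 * np.sin(2 * np.pi * 120 * t)  # toy texture model
f = haptic_force(1.5, 0.02, 0.004, ripple)
```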

  8. Development and Use of a Virtual NMR Facility

    NASA Astrophysics Data System (ADS)

    Keating, Kelly A.; Myers, James D.; Pelton, Jeffrey G.; Bair, Raymond A.; Wemmer, David E.; Ellis, Paul D.

    2000-03-01

    We have developed a "virtual NMR facility" (VNMRF) to enhance access to the NMR spectrometers in Pacific Northwest National Laboratory's Environmental Molecular Sciences Laboratory (EMSL). We use the term virtual facility to describe a real NMR facility made accessible via the Internet. The VNMRF combines secure remote operation of the EMSL's NMR spectrometers over the Internet with real-time videoconferencing, remotely controlled laboratory cameras, real-time computer display sharing, a Web-based electronic laboratory notebook, and other capabilities. Remote VNMRF users can see and converse with EMSL researchers, directly and securely control the EMSL spectrometers, and collaboratively analyze results. A customized Electronic Laboratory Notebook allows interactive Web-based access to group notes, experimental parameters, proposed molecular structures, and other aspects of a research project. This paper describes our experience developing a VNMRF and details the specific capabilities available through the EMSL VNMRF. We show how the VNMRF has evolved during a test project and present an evaluation of its impact in the EMSL and its potential as a model for other scientific facilities. All Collaboratory software used in the VNMRF is freely available from http://www.emsl.pnl.gov:2080/docs/collab.

  9. A Virtual Education: Guidelines for Using Games Technology

    ERIC Educational Resources Information Center

    Schofield, Damian

    2014-01-01

    Advanced three-dimensional virtual environment technology, similar to that used by the film and computer games industry, can allow educational developers to rapidly create realistic online virtual environments. This technology has been used to generate a range of interactive Virtual Reality (VR) learning environments across a spectrum of…

  10. Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-06-01

    This paper presents a hybrid character control interface that can synthesize a variety of actions in real time based on the user's performance capture. The proposed methodology enables three different performance interaction modules: the performance animation control, which directly maps the user's pose to the character; the motion controller, which synthesizes the desired motion of the character based on an activity recognition methodology; and the hybrid control, which lies between the performance animation and the motion controller. With the methodology presented, the user has the freedom to interact within the virtual environment, as well as the ability to manipulate the character and to synthesize a variety of actions that cannot be performed directly by him/her but which the system synthesizes. Therefore, the user is able to interact with the virtual environment in a more sophisticated fashion. This paper presents examples of different scenarios based on the three full-body character control methodologies.

  11. Interactive voxel graphics in virtual reality

    NASA Astrophysics Data System (ADS)

    Brody, Bill; Chappell, Glenn G.; Hartman, Chris

    2002-06-01

    Interactive voxel graphics in virtual reality poses significant research challenges in terms of interface, file I/O, and real-time algorithms. Voxel graphics is not so new, as it is the focus of a good deal of scientific visualization. Interactive voxel creation and manipulation is a more innovative concept. Scientists are understandably reluctant to manipulate data. They collect or model data. A scientific analogy to interactive graphics is the generation of initial conditions for some model. It is used as a method to test those models. We, however, are in the business of creating new data in the form of graphical imagery. In our endeavor, science is a tool and not an end. Nevertheless, there is a whole class of interactions and associated data generation scenarios that are natural to our way of working and that are also appropriate to scientific inquiry. Annotation by sketching or painting to point to and distinguish interesting and important information is very significant for science as well as art. Annotation in 3D is difficult without a good 3D interface. Interactive graphics in virtual reality is an appropriate approach to this problem.

  12. Validation of virtual reality as a tool to understand and prevent child pedestrian injury.

    PubMed

    Schwebel, David C; Gaines, Joanna; Severson, Joan

    2008-07-01

    In recent years, virtual reality has emerged as an innovative tool for health-related education and training. Among the many benefits of virtual reality is the opportunity for novice users to engage unsupervised in a safe environment when the real environment might be dangerous. Virtual environments are only useful for health-related research, however, if behavior in the virtual world validly matches behavior in the real world. This study was designed to test the validity of an immersive, interactive virtual pedestrian environment. A sample of 102 children and 74 adults was recruited to complete simulated road-crossings in both the virtual environment and the identical real environment. In both the child and adult samples, construct validity was demonstrated via significant correlations between behavior in the virtual and real worlds. Results also indicate construct validity through developmental differences in behavior; convergent validity by showing correlations between parent-reported child temperament and behavior in the virtual world; internal reliability of various measures of pedestrian safety in the virtual world; and face validity, as measured by users' self-reported perception of realism in the virtual world. We discuss issues of generalizability to other virtual environments, and the implications for application of virtual reality to understanding and preventing pediatric pedestrian injuries.

  13. Intelligent Motion and Interaction Within Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor); Slater, Mel (Editor); Alexander, Thomas (Editor)

    2007-01-01

    What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the London conference on Intelligent Motion and Interaction within Virtual Environments, which was held at University College London, U.K., 15-17 September 2003.

  14. Design of an efficient framework for fast prototyping of customized human-computer interfaces and virtual environments for rehabilitation.

    PubMed

    Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe

    2013-06-01

    Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
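    The "fast prototyping" idea can be suggested with a small sketch: an exercise is described as data (a target landmark plus a tolerance) so a therapist can change it without touching the recognition code. All names and thresholds below are invented for illustration; they are not the framework's API.

```python
import math

def make_exercise(target, tolerance_m=0.05):
    """Return a checker for one rehabilitation exercise."""
    def reached(landmark):
        return math.dist(landmark, target) <= tolerance_m
    return reached

check_reach = make_exercise(target=(0.4, 1.2, 0.3))  # e.g. a right-hand target
for hand_pos in [(0.1, 1.0, 0.2), (0.39, 1.18, 0.31)]:  # streamed landmarks
    print(hand_pos, "->", "done" if check_reach(hand_pos) else "keep going")
```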

  15. Virtual Worlds: Relationship between Real Life and Experience in Second Life

    ERIC Educational Resources Information Center

    Anstadt, Scott P.; Bradley, Shannon; Burnette, Ashley; Medley, Lesley L.

    2013-01-01

    Due to the unique applications of virtual reality in many modern contexts, Second Life (SL) offers inimitable opportunities for research and exploration and experiential learning as part of a distance learning curriculum assignment. A review of current research regarding SL examined real world social influences in online interactions and what the…

  16. Emerging technology in surgical education: combining real-time augmented reality and wearable computing devices.

    PubMed

    Ponce, Brent A; Menendez, Mariano E; Oladeji, Lasun O; Fryberger, Charles T; Dantuluri, Phani K

    2014-11-01

    The authors describe the first surgical case adopting the combination of real-time augmented reality and wearable computing devices such as Google Glass (Google Inc, Mountain View, California). A 66-year-old man presented to their institution for a total shoulder replacement after 5 years of progressive right shoulder pain and decreased range of motion. Throughout the surgical procedure, Google Glass was integrated with the Virtual Interactive Presence and Augmented Reality system (University of Alabama at Birmingham, Birmingham, Alabama), enabling the local surgeon to interact with the remote surgeon within the local surgical field. Surgery was well tolerated by the patient and early surgical results were encouraging, with an improvement of shoulder pain and greater range of motion. The combination of real-time augmented reality and wearable computing devices such as Google Glass holds much promise in the field of surgery. Copyright 2014, SLACK Incorporated.

  17. Fire training in a virtual-reality environment

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Jurgen; Bucken, Arno

    2005-03-01

    Although fire is very common in our daily environment - as a source of energy at home or as a tool in industry - most people cannot estimate the danger of a conflagration. Therefore it is important to train people in combating fire. Besides training with propane simulators or real fires and real extinguishers, fire training can be performed in virtual reality, which offers a pollution-free and fast way of training. In this paper we describe how to enhance a virtual-reality environment with a real-time fire simulation and visualisation in order to establish a realistic emergency-training system. The presented approach supports extinguishing of the virtual fire, including recordable performance data as needed in teletraining environments. We show how to get realistic impressions of fire using advanced particle simulation and how to use the advantages of particles to trigger states in a modified cellular automaton used to simulate fire behaviour. Using particle systems that interact with a cellular automaton, it is possible to simulate a developing, spreading fire and its reaction to different extinguishing agents such as water, CO2 or oxygen. The methods proposed in this paper have been implemented and successfully tested on Cosimir, a commercial robot and VR simulation system.
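    A deliberately minimal sketch of the particle/automaton coupling described above: a grid of fuel cells, a 4-neighbour spreading rule, and ignition where a (here, hard-coded) hot particle lands. The states and rules are illustrative stand-ins for the paper's far richer model; extinguishing agents could be modeled analogously by forcing cells back to a non-burning state.

```python
import numpy as np

EMPTY, FUEL, BURNING, BURNT = 0, 1, 2, 3
grid = np.full((20, 20), FUEL, dtype=np.uint8)

def step(grid):
    """One automaton update: burning cells ignite 4-neighbour fuel cells."""
    new = grid.copy()
    for r, c in np.argwhere(grid == BURNING):
        new[r, c] = BURNT
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
                if grid[rr, cc] == FUEL:
                    new[rr, cc] = BURNING
    return new

grid[10, 10] = BURNING            # a hot particle touched this cell
for _ in range(5):
    grid = step(grid)
print(int((grid == BURNING).sum()), "cells burning after 5 steps")
```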

  18. Development of a virtual speaking simulator using Image Based Rendering.

    PubMed

    Lee, J M; Kim, H; Oh, M J; Ku, J H; Jang, D P; Kim, I Y; Kim, S I

    2002-01-01

    The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled the use of virtual reality (VR) for the treatment of the fear of public speaking. There are two techniques for building virtual environments for the treatment of this fear: a model-based and a movie-based method. Both methods have the weakness that they are unrealistic and cannot be controlled individually. To overcome these disadvantages, this paper presents a virtual environment produced with Image Based Rendering (IBR) and chroma-keying simultaneously. IBR enables the creation of realistic virtual environments in which photos taken with a digital camera are stitched panoramically. The use of chroma-keying puts virtual audience members under individual control within the environment. In addition, a real-time capture technique is used in constructing the virtual environments, enabling spoken interaction between the subject and a therapist or another subject.
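    The chroma-key step itself is simple to sketch: pixels of the audience footage that are close to the key colour are replaced by the IBR panorama behind them. The key colour and threshold below are arbitrary illustrative choices, not the paper's values.

```python
import numpy as np

def chroma_key(foreground, background, key=(0, 255, 0), threshold=80):
    """Composite: where foreground is near `key`, show background instead."""
    fg = foreground.astype(np.int16)
    dist = np.linalg.norm(fg - np.array(key, dtype=np.int16), axis=-1)
    mask = dist < threshold                    # True where the screen shows
    out = foreground.copy()
    out[mask] = background[mask]
    return out

# Toy 1x2 "images": the green-screen pixel gets replaced, the other stays.
fg = np.array([[[0, 250, 5], [200, 30, 30]]], dtype=np.uint8)
bg = np.array([[[10, 10, 10], [10, 10, 10]]], dtype=np.uint8)
print(chroma_key(fg, bg))
```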

  19. Interactive Plasma Physics Education Using Data from Fusion Experiments

    NASA Astrophysics Data System (ADS)

    Calderon, Brisa; Davis, Bill; Zwicker, Andrew

    2010-11-01

    The Internet Plasma Physics Education Experience (IPPEX) website was created in 1996 to give users access to data from plasma and fusion experiments. Interactive material on electricity, magnetism, matter, and energy was presented to generate interest and prepare users to understand data from a fusion experiment. Initially, users were allowed to analyze real-time and archival data from the Tokamak Fusion Test Reactor (TFTR) experiment. IPPEX won numerous awards for its novel approach of allowing users to participate in ongoing research. However, the latest revisions of IPPEX were made in 2001, and the interactive material is no longer functional on modern browsers. Also, access to real-time data was lost when TFTR was shut down. The interactive material on IPPEX is being rewritten in ActionScript 3.0, and real-time and archival data from the National Spherical Torus Experiment (NSTX) will be made available to users. New tools like EFIT animations, fast cameras, and plots of important plasma parameters will be included along with an existing Java-based "virtual tokamak." Screenshots from the upgraded website and future directions will be presented.

  20. V-Man Generation for 3-D Real Time Animation. Chapter 5

    NASA Technical Reports Server (NTRS)

    Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang

    2007-01-01

    The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon the unique set of skills established during character creation. The key to the system is the automated creation of realistic V-Men without requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate, firstly, animatable body meshes, secondly, 3D garments and, finally, skinned body meshes.

  1. A comparison of older adults' subjective experiences with virtual and real environments during dynamic balance activities.

    PubMed

    Proffitt, Rachel; Lange, Belinda; Chen, Christina; Winstein, Carolee

    2015-01-01

    The purpose of this study was to explore the subjective experience of older adults interacting with both virtual and real environments. Thirty healthy older adults engaged with real and virtual tasks of similar motor demands: reaching to a target in standing and stepping stance. Immersive tendencies and absorption scales were administered before the session. Game engagement and experience questionnaires were completed after each task, followed by a semistructured interview at the end of the testing session. Data were analyzed respectively using paired t tests and grounded theory methodology. Participants preferred the virtual task over the real task. They also reported an increase in presence and absorption with the virtual task, describing an external focus of attention. Findings will be used to inform future development of appropriate game-based balance training applications that could be embedded in the home or community settings as part of evidence-based fall prevention programs.

  2. [Tumor Data Interacted System Design Based on Grid Platform].

    PubMed

    Liu, Ying; Cao, Jiaji; Zhang, Haowei; Zhang, Ke

    2016-06-01

    In order to satisfy the demands of massive, heterogeneous tumor clinical data processing and of multi-center collaborative diagnosis and treatment of tumor diseases, a Tumor Data Interacted System (TDIS) was established on a grid platform, realizing a virtualized platform for tumor diagnosis services that shares tumor information in real time and manages it in a standardized way. The system adopts Globus Toolkit 4.0 tools to build an open grid service framework and encapsulates data resources based on the Web Services Resource Framework (WSRF). It uses middleware technology to provide a unified access interface for heterogeneous data interaction, which optimizes the interactive process with virtualized services to query and call tumor information resources flexibly. For massive amounts of heterogeneous tumor data, a federated-storage, multiple-authorization mode is selected as the security service mechanism, with real-time monitoring and load balancing. The system can cooperatively manage multi-center heterogeneous tumor data to realize tumor patient data query, sharing and analysis, and can compare and match resources in a typical clinical database or in the clinical information databases of other service nodes; thus it can assist doctors in consulting similar cases and drawing up multidisciplinary treatment plans for tumors. Consequently, the system can improve the efficiency of tumor diagnosis and treatment, and promote the development of the collaborative tumor diagnosis model.

  3. The development of a collaborative virtual environment for finite element simulation

    NASA Astrophysics Data System (ADS)

    Abdul-Jalil, Mohamad Kasim

    Communication between geographically distributed designers has been a major hurdle in traditional engineering design. Conventional methods of communication, such as video conferencing, telephone, and email, are less efficient, especially when dealing with complex design models. Complex shapes, intricate features and hidden parts are often difficult to describe verbally or even using traditional 2-D or 3-D visual representations. Virtual Reality (VR) and Internet technologies have substantial potential to bridge this communication barrier. VR technology allows designers to immerse themselves in a virtual environment to view and manipulate a model just as in real life. Fast Internet connectivity has enabled fast data transfer between remote locations. Although various collaborative virtual environment (CVE) systems have been developed in the past decade, they are limited to high-end technology that is not accessible to typical designers. The objective of this dissertation is to discover and develop a new approach to increase the efficiency of the design process, particularly for large-scale applications wherein participants are geographically distributed. A multi-platform and easily accessible collaborative virtual environment (CVRoom) is developed to accomplish the stated research objective. Geographically dispersed designers can meet in a single shared virtual environment to discuss issues pertaining to the engineering design process and to make trade-off decisions more quickly than before, thereby speeding the entire process. This faster design process is achieved through the development of capabilities that better enable multidisciplinary modeling and the trade-off decisions that are so critical before launching into a formal detailed design. The features of the environment developed as a result of this research include the ability to view design models, use voice interaction, and link engineering analysis modules (such as the Finite Element Analysis module demonstrated in this work). One of the major issues in developing a CVE system for engineering design purposes is obtaining pertinent simulation results in real time. This is critical so that designers can make decisions based on these results quickly. For example, in a finite element analysis, if a design model is changed or perturbed, the analysis results must be obtained in real time or near real time to make the virtual meeting environment realistic. In this research, the finite difference-based Design Sensitivity Analysis (DSA) approach is employed to approximate structural responses (e.g., stress, displacement), so as to demonstrate the applicability of CVRoom for engineering design trade-offs. This DSA approach provides fast approximation and is well suited to the virtual meeting environment, where fast response time is required. The DSA-based approach is tested on several example problems to show its applicability and limitations. This dissertation demonstrates that an increase in efficiency and a reduction of the time required for a complex design process can be accomplished using the approach developed in this dissertation research. Several implementations of CVRoom by students working on common design tasks were investigated. All participants confirmed a preference for using the collaborative virtual environment developed in this dissertation work (CVRoom) over other modes of interaction. It is proposed here that CVRoom is representative of the type of collaborative virtual environment that will be used by most designers in the future to reduce the time required in a design cycle and thereby reduce the associated cost.
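    The finite-difference DSA idea reduces to a few lines: run the expensive analysis once at a baseline design, estimate sensitivities by perturbing each design variable, and answer later what-if queries with a first-order Taylor expansion. The toy `fe_solve` below stands in for an arbitrary, expensive finite element solver; everything here is an illustrative assumption, not the dissertation's code.

```python
import numpy as np

def fe_solve(x):
    """Placeholder for an expensive FE response, e.g. peak stress."""
    return x[0] ** 2 + 3.0 * x[1]            # toy smooth response

def build_linear_model(x0, h=1e-4):
    """Precompute response and finite-difference sensitivities at x0."""
    f0 = fe_solve(x0)
    grad = np.zeros_like(x0)
    for i in range(len(x0)):
        xp = x0.copy()
        xp[i] += h
        grad[i] = (fe_solve(xp) - f0) / h     # forward difference
    return lambda x: f0 + grad @ (x - x0)     # fast first-order approximation

x0 = np.array([1.0, 2.0])
approx = build_linear_model(x0)               # expensive, done once
x_new = np.array([1.1, 2.2])                  # a designer's perturbed design
print("approx:", approx(x_new), " exact:", fe_solve(x_new))
```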

  4. Building interactive virtual environments for simulated training in medicine using VRML and Java/JavaScript.

    PubMed

    Korocsec, D; Holobar, A; Divjak, M; Zazula, D

    2005-12-01

    Medicine is a difficult thing to learn. Experimenting with real patients should not be the only option; simulation deserves special attention here. Virtual Reality Modelling Language (VRML), as a tool for building virtual objects and scenes, has a good record of educational applications in medicine, especially for static and animated visualisations of body parts and organs. However, the level of interactivity and dynamics required to create computer simulations resembling situations in real environments is difficult to achieve. In the present paper we describe some approaches and techniques which we used to push the limits of the current VRML technology further toward dynamic 3D representation of virtual environments (VEs). Our demonstration is based on the implementation of a virtual baby model whose vital signs can be controlled from an external Java application. The main contributions of this work are: (a) an outline and evaluation of the three-level VRML/Java implementation of the dynamic virtual environment, (b) a proposal for a modified VRML TimeSensor node, which greatly improves the overall control of system performance, and (c) the architecture of a prototype distributed virtual environment for training in neonatal resuscitation, comprising the interactive virtual newborn, an active bedside monitor for vital signs and a full 3D representation of the surgery room.

  5. Real-Time Aerodynamic Flow and Data Visualization in an Interactive Virtual Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2005-01-01

    Significant advances have been made in non-intrusive flow field diagnostics in the past decade. Camera-based techniques are now capable of determining physical quantities such as surface deformation, surface pressure and temperature, flow velocities, and molecular species concentration. In each case, extracting the pertinent information from the large volume of acquired data requires powerful and efficient data visualization tools. The additional requirement for real-time visualization is fueled by an increased emphasis on minimizing test time in expensive facilities. This paper addresses a capability titled LiveView3D, which is the first step in the development of an in-depth, real-time data visualization and analysis tool for use in aerospace testing facilities.

  6. Virtual Reality for Research in Social Neuroscience

    PubMed Central

    Parsons, Thomas D.; Gaggioli, Andrea; Riva, Giuseppe

    2017-01-01

    The emergence of social neuroscience has significantly advanced our understanding of the relationship that exists between social processes and their neurobiological underpinnings. Social neuroscience research often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and social interactions. Whilst this research has merit, there is a growing interest in the presentation of dynamic stimuli in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Herein, we discuss the potential of virtual reality for enhancing ecological validity while maintaining experimental control in social neuroscience research. Virtual reality is a technology that allows for the creation of fully interactive, three-dimensional computerized models of social situations that can be fully controlled by the experimenter. Furthermore, the introduction of interactive virtual characters—either driven by a human or by a computer—allows the researcher to test, in a systematic and independent manner, the effects of various social cues. We first introduce key technical features and concepts related to virtual reality. Next, we discuss the potential of this technology for enhancing social neuroscience protocols, drawing on illustrative experiments from the literature. PMID:28420150

  7. Coarse-Grained Model for Water Involving a Virtual Site.

    PubMed

    Deng, Mingsen; Shen, Hujun

    2016-02-04

    In this work, we propose a new coarse-grained (CG) model for water by combining the features of two popular CG water models (the BMW and MARTINI models) and by adopting a topology similar to that of the TIP4P water model. In this CG model, a CG unit, representing four real water molecules, consists of a virtual site, two positively charged particles, and a van der Waals (vdW) interaction center. A distance constraint is applied to the bonds formed between the vdW interaction center and the positively charged particles. The virtual site, which carries a negative charge, is determined by the locations of the two positively charged particles and the vdW interaction center. For the new CG model of water, we coined the name "CAVS" (charge is attached to a virtual site) due to the involvement of the virtual site. After being tested in molecular dynamics (MD) simulations of bulk water at various time steps, under different temperatures and at different salt (NaCl) concentrations, the CAVS model offers encouraging predictions for some bulk properties of water (such as density and dielectric constant) when compared to experimental ones.
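    The geometry of such a virtual site can be illustrated with a TIP4P-style placement rule, assuming (consistent with the abstract, though the exact parameters are not given here) that the massless negative site is a fixed linear combination of the vdW centre and the two positive particles; the weight `a` below is an invented example value.

```python
import numpy as np

def virtual_site(vdw, pos1, pos2, a=0.15):
    """Place the charge site a fraction `a` of the way from the vdW centre
    toward the midpoint of the two positive particles (massless site)."""
    midpoint = 0.5 * (pos1 + pos2)
    return vdw + a * (midpoint - vdw)

vdw  = np.array([0.0, 0.0, 0.0])
pos1 = np.array([0.8, 0.6, 0.0])
pos2 = np.array([-0.8, 0.6, 0.0])
print(virtual_site(vdw, pos1, pos2))   # -> [0.   0.09 0.  ]
```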

  8. Virtual Reality for Research in Social Neuroscience.

    PubMed

    Parsons, Thomas D; Gaggioli, Andrea; Riva, Giuseppe

    2017-04-16

    The emergence of social neuroscience has significantly advanced our understanding of the relationship that exists between social processes and their neurobiological underpinnings. Social neuroscience research often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and social interactions. Whilst this research has merit, there is a growing interest in the presentation of dynamic stimuli in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Herein, we discuss the potential of virtual reality for enhancing ecological validity while maintaining experimental control in social neuroscience research. Virtual reality is a technology that allows for the creation of fully interactive, three-dimensional computerized models of social situations that can be fully controlled by the experimenter. Furthermore, the introduction of interactive virtual characters-either driven by a human or by a computer-allows the researcher to test, in a systematic and independent manner, the effects of various social cues. We first introduce key technical features and concepts related to virtual reality. Next, we discuss the potential of this technology for enhancing social neuroscience protocols, drawing on illustrative experiments from the literature.

  9. [Parallel virtual reality visualization of extreme large medical datasets].

    PubMed

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with hospital intranets and common-configuration computers. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through the control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can play the role of a good assistant in making clinical diagnoses.
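    Slab-parallel MIP is straightforward to sketch: split the volume along the projection axis, let each worker compute a partial per-pixel maximum, and reduce the partials with a final maximum. The sketch below uses Python's multiprocessing on random data as a stand-in for the PC cluster and CT volume; it is illustrative, not the paper's implementation.

```python
import numpy as np
from multiprocessing import Pool

def partial_mip(slab):
    """MIP of one z-slab: per-pixel maximum along the projection axis."""
    return slab.max(axis=0)

if __name__ == "__main__":
    volume = np.random.rand(64, 256, 256)        # stand-in for CT data
    slabs = np.array_split(volume, 4, axis=0)    # one slab per worker
    with Pool(4) as pool:
        partials = pool.map(partial_mip, slabs)  # parallel partial MIPs
    mip = np.maximum.reduce(partials)            # final reduction
    print(mip.shape)                             # (256, 256)
```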

  10. Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments.

    PubMed

    Rincon, J A; Poza-Lujan, Jose-Luis; Julian, V; Posadas-Yagüe, Juan-Luis; Carrascosa, C

    2016-01-01

    This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a real and a virtual world coupled, so that the virtual world not only reflects the real one, but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real-environment interactive elements (smart resource artifacts) in agent-based simulations makes it possible to minimize the error between the simulated and the real system.

  11. Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments

    PubMed Central

    2016-01-01

    This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a real and a virtual world coupled, so that the virtual world not only reflects the real one, but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real-environment interactive elements (smart resource artifacts) in agent-based simulations makes it possible to minimize the error between the simulated and the real system. PMID:26926691

  12. An Interactive Logistics Centre Information Integration System Using Virtual Reality

    NASA Astrophysics Data System (ADS)

    Hong, S.; Mao, B.

    2018-04-01

    The logistics industry plays a very important role in the operation of modern cities. Meanwhile, the development of the logistics industry has given rise to various problems that urgently need to be solved, such as the safety of logistics products. This paper combines the study of logistics traceability and logistics centre environment safety supervision with virtual reality technology to create an interactive logistics centre information integration system. The proposed system utilizes the immersive characteristics of virtual reality to simulate a real logistics centre scene distinctly, allowing operating staff to conduct safety supervision training at any time, without regional restrictions. On the one hand, large amounts of sensor data can be used to simulate a variety of disaster emergency situations. On the other hand, personnel operation data are collected to analyse improper operations, which can greatly improve training efficiency.

  13. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2009-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.

  14. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2010-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.

  15. An Interactive Cultural Transect: Designing, Deploying, and Evaluating an Online Virtual-Abroad Learning Experience

    ERIC Educational Resources Information Center

    Peters, Phil; Katsaros, Alex; Howard, Rosalyn; Lindgren, Robb

    2012-01-01

    This pilot project conducted by researchers from the University of Central Florida (UCF) seeks to answer the question: Does a real-time, two-way, mobile, remote webcasting system have special properties for learning compared with traditional distance learning platforms? Students enrolled in two online, undergraduate UCF courses explored South…

  16. Methods and systems relating to an augmented virtuality environment

    DOEpatents

    Nielsen, Curtis W; Anderson, Matthew O; McKay, Mark D; Wadsworth, Derek C; Boyce, Jodie R; Hruska, Ryan C; Koudelka, John A; Whetten, Jonathan; Bruemmer, David J

    2014-05-20

    Systems and methods relating to an augmented virtuality system are disclosed. A method of operating an augmented virtuality system may comprise displaying imagery of a real-world environment in an operating picture. The method may further include displaying a plurality of virtual icons in the operating picture representing at least some assets of a plurality of assets positioned in the real-world environment. Additionally, the method may include displaying at least one virtual item in the operating picture representing data sensed by one or more of the assets of the plurality of assets and remotely controlling at least one asset of the plurality of assets by interacting with a virtual icon associated with the at least one asset.

  17. Increasing Accessibility to the Blind of Virtual Environments, Using a Virtual Mobility Aid Based On the "EyeCane": Feasibility Study

    PubMed Central

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel-Robert; Amedi, Amir

    2013-01-01

    Virtual worlds and environments are becoming an increasingly central part of our lives, yet they are still far from accessible to the blind. This is especially unfortunate, as such environments hold great potential for the blind, for uses such as social interaction, online education and, especially, familiarizing the visually impaired user virtually with a real environment, from the comfort and safety of his own home, before visiting it in the real world. We have implemented a simple algorithm to improve this situation using single-point depth information, enabling the blind to use a virtual cane, modeled on the “EyeCane” electronic travel aid, within any virtual environment with minimal pre-processing. Use of the Virtual-EyeCane enables this experience to potentially be used later in real-world environments with stimuli identical to those from the virtual environment. We show the quickly learned, practical use of this algorithm for navigation in simple environments. PMID:23977316
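    The single-point depth principle can be sketched in a few lines, assuming a trivially simple one-dimensional scene: cast one ray from the virtual cane, take the first hit distance, and map it to a cue rate, closer meaning faster, in the way EyeCane-style aids map distance to vibration or beep frequency. The scene model and mapping constants are illustrative assumptions, not the authors' algorithm.

```python
def raycast(origin, direction, walls):
    """Distance to the nearest wall along the +x direction, or None."""
    hits = [w - origin for w in walls if direction > 0 and w > origin]
    return min(hits) if hits else None

def cue_rate_hz(distance, max_range=5.0):
    """Closer obstacles -> faster cues; silent beyond max_range."""
    if distance is None or distance > max_range:
        return 0.0
    return 2.0 + 10.0 * (1.0 - distance / max_range)

d = raycast(origin=1.0, direction=1, walls=[0.5, 3.0, 8.0])
print(f"obstacle at {d} m -> cue at {cue_rate_hz(d):.1f} Hz")
```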

  18. The Influences of the 2D Image-Based Augmented Reality and Virtual Reality on Student Learning

    ERIC Educational Resources Information Center

    Liou, Hsin-Hun; Yang, Stephen J. H.; Chen, Sherry Y.; Tarng, Wernhuar

    2017-01-01

    Virtual reality (VR) learning environments can provide students with concepts of the simulated phenomena, but users are not allowed to interact with real elements. Conversely, augmented reality (AR) learning environments blend real-world environments so AR could enhance the effects of computer simulation and promote students' realistic experience.…

  19. Virtual Events: A Cyberspace Resource for Educators.

    ERIC Educational Resources Information Center

    McLellan, Hilary

    1998-01-01

    Discusses how virtual events can be used to enhance education. Topics include balancing virtual and real encounters; finding the best mix of communication options; and finding patterns of interaction that support reflective cognition, knowledge amplification, community-building, learning, and global understanding. GLOBENET 1997, an international…

  20. Exploration of Metaphorical and Contextual Affect Sensing in a Virtual Improvisational Drama

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    Real-time affect detection from open-ended text-based dialogue is challenging but essential for the building of effective intelligent user interfaces. In this paper, we report updated developments of an affect detection model from text, including affect detection from one particular type of metaphorical affective expression (cooking metaphor) and affect detection based on context. The overall affect detection model has been embedded in an intelligent conversational AI agent interacting with human users under loose scenarios. Evaluation for the updated affect detection component is also provided. Our work contributes to the conference themes on engagement and emotion, interactions in games, storytelling and narrative in education, and virtual characters/agents development.

  1. Environments for online maritime simulators with cloud computing capabilities

    NASA Astrophysics Data System (ADS)

    Raicu, Gabriel; Raicu, Alexandra

    2016-12-01

    This paper presents cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near-real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts, coupled with the latest achievements in virtual and augmented reality, will enhance the overall experience, leading to new developments and innovations. We have to deal with a multiprocessing situation, using advanced technologies and distributed applications for remote ship scenarios and the automation of ship operations.

  2. Avatars, Virtual Reality Technology, and the U.S. Military: Emerging Policy Issues

    DTIC Science & Technology

    2008-04-09

    called “Sentient Worldwide Simulation,” which will “mirror” real life and automatically follow real-world events in real time. Some virtual world...cities, with the final goal of creating a fully functioning virtual model of the entire world, which will be known as the Sentient Worldwide Simulation

  3. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    PubMed

    Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.

  4. LivePhantom: Retrieving Virtual World Light Data to Real Environments

    PubMed Central

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera’s position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID:27930663

  5. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  6. Interactive physically-based sound simulation

    NASA Astrophysics Data System (ADS)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual, properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real time on large, complex 3D scenes.
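    The synthesis half can be hinted at with a classic modal-synthesis sketch: an impact excites a small bank of exponentially damped sinusoids standing in for an object's vibration modes. The frequencies, dampings and gains below are invented; a real system derives them from the object's geometry and material, and this toy is not the dissertation's method.

```python
import numpy as np

def impact_sound(modes, duration=0.5, sr=44100, strength=1.0):
    """Sum of exponentially damped sinusoids, one per (freq, damping, gain) mode."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for freq_hz, damping, gain in modes:
        out += strength * gain * np.exp(-damping * t) * np.sin(2 * np.pi * freq_hz * t)
    return out

wood_like = [(220.0, 8.0, 1.0), (560.0, 14.0, 0.5), (1150.0, 25.0, 0.25)]
samples = impact_sound(wood_like)
print(len(samples), "samples, peak", float(np.abs(samples).max()))
```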

  7. Distributed virtual environment for emergency medical training

    NASA Astrophysics Data System (ADS)

    Stytz, Martin R.; Banks, Sheila B.; Garcia, Brian W.; Godsell-Stytz, Gayl M.

    1997-07-01

    In many professions where individuals must work in a team in a high-stress environment to accomplish a time-critical task, individual and team performance can benefit from joint training using distributed virtual environments (DVEs). One professional field that lacks but needs a high-fidelity team training environment is the field of emergency medicine. Currently, emergency department (ED) medical personnel train by using words to create a mental picture of a situation for the physician and staff, who then cooperate to solve the problems portrayed by the word picture. The need in emergency medicine for realistic virtual team training is critical because ED staff typically encounter rarely occurring but life-threatening situations only once in their careers and because ED teams currently have no realistic environment in which to practice their team skills. The resulting lack of experience and teamwork makes diagnosis and treatment more difficult. Virtual environment based training has the potential to redress these shortfalls. The objective of our research is to develop a state-of-the-art virtual environment for emergency medicine team training. The virtual emergency room (VER) allows ED physicians and medical staff to realistically prepare for emergency medical situations by performing triage, diagnosis, and treatment on virtual patients within an environment that provides them with the tools they require and the team environment they need to realistically perform these three tasks. There are several issues that must be addressed before this vision is realized. The key issues deal with the distribution of computations; the doctor and staff interface to the virtual patient and ED equipment; the accurate simulation of individual patient organs' response to injury, medication, and treatment; and accurate modeling of the symptoms and appearance of the patient while maintaining a real-time interaction capability. Our ongoing work addresses all of these issues. In this paper we report on our prototype VER system and its distributed system architecture for an emergency department virtual environment for emergency medical staff training. The virtual environment enables emergency department physicians and staff to develop their diagnostic and treatment skills using the virtual tools they need to perform diagnostic and treatment tasks. Virtual human imagery and real-time virtual human response are used to create the virtual patient and present a scenario. Patient vital signs are available to the emergency department team as they manage the virtual case. The work reported here consists of the system architectures we developed for the distributed components of the virtual emergency room. The architectures we describe consist of the network-level architecture as well as the software architecture for each actor within the virtual emergency room. We describe the role of distributed interactive simulation and other enabling technologies within the virtual emergency room project.

  8. Virtual reality, augmented reality, and robotics applied to digestive operative procedures: from in vivo animal preclinical studies to clinical use

    NASA Astrophysics Data System (ADS)

    Soler, Luc; Marescaux, Jacques

    2006-04-01

    Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics belong to the most revolutionary ones. Our work aims at setting up new techniques for the detection, 3D delineation and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems that make tumor resection or treatment easier with the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time long-distance connection between practitioners, so that they can share the same 3D reconstructed patient and interact on the same patient, virtually before the intervention and for real during the surgical procedure thanks to a telesurgical robot. In preclinical studies, our first results obtained from a micro-CT scanner show that these technologies provide an efficient and precise 3D modeling of anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility to improve the therapeutic choice thanks to better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality, which provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and check an optimal, error-free procedure on the virtual patient clone, which will then be replayed on the real patient by the robot under surgeon control. This medical dream is today about to become reality.

  9. A spatially augmented reality sketching interface for architectural daylighting design.

    PubMed

    Sheng, Yu; Yapo, Theodore C; Young, Christopher; Cutler, Barbara

    2011-01-01

    We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the interreflectance between diffuse patches and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation. © 2011 IEEE Published by the IEEE Computer Society

  10. Virtual hydrology observatory: an immersive visualization of hydrology modeling

    NASA Astrophysics Data System (ADS)

    Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas

    2009-02-01

    The Virtual Hydrology Observatory provides students with the ability to observe an integrated hydrology simulation through an instructional interface, using either a desktop-based or an immersive virtual reality setup. It is the goal of the virtual hydrology observatory application to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model Weather Research and Forecasting (WRF) and the hydrology model Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The output from both the WRF and GSSHA models is then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data processing techniques provided by VTK include 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using VRFlowVis and the VR Juggler software toolkit. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE-like system is used to run the Virtual Hydrology Observatory, providing students with a fully immersive experience.

  11. A generic multi-hazard and multi-risk framework and its application illustrated in a virtual city

    NASA Astrophysics Data System (ADS)

    Mignan, Arnaud; Euchner, Fabian; Wiemer, Stefan

    2013-04-01

    We present a generic framework to implement hazard correlations in multi-risk assessment strategies. We consider hazard interactions (process I), time-dependent vulnerability (process II) and time-dependent exposure (process III). Our approach is based on the Monte Carlo method to simulate a complex system, which is defined from assets exposed to a hazardous region. We generate 1-year time series, sampling from a stochastic set of events. Each time series corresponds to one risk scenario, and the analysis of multiple time series allows for the probabilistic assessment of losses and for the recognition of more or less probable risk paths. Each sampled event is associated with a time of occurrence, a damage footprint and a loss footprint. The occurrence of an event depends on its rate, which is conditional on the occurrence of past events (process I, concept of a correlation matrix). Damage depends on the hazard intensity and on the vulnerability of the asset, which is conditional on previous damage to that asset (process II). Losses are the product of damage and exposure value, this value being the original exposure minus previous losses (process III, no reconstruction considered). The Monte Carlo method allows for a straightforward implementation of uncertainties and for the implementation of numerous interactions, which is otherwise challenging in an analytical multi-risk approach. We apply our framework to a synthetic data set, defined by a virtual city within a virtual region. This approach gives the opportunity to perform multi-risk analyses in a controlled environment while not requiring real data, which may be difficult to access or simply unavailable to the public. Based on the heuristic approach, we define a 100 by 100 km region where earthquakes, volcanic eruptions, fluvial floods, hurricanes and coastal floods can occur. All hazards are harmonized to a common format. We define a 20 by 20 km city, composed of 50,000 identical buildings with a fixed economic value. Vulnerability curves are defined in terms of mean damage ratio as a function of hazard intensity. All data are based on simple equations found in the literature and on other simplifications. We show the impact of earthquake-earthquake interaction and hurricane-storm surge coupling, as well as of time-dependent vulnerability and exposure, on aggregated loss curves. One main result is the emergence of low-probability, high-consequence (extreme) events when correlations are implemented. While the concept of a virtual city can suggest the theoretical benefits of multi-risk assessment for decision support, identifying their real-world practicality will require the study of real test sites.
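    A single Monte Carlo risk scenario of the kind described can be condensed into a toy simulation: sample a year of events, let each loss raise vulnerability (process II) and deplete exposure (process III), and let an occurrence boost subsequent rates as a crude stand-in for the correlation matrix (process I). All numbers below are invented for illustration and bear no relation to the paper's virtual city.

```python
import random

def simulate_year(base_rate=0.5, exposure=1.0, seed=None):
    """One 1-year risk scenario with toy versions of processes I-III."""
    rng = random.Random(seed)
    rate, vulnerability, losses = base_rate, 0.1, 0.0
    for _ in range(rng.randint(0, 3)):           # crude candidate-event count
        if rng.random() < rate:                  # does the event occur?
            damage = vulnerability * rng.random()
            loss = damage * exposure             # loss on remaining value
            losses += loss
            exposure -= loss                     # process III: no rebuilding
            vulnerability *= 1.5                 # process II: weakened asset
            rate *= 1.2                          # process I: triggered hazard
    return losses

annual_losses = sorted(simulate_year(seed=i) for i in range(10000))
print("median loss:", annual_losses[5000], " 99th pct:", annual_losses[9900])
```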

  12. Using a Virtual Store As a Research Tool to Investigate Consumer In-store Behavior.

    PubMed

    Ploydanai, Kunalai; van den Puttelaar, Jos; van Herpen, Erica; van Trijp, Hans

    2017-07-24

    People's responses to products and/or choice environments are crucial to understanding in-store consumer behaviors. Currently, there are various approaches (e.g., surveys or laboratory settings) to study in-store behaviors, but the external validity of these is limited by their poor capability to resemble realistic choice environments. In addition, building a real store to meet experimental conditions while controlling for undesirable effects is costly and highly difficult. A virtual store developed by virtual reality techniques potentially transcends these limitations by offering the simulation of a 3D virtual store environment in a realistic, flexible, and cost-efficient way. In particular, a virtual store interactively allows consumers (participants) to experience and interact with objects in a tightly controlled yet realistic setting. This paper presents the key elements of using a desktop virtual store to study in-store consumer behavior. Descriptions of the protocol steps to: 1) build the experimental store, 2) prepare the data management program, 3) run the virtual store experiment, and 4) organize and export data from the data management program are presented. The virtual store enables participants to navigate through the store, choose a product from alternatives, and select or return products. Moreover, consumer-related shopping behaviors (e.g., shopping time, walking speed, and number and type of products examined and bought) can also be collected. The protocol is illustrated with an example of a store layout experiment showing that shelf length and shelf orientation influence shopping- and movement-related behaviors. This demonstrates that the use of a virtual store facilitates the study of consumer responses. The virtual store can be especially helpful when examining factors that are costly or difficult to change in real life (e.g., overall store layout), products that are not presently available in the market, and routinized behaviors in familiar environments.

  13. Interactive CT-Video Registration for the Continuous Guidance of Bronchoscopy

    PubMed Central

    Merritt, Scott A.; Khare, Rahul; Bascom, Rebecca

    2014-01-01

    Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient’s 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope’s live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas–Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥1 s/frame speeds of other methods and indicates the method’s potential for real-time continuous registration. A human phantom study confirms the method’s efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method’s efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients. PMID:23508260
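    For readers unfamiliar with the inverse-compositional idea, the sketch below shows it for the simplest (translation-only) warp: the template gradients, steepest-descent images and Hessian are precomputed once, so each iteration costs only a warp and two small matrix products. This is a generic Lucas-Kanade sketch under those simplifying assumptions, not the authors' CT-video implementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, sobel

def align_translation_ic(template, image, n_iter=50, tol=1e-4):
    """Estimate translation p aligning image to template using the
    inverse-compositional Lucas-Kanade scheme (translation-only warp)."""
    T = template.astype(float)
    # Precomputed once: template gradients, steepest-descent images, Hessian.
    Tx = sobel(T, axis=1) / 8.0                     # d/dx
    Ty = sobel(T, axis=0) / 8.0                     # d/dy
    sd = np.stack([Tx.ravel(), Ty.ravel()], axis=1)  # N x 2 steepest descent
    H_inv = np.linalg.inv(sd.T @ sd)
    p = np.zeros(2)                                  # (dx, dy)
    for _ in range(n_iter):
        # Warp the image toward the template with the current parameters.
        I_w = nd_shift(image.astype(float), shift=(-p[1], -p[0]), order=1)
        err = (I_w - T).ravel()
        dp = H_inv @ (sd.T @ err)
        # Inverse composition: for translations, W(p) o W(dp)^-1 means p -= dp.
        p -= dp
        if np.linalg.norm(dp) < tol:
            break
    return p
```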

  14. Input Devices and Interaction Techniques for VR-Enhanced Medicine

    NASA Astrophysics Data System (ADS)

    Gallo, Luigi; Pietro, Giuseppe De

    Virtual Reality (VR) technologies make it possible to reproduce faithfully real life events in computer-generated scenarios. This approach has the potential to simplify the way people solve problems, since they can take advantage of their real life experiences while interacting in synthetic worlds.

  15. Virtual Proprioception for eccentric training.

    PubMed

    LeMoyne, Robert; Mastroianni, Timothy

    2017-07-01

    Wireless inertial sensors enable quantified feedback, which can be applied to evaluate the efficacy of therapy and rehabilitation. In particular, eccentric training promotes a beneficial rehabilitation and strength training strategy. Virtual Proprioception for eccentric training applies real-time feedback from a wireless gyroscope platform enabled through a software application for a smartphone. It is applied to the eccentric phase of a biceps brachii strength training exercise and contrasted with the same exercise performed without feedback. During operation, the intent is not to exceed a prescribed gyroscope signal threshold, based on the real-time presentation of the gyroscope signal, in order to promote the eccentric aspect of the exercise. The experimental trial data are transmitted wirelessly over the Internet as an email attachment for remote post-processing. A feature set is derived from the gyroscope signal for machine learning classification of the two scenarios: eccentric training with real-time feedback and eccentric training without feedback. Considerable classification accuracy is achieved through the application of a multilayer perceptron neural network for distinguishing between the two scenarios.
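    A hedged sketch of the classification stage: gyroscope-signal features feed a multilayer perceptron, as in the abstract, but the specific feature list and the synthetic stand-in trials below are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def gyro_features(signal):
    """Illustrative feature set from one repetition's gyroscope trace."""
    return np.array([signal.max(), signal.min(), signal.mean(),
                     signal.std(), np.median(signal),
                     np.abs(signal).mean()])

# X: one feature row per trial; y: 1 = real-time feedback, 0 = no feedback.
# (Synthetic stand-in data; the study used recorded biceps-curl trials.)
rng = np.random.default_rng(0)
trials = [rng.normal(0, 1 + 0.3 * label, 500)
          for label in (0, 1) for _ in range(20)]
y = np.repeat([0, 1], 20)
X = np.array([gyro_features(t) for t in trials])

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```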

  16. Visual Environment for Designing Interactive Learning Scenarios with Augmented Reality

    ERIC Educational Resources Information Center

    Mota, José Miguel; Ruiz-Rube, Iván; Dodero, Juan Manuel; Figueiredo, Mauro

    2016-01-01

    Augmented Reality (AR) technology allows the inclusion of virtual elements on a view of the actual physical environment for the creation of a mixed reality in real time. This kind of technology can be used in educational settings. However, current AR authoring tools present several drawbacks, such as the lack of a mechanism for tracking the…

  17. Dynamic phenomena and human activity in an artificial society

    NASA Astrophysics Data System (ADS)

    Grabowski, A.; Kruszewska, N.; Kosiński, R. A.

    2008-12-01

    We study dynamic phenomena in a large social network of nearly 3×10⁴ individuals who interact in the large virtual world of a massively multiplayer online role-playing game. On the basis of a database received from the online game server, we examine the structure of the friendship network and human dynamics. To investigate the relation between networks of acquaintances in virtual and real worlds, we carried out a survey among the players. We show that, even though the virtual network did not develop as a growing graph of an underlying network of social acquaintances in the real world, it influences it. Furthermore, we find very interesting scaling laws concerning human dynamics. Our research shows how long people are interested in a single task and how much time they devote to it. Surprisingly, the exponent values in both cases are close to -1. We calculate the activity of individuals, i.e., the relative time daily devoted to interactions with others in the artificial society. Our research shows that the distribution of activity is not uniform and is highly correlated with the degree of the node, and that such human activity has a significant influence on dynamic phenomena, e.g., epidemic spreading and rumor propagation, in complex networks. We find that spreading is accelerated (an epidemic) or decelerated (a rumor) as a result of the varied behavior of superspreaders.

  18. Integration of stereotactic ultrasonic data into an interactive image-guided neurosurgical system

    NASA Astrophysics Data System (ADS)

    Shima, Daniel W.; Galloway, Robert L., Jr.

    1998-06-01

    Stereotactic ultrasound can be incorporated into an interactive, image-guided neurosurgical system by using an optical position sensor to define the location of an intraoperative scanner in physical space. A C program has been developed that communicates with the Optotrak™ system developed by Northern Digital Inc. to optically track the three-dimensional position and orientation of a fan-shaped area (i.e., a virtual B-mode ultrasound fan beam) defined with respect to a hand-held probe. Volumes of CT and MR head scans from the same patient are registered to a location in physical space using a point-based technique. The coordinates of the virtual fan beam in physical space are continuously calculated and updated on the fly. During each program loop, the CT and MR data volumes are reformatted along the same plane and displayed as two fan-shaped images that correspond to the current physical-space location of the virtual fan beam. When the reformatted preoperative tomographic images are eventually paired with a real-time intraoperative ultrasound image, a neurosurgeon will be able to use the unique information of each imaging modality (e.g., the high resolution and tissue contrast of CT and MR and the real-time functionality of ultrasound) in a complementary manner to identify structures in the brain more easily and to guide surgical procedures more effectively.
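    The core operation, reformatting a volume along the tracked fan-beam plane, amounts to oblique-plane resampling. The function below is a generic illustration of that step (the plane parameterization, sizes and test data are assumptions), not the original C program.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def reformat_plane(volume, origin, u_dir, v_dir, size=(256, 256), spacing=1.0):
    """Sample an arbitrary oblique plane out of a 3-D volume.

    origin       -- 3-vector, voxel coordinates of the plane's corner
    u_dir, v_dir -- orthonormal 3-vectors spanning the plane
    """
    origin = np.asarray(origin, float)
    u_dir, v_dir = np.asarray(u_dir, float), np.asarray(v_dir, float)
    uu, vv = np.meshgrid(np.arange(size[0]) * spacing,
                         np.arange(size[1]) * spacing, indexing="ij")
    # Voxel coordinates of every pixel on the requested plane: (3, H, W).
    pts = (origin[:, None, None]
           + u_dir[:, None, None] * uu
           + v_dir[:, None, None] * vv)
    return map_coordinates(volume, pts, order=1, cval=0.0)

# Example: an axis-aligned slice through a random test volume.
vol = np.random.rand(64, 64, 64)
img = reformat_plane(vol,
                     origin=[32.0, 0.0, 0.0],
                     u_dir=[0.0, 1.0, 0.0],
                     v_dir=[0.0, 0.0, 1.0],
                     size=(64, 64))
```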

  19. A comparison of older adults' subjective experience with virtual and real environments during dynamic balance activities

    PubMed Central

    Proffitt, Rachel; Lange, Belinda; Chen, Christina; Winstein, Carolee

    2014-01-01

    The purpose of this study was to explore the subjective experience of older adults interacting with both virtual and real environments. Thirty healthy older adults engaged with real and virtual tasks of similar motor demands: reaching to a target in standing and stepping stance. Immersive tendencies and absorption scales were administered before the session. Game engagement and experience questionnaires were completed after each task, followed by a semi-structured interview at the end of the testing session. Data were analyzed respectively using paired t-tests and grounded theory methodology. Participants preferred the virtual task over the real task. They also reported an increase in presence and absorption with the virtual task, describing an external focus of attention. Findings will be used to inform future development of appropriate game-based balance training applications that could be embedded in the home or community settings as part of evidence-based fall prevention programs. PMID:24334299

  20. Brain activity during a lower limb functional task in a real and virtual environment: A comparative study.

    PubMed

    Pacheco, Thaiana Barbosa Ferreira; Oliveira Rego, Isabelle Ananda; Campos, Tania Fernandes; Cavalcanti, Fabrícia Azevedo da Costa

    2017-01-01

    Virtual Reality (VR) has been contributing to neurological rehabilitation because of its interactive and multisensory nature, offering the potential for brain reorganization. Given the availability of mobile EEG devices, it is possible to investigate how the virtual therapeutic environment influences brain activity. The aim was to compare theta, alpha, beta and gamma power in healthy young adults during a lower limb motor task in virtual and real environments. Ten healthy adults underwent EEG assessment while performing a one-minute task consisting of going up and down a step in a virtual environment (the Nintendo Wii virtual game "Basic Step") and in a real environment. The real environment produced an increase in theta and alpha power, with small to large effect sizes, mainly in the frontal region. VR produced a greater increase in beta and gamma power, although with small or negligible effects across a variety of regions for the beta band, and medium to very large effects in the frontal and occipital regions for the gamma band. Theta, alpha, beta and gamma activity during the execution of a motor task thus differs according to the environment to which the individual is exposed (real or virtual), with effect sizes that depend on the brain area and frequency band considered.
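    Band power of the kind compared here is conventionally computed by integrating a Welch power spectral density over each frequency band. The sketch below uses standard band edges and synthetic data; these are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=250.0):
    """Absolute power in each canonical band for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

# Example on synthetic data: 10 s of noise plus a 10 Hz (alpha) oscillation.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(band_powers(x, fs))   # alpha power should dominate
```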

  1. Quantifying human-environment interactions using videography in the context of infectious disease transmission.

    PubMed

    Julian, Timothy R; Bustos, Carla; Kwong, Laura H; Badilla, Alejandro D; Lee, Julia; Bischel, Heather N; Canales, Robert A

    2018-05-08

    Quantitative data on human-environment interactions are needed to fully understand infectious disease transmission processes and conduct accurate risk assessments. Interaction events occur during an individual's movement through, and contact with, the environment, and can be quantified using diverse methodologies. Methods that utilize videography, coupled with specialized software, can provide a permanent record of events, collect detailed interactions in high resolution, be reviewed for accuracy, capture events difficult to observe in real time, and gather multiple concurrent phenomena. In the accompanying video, the use of specialized software to capture human-environment interactions for human exposure and disease transmission is highlighted. Use of videography, combined with specialized software, allows for the collection of accurate quantitative representations of human-environment interactions in high resolution. Two specialized programs are the Virtual Timing Device for the Personal Computer, which collects sequential microlevel activity time series of contact events and interactions, and LiveTrak, which is optimized to facilitate annotation of events in real time. Opportunities to annotate behaviors at high resolution using these tools are promising, permitting detailed records that can be summarized to gain information on infectious disease transmission and incorporated into more complex models of human exposure and risk.

  2. Virtual- and real-world operation of mobile robotic manipulators: integrated simulation, visualization, and control environment

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.

    1992-03-01

    This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  3. Colonoscopy procedure simulation: virtual reality training based on a real time computational approach.

    PubMed

    Wen, Tingxi; Medveczky, David; Wu, Jackie; Wu, Jianhuang

    2018-01-25

    Colonoscopy plays an important role in the clinical screening and management of colorectal cancer. The traditional 'see one, do one, teach one' training style for such an invasive procedure is resource intensive and ineffective. Given that colonoscopy is difficult and time-consuming to master, the use of virtual reality simulators to train gastroenterologists in colonoscopy operations offers a promising alternative. In this paper, a realistic and real-time interactive simulator for training the colonoscopy procedure is presented, which can also include polypectomy simulation. Our approach models the colonoscope as a thick flexible elastic rod whose resolution adapts dynamically to the curvature of the colon. Further material characteristics of this deformable body are integrated into our discrete model to realistically simulate the behavior of the colonoscope. We also propose a set of key aspects of our simulator that give fast, high-fidelity feedback to trainees, and we conducted an initial validation of the simulator to determine its clinical utility and efficacy.
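    The paper's adaptive elastic-rod model is considerably more sophisticated, but the basic idea of a deformable rod can be caricatured by a mass-spring chain in which next-nearest-neighbour springs supply a crude bending resistance. Everything below (parameters, 2-D setting, explicit integration) is an illustrative assumption, not the authors' discrete model.

```python
import numpy as np

# Minimal mass-spring stand-in for a flexible rod: point masses joined by
# stretch springs, with springs between next-nearest neighbours for bending.
N, rest = 20, 1.0
pos = np.cumsum(np.tile([rest, 0.0], (N, 1)), axis=0)  # straight initial rod
vel = np.zeros_like(pos)
k_stretch, k_bend, damping, mass, dt = 500.0, 50.0, 2.0, 0.1, 1e-3

def spring_forces(pos, k, rest_len, skip):
    """Pairwise spring forces between nodes i and i+skip."""
    f = np.zeros_like(pos)
    d = pos[skip:] - pos[:-skip]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    fpair = k * (length - rest_len) * d / np.maximum(length, 1e-9)
    f[:-skip] += fpair      # pull node i toward node i+skip when stretched
    f[skip:] -= fpair       # equal and opposite reaction
    return f

for step in range(5000):
    f = spring_forces(pos, k_stretch, rest, 1)        # stretching resistance
    f += spring_forces(pos, k_bend, 2 * rest, 2)      # crude bending resistance
    f += -damping * vel                               # viscous damping
    f[:, 1] += -9.81 * mass                           # gravity on the y axis
    vel += dt * f / mass
    vel[0] = 0.0                                      # proximal end clamped
    pos += dt * vel
```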

  4. Immersive Environments - A Connectivist Approach

    NASA Astrophysics Data System (ADS)

    Loureiro, Ana; Bettencourt, Teresa

    We are conducting a research project with the aim of achieving better and more efficient ways to facilitate teaching and learning in higher education. We have chosen virtual environments, with particular emphasis on the Second Life® platform augmented by web 2.0 tools, to develop the study. The Second Life® environment has some interesting characteristics that captured our attention: it is immersive; it is a real-world simulator; it is a social network; it allows real-time communication, cooperation, collaboration and interaction; and it is a safe and controlled environment. We specifically chose tools from web 2.0 that enable a shared and collaborative way of learning. Through understanding the characteristics of this learning environment, we believe that immersive learning, along with other virtual tools, can be integrated into today's pedagogical practices.

  5. Real-Time Simulation

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Coryphaeus Software, founded in 1989 by former NASA electronics engineer Steve Lakowske, creates real-time 3D software. Designer's Workbench, the company's flagship product, is a modeling and simulation tool for the development of both static and dynamic 3D databases. Other products soon followed. Activation, specifically designed for game developers, allows developers to play and test 3D games before they commit to a target platform. Game publishers can shorten development time and prove the "playability" of a title, maximizing their chances of introducing a smash hit. Another product, EasyT, lets users create massive, realistic representations of Earth terrains that can be viewed and traversed in real time. Finally, EasyScene software controls the actions among interactive objects within a virtual world. Coryphaeus products are used on Silicon Graphics workstations and supercomputers to simulate real-world performance in synthetic environments. Customers include aerospace, aviation, architectural and engineering firms, game developers, and the entertainment industry.

  6. Real-Time linux dynamic clamp: a fast and flexible way to construct virtual ion channels in living cells.

    PubMed

    Dorval, A D; Christini, D J; White, J A

    2001-10-01

    We describe a system for real-time control of biological and other experiments. This device, based around the Real-Time Linux operating system, was tested specifically in the context of dynamic clamping, a demanding real-time task in which a computational system mimics the effects of nonlinear membrane conductances in living cells. The system is fast enough to represent dozens of nonlinear conductances in real time at clock rates well above 10 kHz. Conductances can be represented in deterministic form, or more accurately as discrete collections of stochastically gating ion channels. Tests were performed using a variety of complex models of nonlinear membrane mechanisms in excitable cells, including simulations of spatially extended excitable structures and multiple interacting cells. Only in extreme cases does the computational load interfere with high-speed "hard" real-time processing (i.e., real-time processing that never falters). Freely available on the World Wide Web, this experimental control system combines good performance, immense flexibility, low cost, and reasonable ease of use. It is easily adapted to any task involving real-time control, and excels in particular for applications requiring complex control algorithms that must operate at speeds over 1 kHz.
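    One dynamic-clamp update step can be sketched as: read the membrane voltage, advance the gating kinetics of the virtual conductance, and inject the resulting current I = g(V, t)·(V − E). The DAQ hooks, channel model and parameters below are hypothetical placeholders, not the published system's code.

```python
import numpy as np

# Hypothetical DAQ hooks -- stand-ins for the hard-real-time analog I/O
# that the described Real-Time Linux system performs at >10 kHz.
def read_membrane_voltage_mV():   # placeholder analog-input read
    return -65.0

def write_current_pA(i):          # placeholder analog-output write
    pass

# Virtual conductance: a simple voltage-gated, potassium-like channel.
g_max, E_rev, dt_ms = 10.0, -90.0, 0.05   # nS, mV, 50 us timestep (20 kHz)
n = 0.0                                   # gating variable

def n_inf(v):
    return 1.0 / (1.0 + np.exp(-(v + 30.0) / 10.0))

tau_n_ms = 2.0                            # constant time constant, for simplicity

for step in range(200_000):               # 10 s of simulated update cycles
    v = read_membrane_voltage_mV()
    n += dt_ms * (n_inf(v) - n) / tau_n_ms        # integrate gating kinetics
    i = g_max * n * (v - E_rev)                   # I = g(V, t) * (V - E)
    write_current_pA(-i)   # sign convention depends on amplifier configuration
```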

  7. Virtually compliant: Immersive video gaming increases conformity to false computer judgments.

    PubMed

    Weger, Ulrich W; Loughnan, Stephen; Sharma, Dinkar; Gonidis, Lazaros

    2015-08-01

    Real-life encounters with face-to-face contact are on the decline in a world in which many routine tasks are delegated to virtual characters-a development that bears both opportunities and risks. Interacting with such virtual-reality beings is particularly common during role-playing videogames, in which we incarnate into the virtual reality of an avatar. Video gaming is known to lead to the training and development of real-life skills and behaviors; hence, in the present study we sought to explore whether role-playing video gaming primes individuals' identification with a computer enough to increase computer-related social conformity. Following immersive video gaming, individuals were indeed more likely to give up their own best judgment and to follow the vote of computers, especially when the stimulus context was ambiguous. Implications for human-computer interactions and for our understanding of the formation of identity and self-concept are discussed.

  8. Smart Classroom: Bringing Pervasive Computing into Distance Learning

    NASA Astrophysics Data System (ADS)

    Shi, Yuanchun; Qin, Weijun; Suo, Yue; Xiao, Xin

    In recent years, distance learning has increasingly become one of the most important applications on the Internet and is being discussed and studied by various universities, institutes and companies. The Web/Internet provides relatively easy ways to publish hyper-linked multimedia content to a wider audience. Yet, we find that most courseware is simply shifted from textbook to HTML files. However, in most cases the teacher's live instruction is very important for catching the attention and interest of the students. That is why the Real-Time Interactive Virtual Classroom (RTIVC) always plays an indispensable role in distance learning, where teachers and students located in different places can take part in the class synchronously through certain multimedia communication systems and obtain real-time, media-rich interactions using Pervasive Computing technologies [1]. The Classroom 2000 project [2] at GIT has been devoted to the automated capturing of the classroom experience. Likewise, the Smart Classroom project [3] at our institute is focused on Tele-education. Most currently deployed real-time Tele-education systems are desktop-based, in which the teacher's experience is totally different from teaching in a real classroom.

  9. Virtual Incarnations: An Exploration of Internet-Mediated Interaction as Manifestation of the Divine

    ERIC Educational Resources Information Center

    Lytle, Julie Anne

    2010-01-01

    As faith communities are moving online and creating virtual churches, one widespread critique is the disembodied nature of online relationships. Citing fears of engagement with others who are misrepresenting themselves, many argue that virtual churches are not "real" and Internet-mediated communications (IMC) should not be incorporated into faith…

  10. Social Presence and Motivation in a Three-Dimensional Virtual World: An Explanatory Study

    ERIC Educational Resources Information Center

    Yilmaz, Rabia M.; Topu, F. Burcu; Goktas, Yuksel; Coban, Murat

    2013-01-01

    Three-dimensional (3-D) virtual worlds differ from other learning environments in their similarity to real life, providing opportunities for more effective communication and interaction. With these features, 3-D virtual worlds possess considerable potential to enhance learning opportunities. For effective learning, the users' motivation levels and…

  11. Virtually Naked: Virtual Environment Reveals Sex-Dependent Nature of Skin Disclosure

    PubMed Central

    Lomanowska, Anna M.; Guitton, Matthieu J.

    2012-01-01

    The human tendency to reveal or cover naked skin reflects a competition between the individual propensity for social interactions related to sexual appeal and interpersonal touch versus climatic, environmental, physical, and cultural constraints. However, due to the ubiquitous nature of these constraints, isolating on a large scale the spontaneous human tendency to reveal naked skin has remained impossible. Using the online 3-dimensional virtual world of Second Life, we examined spontaneous human skin-covering behavior unhindered by real-world climatic, environmental, and physical variables. Analysis of hundreds of avatars revealed that virtual females disclose substantially more naked skin than virtual males. This phenomenon was not related to avatar hypersexualization as evaluated by measurement of sexually dimorphic body proportions. Furthermore, analysis of skin-covering behavior of a population of culturally homogeneous avatars indicated that the propensity of female avatars to reveal naked skin persisted despite explicit cultural norms promoting less revealing attire. These findings have implications for further understanding how sex-specific aspects of skin disclosure influence human social interactions in both virtual and real settings. PMID:23300580

  13. Motor learning from virtual reality to natural environments in individuals with Duchenne muscular dystrophy.

    PubMed

    Quadrado, Virgínia Helena; Silva, Talita Dias da; Favero, Francis Meire; Tonks, James; Massetti, Thais; Monteiro, Carlos Bandeira de Mello

    2017-11-10

    To examine whether performance improvements in the virtual environment generalize to the natural environment, we recruited 64 individuals: 32 individuals with DMD and 32 typically developing individuals. The groups practiced two coincidence timing tasks. In the more tangible button-press task, the individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key on the computer. In the more abstract task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment using a webcam. For individuals with DMD, conducting a coincidence timing task in a virtual environment facilitated transfer to the real environment. However, we emphasize that a task practiced in a virtual environment should have a higher level of difficulty than a task practiced in a real environment. IMPLICATIONS FOR REHABILITATION: Virtual environments can be used to promote improved performance in 'real-world' environments. Virtual environments offer the opportunity to create paradigms similar to 'real-life' tasks; however, task complexity and difficulty levels can be manipulated, graded and enhanced to increase the likelihood of success in transfer of learning and performance. Individuals with DMD, in particular, showed immediate performance benefits after using virtual reality.

  14. Mobile devices, Virtual Reality, Augmented Reality, and Digital Geoscience Education.

    NASA Astrophysics Data System (ADS)

    Crompton, H.; De Paor, D. G.; Whitmeyer, S. J.; Bentley, C.

    2016-12-01

    Mobile devices are playing an increasing role in geoscience education. Affordances include instructor-student communication and class management in large classrooms, virtual and augmented reality applications, digital mapping, and crowd-sourcing. Mobile technologies have spawned the subfield of mobile learning or m-learning, which is defined as learning across multiple contexts, through social and content interactions. Geoscientists have traditionally engaged in non-digital mobile learning via fieldwork, but digital devices are greatly extending the possibilities, especially for non-traditional students. Smartphones and tablets are the most common devices, but smart glasses such as Pivothead enable live streaming of a first-person view (see for example, https://youtu.be/gWrDaYP5w58). Virtual reality headsets such as Google Cardboard create an immersive virtual field experience, and digital imagery such as GigaPan and Structure from Motion enables instructors and/or students to create virtual specimens and outcrops that are sharable across the globe. Whereas virtual reality (VR) replaces the real world with a virtual representation, augmented reality (AR) overlays digital data on the live scene visible to the user in real time. We have previously reported on our use of the AR application called FreshAiR for geoscientific "egg hunts." The popularity of Pokémon Go demonstrates the potential of AR for mobile learning in the geosciences.

  15. Interactive Visualization of Near Real-Time and Production Global Precipitation Mission Data Online Using CesiumJS

    NASA Astrophysics Data System (ADS)

    Lammers, M.

    2016-12-01

    Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.
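    The STORM VG idea of converting HDF5 contents to JSON on the fly can be illustrated with h5py; the file path and dataset names below are hypothetical stand-ins, not actual GPM product paths.

```python
import json
import h5py
import numpy as np

def hdf5_to_json(path, dataset_names, max_points=5000):
    """Read selected datasets from an HDF5 file and emit compact JSON,
    thinning large arrays so a browser can render them interactively."""
    out = {}
    with h5py.File(path, "r") as f:
        for name in dataset_names:          # hypothetical dataset paths
            data = np.asarray(f[name]).ravel()
            stride = max(1, data.size // max_points)
            out[name] = data[::stride].astype(float).tolist()
    return json.dumps(out)

# Usage (hypothetical file and variable names):
# payload = hdf5_to_json("orbit_granule.h5",
#                        ["swath/Latitude", "swath/Longitude",
#                         "swath/precipRateNearSurface"])
```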

  17. Real-time interactive virtual tour on the World Wide Web (WWW)

    NASA Astrophysics Data System (ADS)

    Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi

    2003-12-01

    Web-based Virtual Tour has become a desirable and demanded application, yet a challenging one due to the nature of a web application's running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process, high bandwidth and high computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos; the virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walk-around', which is a very important feature for providing an immersive experience to virtual tourists. The Web-based Virtual Tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space using several snapshots from conventional photos.

  18. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, so diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and effective to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality display and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields or in educational and training applications.

  19. Multithreaded hybrid feature tracking for markerless augmented reality.

    PubMed

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
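    The frame-to-frame half of such a hybrid tracker is commonly built from corner detection plus pyramidal Lucas-Kanade optical flow. The OpenCV sketch below shows that loop only, leaving out the paper's invariant-feature thread and multithreaded scheduling.

```python
import cv2

cap = cv2.VideoCapture(0)                  # any video source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: track last frame's corners into this frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = nxt[status.ravel() == 1]
    if len(good) < 50:                     # re-detect when tracks thin out
        good = cv2.goodFeaturesToTrack(gray, 200, 0.01, 7).reshape(-1, 2)
    for x, y in good.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("tracked features", frame)
    if cv2.waitKey(1) == 27:               # Esc quits
        break
    prev_gray, pts = gray, good.reshape(-1, 1, 2)
cap.release()
```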

  20. PiCO QL: A software library for runtime interactive queries on program data

    NASA Astrophysics Data System (ADS)

    Fragkoulis, Marios; Spinellis, Diomidis; Louridas, Panos

    PiCO QL is open-source C/C++ software whose scientific scope is real-time interactive analysis of in-memory data through SQL queries. It exposes a relational view of a system's or application's data structures, which is queryable through SQL. While the application or system is executing, users can input queries through a web-based interface or issue web service requests. Queries execute on the live data structures through the respective relational views. PiCO QL is a good candidate for ad hoc data analysis in applications and for diagnostics in systems settings. Applications of PiCO QL include the Linux kernel, the Valgrind instrumentation framework, a GIS application, a virtual real-time observatory of stellar objects, and a source code analyser.
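    PiCO QL itself is implemented in C/C++, but the relational-view idea can be illustrated in a few lines with an in-memory SQLite database refreshed from live program structures. This is only an analogy for the concept, not PiCO QL's actual mechanism (PiCO QL exposes virtual tables over the structures directly rather than copying data).

```python
import sqlite3

# Live application data: a list of records we want to query ad hoc.
processes = [{"pid": 1, "name": "init", "rss_kb": 1024},
             {"pid": 42, "name": "httpd", "rss_kb": 52400},
             {"pid": 77, "name": "sshd", "rss_kb": 8800}]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE process (pid INTEGER, name TEXT, rss_kb INTEGER)")

def refresh_view():
    """Re-project the in-memory structures into the relational view."""
    db.execute("DELETE FROM process")
    db.executemany("INSERT INTO process VALUES (:pid, :name, :rss_kb)",
                   processes)

refresh_view()
for row in db.execute("SELECT name, rss_kb FROM process "
                      "WHERE rss_kb > 5000 ORDER BY rss_kb DESC"):
    print(row)
```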

  1. Virtual Shaker Testing: Simulation Technology Improves Vibration Test Performance

    NASA Technical Reports Server (NTRS)

    Ricci, Stefano; Peeters, Bart; Fetter, Rebecca; Boland, Doug; Debille, Jan

    2008-01-01

    In the field of vibration testing, the interaction between the structure being tested and the instrumentation hardware used to perform the test is a critical issue. This is particularly true when testing massive structures (e.g. satellites), because, due to physical design and manufacturing limits, the dynamics of the testing facility often couples with that of the test specimen in the frequency range of interest. A further issue in this field is the standard use of a closed-loop real-time vibration control scheme, which could potentially shift poles and change damping of the aforementioned coupled system. Virtual shaker testing is a novel approach to deal with these issues. It means performing a simulation which closely represents the real vibration test on the specific facility by taking into account all parameters which might impact the dynamic behavior of the specimen. In this paper, such a virtual shaker testing approach is developed. It consists of the following components: (1) either a physical-based or an equation-based coupled electro-mechanical lumped-parameter shaker model is created, with model parameters obtained from the manufacturer's specifications or by carrying out dedicated experiments; (2) existing real-time vibration control algorithms are ported to the virtual simulation environment; and (3) a structural model of the test object is created and, after defining proper interface conditions, structural modes are computed by means of the well-established Craig-Bampton CMS technique. A virtual shaker test is then run by coupling the three described models (shaker, control loop, structure) in a co-simulation routine. Numerical results have been correlated with experimental ones in order to assess the robustness of the proposed methodology.
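    Component (1), the coupled electro-mechanical lumped-parameter shaker model, is typically a voice-coil circuit coupled to a single-degree-of-freedom armature. The sketch below integrates such a textbook model; the parameter values and drive signal are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped parameters for a voice-coil shaker armature.
R, L, Bl = 2.0, 1e-3, 15.0   # coil resistance (ohm), inductance (H), force factor (N/A)
m, c, k = 0.5, 20.0, 4.0e4   # moving mass (kg), damping (N s/m), stiffness (N/m)

def drive_voltage(t):
    return 5.0 * np.sin(2 * np.pi * 60.0 * t)     # 60 Hz sine drive

def shaker_ode(t, state):
    i, x, v = state                               # coil current, position, velocity
    di = (drive_voltage(t) - R * i - Bl * v) / L  # electrical mesh equation
    dv = (Bl * i - k * x - c * v) / m             # mechanical equation of motion
    return [di, v, dv]

sol = solve_ivp(shaker_ode, (0.0, 0.5), [0.0, 0.0, 0.0],
                max_step=1e-4, dense_output=True)
accel = np.gradient(sol.y[2], sol.t)              # armature acceleration vs. time
print("peak acceleration (m/s^2):", np.abs(accel).max())
```

    In a full virtual shaker test, the control-loop model (component 2) would close around this plant and the structural modes (component 3) would replace the single mass.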

  2. A multilayer network dataset of interaction and influence spreading in a virtual world

    NASA Astrophysics Data System (ADS)

    Jankowski, Jarosław; Michalski, Radosław; Bródka, Piotr

    2017-10-01

    The presented data contain the record of five spreading campaigns that occurred in a virtual world platform. Users distributed avatars between each other during the campaigns. The processes varied in time and range and were either incentivized or not incentivized. Campaign data are accompanied by events. The data can be used to build a multilayer network to place the campaigns in a wider context. To the best of the authors' knowledge, this is the first publicly available dataset containing a complete real multilayer social network together with five complete spreading processes in it.

  3. Awareware: Narrowcasting Attributes for Selective Attention, Privacy, and Multipresence

    NASA Astrophysics Data System (ADS)

    Cohen, Michael; Newton Fernando, Owen Noel

    The domain of CSCW (computer-supported collaborative work) and DSC (distributed synchronous collaboration) spans real-time interactive multiuser systems, shared information spaces, and applications for teleexistence and artificial reality, including collaborative virtual environments (CVEs) (Benford et al., 2001). As presence awareness systems emerge, it is important to develop appropriate interfaces and architectures for managing multimodal multiuser systems. Especially in consideration of the persistent connectivity enabled by affordable networked communication, shared distributed environments require generalized control of media streams, and techniques to control source → sink transmissions in synchronous groupware, including teleconferences and chatspaces, online role-playing games, and virtual concerts.

  4. The Effects of Natural Locomotion on Maneuvering Task Performance in Virtual and Real Environments

    DTIC Science & Technology

    2001-09-01

  5. The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties

    NASA Astrophysics Data System (ADS)

    Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña

    2008-08-01

    A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine preserving the non-functional properties of the system, and especially real-time properties, all the way from high-level design models down to executable code. The paper describes one instance of the virtual machine concept that provides support for the preservation of temporal properties both at the source code level (by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour) and at run time (by monitoring the temporal behaviour of the system). The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.

  6. The ALIVE Project: Astronomy Learning in Immersive Virtual Environments

    NASA Astrophysics Data System (ADS)

    Yu, K. C.; Sahami, K.; Denn, G.

    2008-06-01

    The Astronomy Learning in Immersive Virtual Environments (ALIVE) project seeks to discover learning modes and optimal teaching strategies using immersive virtual environments (VEs). VEs are computer-generated, three-dimensional environments that can be navigated to provide multiple perspectives. Immersive VEs provide the additional benefit of surrounding a viewer with the simulated reality. ALIVE evaluates the incorporation of an interactive, real-time "virtual universe" into formal college astronomy education. In the experiment, pre-course, post-course, and curriculum tests will be used to determine the efficacy of immersive visualizations presented in a digital planetarium versus the same visual simulations in the non-immersive setting of a normal classroom, as well as a control case using traditional classroom multimedia. To normalize for inter-instructor variability, each ALIVE instructor will teach at least one of each class in each of the three test groups.

  7. A Direct Comparison of Real-World and Virtual Navigation Performance in Chronic Stroke Patients.

    PubMed

    Claessen, Michiel H G; Visser-Meily, Johanna M A; de Rooij, Nicolien K; Postma, Albert; van der Ham, Ineke J M

    2016-04-01

    An increasing number of studies have presented evidence that various patient groups with acquired brain injury suffer from navigation problems in daily life. This skill is, however, scarcely addressed in current clinical neuropsychological practice, and suitable diagnostic instruments are lacking. Real-world navigation tests are tied to a specific geographical location and associated with practical constraints. It was therefore investigated whether virtual navigation might serve as a useful alternative. To investigate the convergent validity of virtual navigation testing, performance on the Virtual Tubingen test was compared to that on an analogous real-world navigation test in 68 chronic stroke patients. The same eight subtasks, addressing route and survey knowledge aspects, were assessed in both tests. In addition, navigation performance of stroke patients was compared to that of 44 healthy controls. A correlation analysis showed moderate overlap (r = .535) between composite scores of overall real-world and virtual navigation performance in stroke patients. Route knowledge composite scores correlated somewhat more strongly (r = .523) than survey knowledge composite scores (r = .442). When comparing group performances, patients obtained lower scores than controls on seven subtasks. Whereas the real-world test was found to be easier than its virtual counterpart, no significant interaction effects were found between group and environment. Given the moderate overlap of the total scores between the two navigation tests, we conclude that virtual testing of navigation ability is a valid alternative to navigation tests that rely on real-world route exposure.

  8. Augmented reality on poster presentations, in the field and in the classroom

    NASA Astrophysics Data System (ADS)

    Hawemann, Friedrich; Kolawole, Folarin

    2017-04-01

    Augmented reality (AR) is the direct addition of virtual information, through an interface, to a real-world environment. In practice, through a mobile device such as a tablet or smartphone, information can be projected onto a target, for example, an image on a poster. Mobile devices are so widely distributed today that augmented reality is easily accessible to almost everyone. Numerous studies have shown that multi-dimensional visualization is essential for efficient perception of the spatial, temporal and geometrical configuration of geological structures and processes. Print media, such as posters and handouts, lack the ability to display content in the third and fourth dimensions, which might be in the space domain, as seen in three-dimensional (3-D) objects, or in the time domain (four-dimensional, 4-D), expressible in the form of videos. Here, we show that augmented reality content can be complementary to geoscience poster presentations, hands-on material and work in the field. In the latter example, location-based data is loaded and, for example, a virtual geological profile can be draped over a real-world landscape. In object-based AR, the application is trained to recognize an image or object through the camera of the user's mobile device, such that specific content is automatically downloaded, displayed on the screen of the device, and positioned relative to the trained image or object. We used ZapWorks, a commercially available software application, to create and present examples of poster-based content in which important supplementary information is presented as interactive virtual images, videos and 3-D models. We suggest that the flexibility and real-time interactivity offered by AR make it an invaluable tool for effective geoscience poster presentation and for classroom and field geoscience learning.

  9. Entanglement entropy between real and virtual particles in ϕ4 quantum field theory

    NASA Astrophysics Data System (ADS)

    Ardenghi, Juan Sebastián

    2015-04-01

    The aim of this work is to compute the entanglement entropy of real and virtual particles by rewriting the generating functional of ϕ4 theory as a mean value between states and observables defined through the correlation functions. Then the von Neumann definition of entropy can be applied to these quantum states and, in particular, to the partial traces taken over the internal or external degrees of freedom. This procedure can be done for each order in the perturbation expansion, showing that the entanglement entropy for real and virtual particles behaves as ln(m0). In particular, the entanglement entropy is computed at first order for the correlation function of two external points, showing that the mutual information is identical to the external entropy and that the conditional entropies are negative over the whole domain of m0. In turn, from the definition of the quantum states, it is possible to obtain general relations between total traces between different quantum states of a ϕr theory. Finally, the possibility of taking partial traces over external degrees of freedom is discussed, which implies the introduction of observables that measure the space-time points where an interaction occurs.
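    For reference, the standard definitions the abstract relies on, written in generic notation (the split into real/virtual degrees of freedom plays the role of the A/B bipartition; this restatement uses textbook conventions, not necessarily the paper's):

```latex
% von Neumann entropy and the reduced state obtained by a partial trace
S(\rho) = -\operatorname{Tr}(\rho \ln \rho), \qquad
\rho_A = \operatorname{Tr}_B\, \rho_{AB}

% mutual information and conditional entropy for the bipartition
I(A{:}B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}), \qquad
S(A|B) = S(\rho_{AB}) - S(\rho_B)
```

    A negative S(A|B), which is impossible classically, signals entanglement, consistent with the negative conditional entropies reported above.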

  10. Application of advanced virtual reality and 3D computer assisted technologies in tele-3D-computer assisted surgery in rhinology.

    PubMed

    Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj

    2008-03-01

    The real-time requirement means that the simulation should be able to follow the actions of a user who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the actual tele-procedure can be produced. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires the development of appropriate hardware and software to connect the medical instrumentarium with the computer, and to operate the computer through the connected instruments and sophisticated multimedia interfaces.

  11. What Happens in a Virtual World Has a Real-World Impact, a Scholar Finds

    ERIC Educational Resources Information Center

    Foster, Andrea L.

    2008-01-01

    Forget the pills, hypnosis, and meditation. Losing weight or boosting self-confidence can be achieved by adopting an avatar and living in virtual reality, says Jeremy N. Bailenson, an assistant professor of communications at Stanford University. As the director of Stanford's Virtual Human Interaction Lab, Mr. Bailenson has explored ways that…

  12. An Analysis of Learners' Intentions toward Virtual Reality Learning Based on Constructivist and Technology Acceptance Approaches

    ERIC Educational Resources Information Center

    Huang, Hsiu-Mei; Liaw, Shu-Sheng

    2018-01-01

    Within a constructivist paradigm, the virtual reality technology focuses on the learner's actively interactive learning processes and attempts to reduce the gap between the learner's knowledge and a real-life experience. Recently, virtual reality technologies have been developed for a wide range of applications in education, but further research…

  13. Uniqueness of Experience and Virtual Playworlds: Playing Is Not Just for Fun

    ERIC Educational Resources Information Center

    Talamo, Alessandra; Pozzi, Simone; Mellini, Barbara

    2010-01-01

    Social interactions within virtual communities are often described solely as being online experiences. Such descriptions are limited, for they fail to reference life external to the screen. The terms "virtual" and "real" have a negative connotation for many people and can even be interpreted to mean that something is "false" or "inauthentic."…

  14. A Model Supported Interactive Virtual Environment for Natural Resource Sharing in Environmental Education

    ERIC Educational Resources Information Center

    Barbalios, N.; Ioannidou, I.; Tzionas, P.; Paraskeuopoulos, S.

    2013-01-01

    This paper introduces a realistic 3D model supported virtual environment for environmental education, that highlights the importance of water resource sharing by focusing on the tragedy of the commons dilemma. The proposed virtual environment entails simulations that are controlled by a multi-agent simulation model of a real ecosystem consisting…

  15. Collaborative virtual environments art exhibition

    NASA Astrophysics Data System (ADS)

    Dolinsky, Margaret; Anstey, Josephine; Pape, Dave E.; Aguilera, Julieta C.; Kostis, Helen-Nicole; Tsoupikova, Daria

    2005-03-01

    This panel presentation will exhibit artwork developed in CAVEs and discuss how art methodologies enhance the science of VR through collaboration, interaction and aesthetics. Artists and scientists work alongside one another to expand scientific research and artistic expression and are motivated by exhibiting collaborative virtual environments. Looking towards the arts, such as painting and sculpture, computer graphics captures a visual tradition. Virtual reality expands this tradition to not only what we face, but to what surrounds us and even what responds to our body and its gestures. Art making that once was isolated to the static frame and an optimal point of view is now out and about, in fully immersive mode within CAVEs. Art knowledge is a guide to how the aesthetics of 2D and 3D worlds affect, transform, and influence the social, intellectual and physical condition of the human body through attention to psychology, spiritual thinking, education, and cognition. The psychological interacts with the physical in the virtual in such a way that each facilitates, enhances and extends the other, culminating in a "go together" world. Attention to sharing art experience across high-speed networks introduces a dimension of liveliness and aliveness when we "become virtual" in real time with others.

  16. How virtual reality works: illusions of vision in "real" and virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.

    1995-04-01

    Visual illusions abound in normal vision: illusions of clarity and completeness, of continuity in time and space, of presence and vivacity, and they are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'telepresence', the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization and other professional work.

  17. Modeling and computational simulation and the potential of virtual and augmented reality associated to the teaching of nanoscience and nanotechnology

    NASA Astrophysics Data System (ADS)

    Ribeiro, Allan; Santos, Helen

    With the advent of new information and communication technologies (ICTs), communicative interaction changes the way people are and act, and at the same time changes work activities related to education. The range of possibilities provided by the advancement of computational resources includes virtual reality (VR) and augmented reality (AR), which stand out as new forms of information visualization in computer applications. While VR allows user interaction with a virtual environment that is entirely computer generated, in AR virtual images are inserted into the real environment; both create new opportunities to support teaching and learning in formal and informal contexts. Such technologies are able to express representations of reality or of the imagination, such as systems at the nanoscale and of low dimensionality, making it imperative to explore, in the most diverse areas of knowledge, the potential offered by ICTs and emerging technologies. In this sense, this work presents computer applications of virtual and augmented reality developed with the use of modeling and computational simulation for topics related to nanoscience and nanotechnology, articulated with innovative pedagogical practices.

  18. [Virtual reality in the treatment of mental disorders].

    PubMed

    Malbos, Eric; Boyer, Laurent; Lançon, Christophe

    2013-11-01

    Virtual reality is a medium that allows users to interact in real time with computerized virtual environments. The application of this immersive technology to cognitive behavioral therapies is increasingly exploited for the treatment of mental disorders. The present study is a review of the literature from 1992 to 2012. It depicts the utility of this new tool for assessment and therapy through the various clinical studies carried out on subjects exhibiting diverse mental disorders. Most of the studies attest to the significant efficacy of Virtual Reality Exposure Therapy (VRET) for the treatment of distinct mental disorders. Comparative studies of VRET against the reference treatment (the in vivo exposure component of cognitive behavioral therapy) document an equal efficacy of the two methods and, in some cases, a superior therapeutic effect in favor of VRET. Even though larger-scale clinical experiments, extended follow-up, and studies of factors influencing presence are needed, virtual reality exposure represents an efficacious, confidential, affordable, flexible, and interactive therapeutic method whose application will progressively widen in the field of mental health. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  19. High-power graphic computers for visual simulation: a real-time rendering revolution

    NASA Technical Reports Server (NTRS)

    Kaiser, M. K.

    1996-01-01

    Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) that provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.

  20. Home Exercise in a Social Context: Real-Time Experience Sharing Using Avatars

    NASA Astrophysics Data System (ADS)

    Aghajan, Yasmin; Lacroix, Joyca; Cui, Jingyu; van Halteren, Aart; Aghajan, Hamid

    This paper reports on the design of a vision-based exercise monitoring system. The system aims to promote well-being by making exercise sessions enjoyable experiences, either through real-time interaction and instructions proposed to the user, or via experience sharing or group gaming with peers in a virtual community. The use of avatars is explored as a means of representing the user's exercise movements or appearance, and the system employs user-centric approaches in visual processing, behavior modeling via accumulated history data, and user feedback to learn preferences. A preliminary survey study has been conducted to explore avatar preferences in two user groups.

  1. Virtual reality system for treatment of the fear of public speaking using image-based rendering and moving pictures.

    PubMed

    Lee, Jae M; Ku, Jeong H; Jang, Dong P; Kim, Dong H; Choi, Young H; Kim, In Y; Kim, Sun I

    2002-06-01

    The fear of public speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled the use of virtual reality (VR) for its treatment. Two techniques have been used to construct virtual environments for the treatment of the fear of public speaking: model-based and movie-based. Virtual audiences and environments made with the model-based technique are unrealistic and unnatural. The movie-based technique has the disadvantage that individual virtual audience members cannot be controlled separately, because all of them are included in one moving-picture file. To address this disadvantage, this paper presents a virtual environment made by using image-based rendering (IBR) and chroma keying simultaneously. IBR makes the virtual environment realistic because the images are stitched panoramically from photos taken with a digital camera, and chroma keying allows each virtual audience member to be controlled individually. In addition, a real-time capture technique was applied in constructing the virtual environment to give subjects more interaction, in that they can talk with a therapist or another subject.
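
    Chroma keying of this kind composites each separately filmed audience member over the panoramic background wherever a pixel is close to the key color. A minimal sketch of the idea, assuming RGB frames as numpy arrays (the function name, key color, and tolerance are illustrative, not taken from the paper):

        import numpy as np

        def chroma_key_composite(frame, background, key=(0, 255, 0), tol=60):
            # Replace pixels near the key color with the background plate.
            # frame, background: HxWx3 uint8 arrays of identical shape.
            # key: screen color to remove; tol: RGB distance threshold.
            diff = frame.astype(np.int32) - np.array(key, dtype=np.int32)
            mask = np.sqrt((diff ** 2).sum(axis=-1)) < tol  # True on the screen
            out = frame.copy()
            out[mask] = background[mask]
            return out

    Because each audience member is keyed from their own clip, a controller can swap an individual's clip (attentive, bored, applauding) without touching the rest of the scene.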

  2. Role of virtual reality for cerebral palsy management.

    PubMed

    Weiss, Patrice L Tamar; Tirosh, Emanuel; Fehlings, Darcy

    2014-08-01

    Virtual reality is the use of interactive simulations to present users with opportunities to perform in virtual environments that appear, sound, and less frequently, feel similar to real-world objects and events. Interactive computer play refers to the use of a game in which a child interacts and plays with virtual objects in a computer-generated environment. Because of their distinctive attributes, which provide ecologically realistic and motivating opportunities for active learning, these technologies have been used in pediatric rehabilitation over the past 15 years. The ability of virtual reality to create opportunities for active, repetitive motor/sensory practice adds to its potential for neuroplasticity and learning in individuals with neurologic disorders. The objectives of this article are to provide an overview of how virtual reality and gaming are used clinically, to present the results of several example studies that demonstrate their use in research, and to briefly remark on future developments. © The Author(s) 2014.

  3. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Pretto, N.; Poiesi, F.

    2017-11-01

    We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand-tracking system to interact with virtual objects, and a single-hand joypad to move in the virtual environment. We use a Google Cardboard as the HMD and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in the VR environment after acquisition with a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.

  4. Immersive participation: Smartphone-Apps and Virtual Reality - tools for knowledge transfer, citizen science and interactive collaboration

    NASA Astrophysics Data System (ADS)

    Dotterweich, Markus

    2017-04-01

    In the last few years, the use of smartphone apps has become a daily routine in our lives. However, only a few approaches have been undertaken to use apps for transferring scientific knowledge to the public. The development of learning apps or serious games requires large effort and several levels of simplification, unlike traditional textbooks or learning webpages. Current approaches often lack a connection to real life and/or innovative gamification concepts. Another almost untapped potential is the use of Virtual Reality, a fast-growing technology that replicates a virtual environment in order to simulate physical experiences in artificial or real worlds. Hence, smartphone apps and VR provide new opportunities for capacity building, knowledge transfer, citizen science, and interactive engagement in the realm of the environmental sciences. This presentation will show some examples and discuss the advantages of these immersive approaches for improving knowledge transfer between scientists and citizens and stimulating actions in the real world.

  5. Interactive Immersive Virtual Museum: Digital Documentation for Virtual Interaction

    NASA Astrophysics Data System (ADS)

    Clini, P.; Ruggeri, L.; Angeloni, R.; Sasso, M.

    2018-05-01

    Thanks to their playful and educational approach, Virtual Museum systems are very effective for the communication of Cultural Heritage. Among the latest technologies, Immersive Virtual Reality is probably the most appealing and potentially effective for this purpose; nevertheless, due to poor user-system interaction, caused by the incomplete maturity of technology specific to museum applications, immersive installations are still quite uncommon in museums. This paper explores the possibilities offered by this technology and presents a workflow that, starting from digital documentation, makes interaction possible with archaeological finds or any other cultural heritage inside different kinds of immersive virtual reality spaces. Two case studies are presented: the National Archaeological Museum of Marche in Ancona and the 3D reconstruction of the Roman Forum of Fanum Fortunae. The two approaches differ not only conceptually but also in content: while the Archaeological Museum is represented in the application simply using spherical panoramas to give the perception of the third dimension, the Roman Forum is a 3D model that allows visitors to move in the virtual space as in the real one. In both cases, the acquisition phase of the artefacts is central; artefacts are digitized with the photogrammetric Structure from Motion technique and then integrated inside the immersive virtual space using a PC with an HTC Vive system that allows the user to interact with the 3D models, turning the manipulation of objects into a fun and exciting experience. The challenge, taking advantage of the latest opportunities made available by photogrammetry and ICT, is to enrich visitors' experience in the real museum, making possible interaction with perishable, damaged, or lost objects and public access to inaccessible or no longer existing places, promoting in this way the preservation of fragile sites.

  6. Architecture and Key Techniques of Augmented Reality Maintenance Guiding System for Civil Aircrafts

    NASA Astrophysics Data System (ADS)

    Hong, Zhou; Wenhua, Lu

    2017-01-01

    Augmented reality technology is introduced into the maintenance field to strengthen the information available in real-world scenarios by integrating virtual maintenance-assistance information with them. This can lower the difficulty of maintenance, reduce maintenance errors, and improve the maintenance efficiency and quality of civil aviation crews. The architecture of an augmented reality virtual maintenance guiding system is proposed, based on a definition of augmented reality and an analysis of the characteristics of augmented reality virtual maintenance. The key techniques involved, such as standardization and organization of maintenance data, 3D registration, modeling of maintenance guidance information, and virtual-maintenance man-machine interaction, are elaborated, and solutions are given.

  7. An adaptable navigation strategy for Virtual Microscopy from mobile platforms.

    PubMed

    Corredor, Germán; Romero, Eduardo; Iregui, Marcela

    2015-04-01

    Real integration of Virtual Microscopy with the pathologist's service workflow requires the design of adaptable strategies for any hospital service to interact with a set of Whole Slide Images (WSI). Nowadays, mobile devices have real potential to support an online pervasive network of specialists working together; however, such devices are still very limited. This article introduces a novel, highly adaptable strategy for streaming and visualizing WSI on mobile devices. The presented approach effectively exploits and extends the granularity of the JPEG2000 standard and integrates it with different strategies to achieve a lossless, loosely coupled, decoder- and platform-independent implementation, adaptable to any interaction model. The performance was evaluated by two expert pathologists interacting with a set of 20 virtual slides. The method efficiently uses the available device resources: memory usage did not exceed 7% of the device capacity, while decoding times were under 200 ms per Region of Interest, i.e., a window of 256×256 pixels. This model is easily adaptable to other medical imaging scenarios. Copyright © 2015 Elsevier Inc. All rights reserved.
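
    The 256×256-pixel Region-of-Interest granularity maps naturally onto the dyadic resolution levels of JPEG2000. A sketch of how a viewport might be turned into tile requests at a chosen level (the indexing scheme is hypothetical, not the paper's protocol):

        def tiles_for_viewport(x, y, w, h, level, tile=256):
            # x, y, w, h: viewport in full-resolution pixel coordinates.
            # level: dyadic resolution level (0 = full resolution; each level
            # halves both dimensions, matching JPEG2000 resolution scalability).
            s = 2 ** level
            c0, r0 = (x // s) // tile, (y // s) // tile
            c1 = ((x + w - 1) // s) // tile
            r1 = ((y + h - 1) // s) // tile
            return [(level, c, r) for r in range(r0, r1 + 1)
                                  for c in range(c0, c1 + 1)]

    Requesting only these tiles, and caching them across pan and zoom gestures, is one way a mobile client can stay within tight memory and decoding budgets of the kind reported above.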

  8. A Sidewalk Astronomy Experience in Second Life (R) for IYA2009

    NASA Astrophysics Data System (ADS)

    Gauthier, Adrienne J.; Huber, D.; I. New Media Task Group

    2009-01-01

    The NMTG has created an IYA 2009 presence in the 3-dimensional multi-user virtual world called Second Life (R), where residents (or avatars) interact with content built by others in dynamic, innovative, and social ways. The IYA2009 virtual real estate (called an island) will open in early January 2009 with an initial set of exhibits and interactives. Through 2009, additional exhibits, live talks, and webstreamed content will be added. Our Sidewalk Astronomy experience will premiere at the island opening. We have designed the interactive to replicate a real-life small-telescope experience. Visitors to our Second Life telescopes will first see an image of the object "as the eye sees" and will hear/read a narrative about the object, as one would experience in real life. The narratives have been carefully crafted to take the observer on a journey rather than just recite straight facts about the object. Diving further into astronomical imagery, avatars will explore visible, infrared, X-ray, and radio views of the object (if available), all wrapped in contextual information that ties the multiwavelength views together. The content of the telescopes will update every month to match mid-latitude 9 pm sky views for the Northern Hemisphere, with the Southern Hemisphere pending. Supplemental materials will include World Wide Telescope tours and Google Sky layers. We hope to add live star-party events throughout the year, using real-life video feeds from amateur telescopes. Additionally, we will have links to the Sidewalk Astronomy IYA webpage so virtual residents can find real-life star parties to attend. The Sidewalk Astronomy Second Life experience will also have a traveling version that can be placed in multiple locations (stores, events, parks) in order to bring astronomy to the virtual masses in a true Sidewalk Astronomy way.

  9. Stability effects of singularities in force-controlled robotic assist devices

    NASA Astrophysics Data System (ADS)

    Luecke, Greg R.

    2002-02-01

    Force feedback is being used as an interface between humans and material handling equipment to provide an intuitive method to control large and bulky payloads. Powered actuation in the lift assist device compensates for the inertial characteristics of the manipulator and the payload to provide effortless control and handling of manufacturing parts, components, and assemblies. The use of these Intelligent Assist Devices (IAD) is being explored to prevent worker injury, enhance material handling performance, and increase productivity in the workplace. The IAD also provides the capability to shape and control motion in the workspace during routine operations. Virtual barriers can be developed to protect fixed objects in the workspace, and regions can be programmed that attract the workpiece to a certain position and orientation. However, the robot is still under complete control of the human operator, with the trajectory being determined and commanded using the judgment of the operator to complete a given task. In many cases, the IAD is built in a configuration that may have singular points inside the workspace. These singularities can cause problems when the unstructured trajectory commands from the human cause interaction between the IAD and the virtual walls and fixtures at positions close to these singularities. The research presented here explores the stability effects of the interactions between the powered manipulator and the virtual surfaces when controlled by the operator. Because of the flexible nature of the human decisions determining the real-time workpiece paths, manipulator singularities that occur in conjunction with the virtual surfaces raise stability issues in the performance around these singularities. We examine these stability issues in the context of a particular IAD configuration, and present analytic results for the performance and stability of these systems in response to the real-time trajectory modification of the human operator.
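
    The mechanism can be illustrated with a planar two-link arm, for which the Jacobian determinant vanishes when the elbow is fully extended or folded; virtual-surface forces commanded near such configurations are amplified through the inverse Jacobian. A sketch (the arm model and link lengths are illustrative, not the IAD analyzed in the paper):

        import numpy as np

        def manipulability(theta1, theta2, l1=1.0, l2=0.8):
            # |det J| for a planar 2-link arm; near zero means near-singular.
            s1, s12 = np.sin(theta1), np.sin(theta1 + theta2)
            c1, c12 = np.cos(theta1), np.cos(theta1 + theta2)
            J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                          [ l1 * c1 + l2 * c12,  l2 * c12]])
            return abs(np.linalg.det(J))  # equals l1 * l2 * |sin(theta2)|

    Joint torques needed to render a virtual wall scale roughly with the inverse of this quantity, so monitoring it can, for example, trigger extra damping before the operator drives the device through a singular pose.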

  10. The Real World and Virtual Worlds.

    ERIC Educational Resources Information Center

    Glaser, Stan

    1997-01-01

    Discusses some of the limitations of virtual reality (VR) with reference to socio-technical systems, i.e., the interaction of people with technology. Points to a significant opportunity for VR technology to be used in strategic partnership marketing and supply chain management. (Author/LRW)

  11. Virtual Campus Tours.

    ERIC Educational Resources Information Center

    Jarrell, Andrea

    1999-01-01

    College campus "tours" offered online have evolved to include 360-degree views, live video, animation, talking tour guides, interactive maps with photographic links, and detailed information about buildings, departments, and programs. Proponents feel they should enhance, not replace, real tours. The synergy between the virtual tour and…

  12. An Immersive VR System for Sports Education

    NASA Astrophysics Data System (ADS)

    Song, Peng; Xu, Shuhong; Fong, Wee Teck; Chin, Ching Ling; Chua, Gim Guan; Huang, Zhiyong

    The development of new technologies has undoubtedly promoted the advances of modern education, among which Virtual Reality (VR) technologies have made education more visually accessible for students. However, classroom education has been the focus of VR applications, whereas not much research has been done on promoting sports education with VR technologies. In this paper, an immersive VR system is designed and implemented to create a more intuitive and visual way of teaching tennis. A scalable system architecture is proposed in addition to the hardware setup layout, which can be used for various immersive interactive applications such as architecture walkthroughs, military training simulations, other sports-game simulations, interactive theaters, and telepresent exhibitions. A realistic interaction experience is achieved through accurate and robust hybrid tracking technology, while the virtual human opponent is animated in real time using shader-based skin deformation. Potential future extensions are also discussed to improve the teaching/learning experience.

  13. Human Robotic Swarm Interaction Using an Artificial Physics Approach

    DTIC Science & Technology

    2014-12-01

    …calculates virtual forces that are summed and translated into velocity commands. The virtual forces are modeled after real physical forces, such as gravitational and Coulomb forces, but are not restricted to them; for example, the force magnitude may not be… Results from the physical experiments show that an artificial physics-based framework is an effective way to allow multiple agents to follow a human…
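
    As a sketch of this control loop, the hypothetical routine below sums gravitational-style pairwise forces from neighboring agents and returns the resultant directly as a velocity command (the gain and force law are illustrative; the abstract notes the magnitude need not follow a physical law):

        import numpy as np

        def velocity_command(pos, neighbors, gain=1.0, eps=1e-6):
            # Sum pairwise attractive virtual forces from neighbor positions
            # and return the resultant directly as a velocity command.
            v = np.zeros(2)
            for q in neighbors:
                r = q - pos
                d = np.linalg.norm(r) + eps
                v += gain * r / d**3  # inverse-square magnitude along r/d
            return v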

  14. When the display matters: A multifaceted perspective on 3D geovisualizations

    NASA Astrophysics Data System (ADS)

    Juřík, Vojtěch; Herman, Lukáš; Šašinka, Čeněk; Stachoň, Zdeněk; Chmelík, Jiří

    2017-04-01

    This study explores the influence of stereoscopic (real) 3D and monoscopic (pseudo) 3D visualization on the human ability to reckon altitude information in non-interactive and interactive 3D geovisualizations. A two-phased experiment was carried out to compare the performance of two groups of participants, one using real 3D and the other pseudo-3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, was tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, with static 3D visual displays presented; the second phase was designed as interactive, and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands, and the amount of motor activity the participant performed while interacting with the geovisualization. The interface was created using a motion capture system, a Wii Remote controller, widescreen projection, and passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.

  15. Review of virtual reality treatment for mental health.

    PubMed

    Gourlay, D; Lun, K C; Liya, G

    2001-01-01

    This paper describes recent research that proposes virtual reality techniques as a therapy for patients with cognitive and psychological problems, specifically victims of conditions such as traumatic brain injury, Alzheimer's disease, and Parkinson's disease. Additionally, virtual reality therapy offers an alternative to current desensitization techniques for the treatment of phobias. Some important issues are examined, including means of user interaction, skills transfer to the real world, and side effects of virtual reality exposure.

  16. Neurosurgery simulation using non-linear finite element modeling and haptic interaction

    NASA Astrophysics Data System (ADS)

    Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet

    2012-02-01

    Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable-body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.
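
    The TLED formulation is explicit: with a lumped (diagonal) mass matrix, each node's update is independent of the others, which is what makes per-node GPU parallelism straightforward. A sketch of one central-difference time step (the damping term and variable names are illustrative, not the paper's exact scheme):

        import numpy as np

        def tled_step(u, u_prev, f_int, f_ext, m, dt, alpha=0.0):
            # One explicit central-difference step with lumped nodal mass m.
            # u, u_prev: current/previous nodal displacements (N x 3).
            # f_int: internal forces from the total Lagrangian element loop.
            # alpha: mass-proportional damping coefficient (hypothetical).
            a = (f_ext - f_int) / m[:, None]  # diagonal "solve", per node
            return 2.0 * u - u_prev + dt * dt * a - alpha * dt * (u - u_prev)

    The stability sensitivity noted above is intrinsic to such explicit schemes: the stable time step shrinks as the material stiffens, and fast collisions inject large forces within a single step.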

  17. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

    Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) the ability of VMs to take arbitrary leaps in virtual time to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% for a non-simulation scheduler to less than 1%, with almost the same run-time efficiency as that of the highly efficient non-simulation VM schedulers.
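
    In outline, simulation time-ordered scheduling amounts to always dispatching the virtual core with the smallest virtual clock, letting idle cores leap forward in virtual time. A sketch with a hypothetical vcore interface (not the system's actual API):

        import heapq

        def run_time_ordered(vcores, quantum, horizon):
            # Always dispatch the virtual core with the smallest virtual clock.
            # vcores[i].run(dt) advances the core by up to dt of virtual time
            # and returns the virtual time actually consumed (an idle core may
            # return a larger leap, as feature (2) above allows).
            heap = [(vc.vclock, i) for i, vc in enumerate(vcores)]
            heapq.heapify(heap)
            while heap:
                vt, i = heapq.heappop(heap)
                if vt >= horizon:
                    break
                elapsed = vcores[i].run(quantum)
                heapq.heappush(heap, (vt + elapsed, i))

    The quantum corresponds to feature (1): a coarser quantum reduces scheduling overhead at the cost of looser time ordering, which is the tradeoff the paper measures empirically.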

  18. Effects of virtual reality training on functional reaching movements in people with Parkinson's disease: a randomized controlled pilot trial.

    PubMed

    Ma, Hui-Ing; Hwang, Wen-Juh; Fang, Jing-Jing; Kuo, Jui-Kun; Wang, Ching-Yi; Leong, Iat-Fai; Wang, Tsui-Ying

    2011-10-01

    To investigate whether practising reaching for virtual moving targets would improve motor performance in people with Parkinson's disease. Randomized pretest-posttest control group design. A virtual reality laboratory in a university setting. Thirty-three adults with Parkinson's disease. The virtual reality training required 60 trials of reaching for fast-moving virtual balls with the dominant hand. The control group had 60 practice trials turning pegs with their non-dominant hand. Pretest and posttest required reaching with the dominant hand to grasp real stationary balls and balls moving at different speeds down a ramp. Success rates and kinematic data (movement time, peak velocity and percentage of movement time for acceleration phase) from pretest and posttest were recorded to determine the immediate transfer effects. Compared with the control group, the virtual reality training group became faster (F = 9.08, P = 0.005) and more forceful (F = 9.36, P = 0.005) when reaching for real stationary balls. However, there was no significant difference in success rate or movement kinematics between the two groups when reaching for real moving balls. A short virtual reality training programme improved the movement speed of discrete aiming tasks when participants reached for real stationary objects. However, the transfer effect was minimal when reaching for real moving objects.

  19. A discrete mechanics framework for real time virtual surgical simulations with application to virtual laparoscopic nephrectomy.

    PubMed

    Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert

    2009-01-01

    The inability to render realistic soft-tissue behavior in real time has remained a barrier to the face and content aspects of validity for many virtual reality surgical training systems. Biophysically based models are suitable not only for training purposes but also for patient-specific clinical applications, physiological modeling, and surgical planning. Among the existing approaches to modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability; the mass-spring model (MSM) approach lacks biophysically realistic soft-tissue dynamic behavior; and finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a space-discrete dynamic system, it directly formulates the space-discrete but time-continuous governing equation with the material constitutive relation embedded, resulting in a discrete mechanics framework that strikes a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with attention focused on a virtual laparoscopic nephrectomy application.

  20. A Virtual Reality Simulator Prototype for Learning and Assessing Phaco-sculpting Skills

    NASA Astrophysics Data System (ADS)

    Choi, Kup-Sze

    This paper presents a virtual reality-based simulator prototype for learning phacoemulsification in cataract surgery, focusing on the skills required for making a cross-shaped trench in a cataractous lens with an ultrasound probe during the phaco-sculpting procedure. An immersive virtual environment is created with 3D models of the lens and surgical tools, and a haptic device is used as the 3D user interface. Phaco-sculpting is simulated by interactively deleting the constituent tetrahedra of the lens model. Collisions between the virtual probe and the lens are identified efficiently by partitioning the space containing the lens hierarchically with an octree. The simulator can be programmed to collect real-time quantitative user data for reviewing and assessing a trainee's performance in an objective manner. A game-based learning environment can be created on top of the simulator by incorporating gaming elements based on the quantifiable performance metrics.
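
    Hierarchical space partitioning of this kind prunes collision tests by descending only into octree cells the probe tip can actually reach. A sketch of a sphere query against an axis-aligned octree (node fields and types are hypothetical):

        def query_octree(node, point, radius, hits):
            # Collect tetrahedra whose leaf cells a probe-tip sphere touches.
            # node.bounds = (lo, hi) corners; node.children is empty at leaves;
            # node.tets holds the tetrahedra stored in a leaf.
            lo, hi = node.bounds
            # Squared distance from the point to the cell (0 if inside it).
            d2 = sum(max(l - p, 0.0, p - h) ** 2
                     for p, l, h in zip(point, lo, hi))
            if d2 > radius * radius:
                return                      # sphere cannot reach this cell
            if not node.children:
                hits.extend(node.tets)      # leaf: defer to exact tet tests
                return
            for child in node.children:
                query_octree(child, point, radius, hits)

    Only the tetrahedra gathered in hits then need exact intersection tests, which is what keeps the probe-lens collision query within a haptic-rate time budget.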

  1. Virtual environments simulation in research reactor

    NASA Astrophysics Data System (ADS)

    Muhamad, Shalina Bt. Sheik; Bahrin, Muhammad Hannan Bin

    2017-01-01

    Virtual reality-based simulations are interactive and engaging and have useful potential for improving safety training. Virtual reality technology can be used to train workers who are unfamiliar with the physical layout of an area. In this study, a simulation program based on a virtual environment of a research reactor was developed. The platform used for the virtual simulation is the 3DVia software, whose rendering capabilities, movement and collision physics, and interactive navigation features were taken advantage of. A real research reactor was virtually modelled and simulated, with avatar models adopted to simulate walking. Collision detection algorithms were developed for various parts of the 3D building and the avatars to restrain the avatars to certain regions of the virtual environment. A user can control an avatar to move around inside the virtual environment. Thus, this work can assist in the training of personnel as well as in evaluating the radiological safety of the research reactor facility.

  2. An interactive internet-based plate for assessing lunchtime food intake: a validation study on male employees.

    PubMed

    Svensson, Madeleine; Bellocco, Rino; Bakkman, Linda; Trolle Lagerros, Ylva

    2013-01-18

    Misreporting food intake is common because most health screenings rely on self-reports. The more accurate methods (eg, weighing food) are costly, time consuming, and impractical. We developed a new instrument for reporting food intake--an Internet-based interactive virtual food plate. The objective of this study was to validate this instrument's ability to assess lunch intake. Participants were asked to compose an ordinary lunch meal using both a virtual and a real lunch plate (with real food on a real plate). The participants ate their real lunch meals on-site. Before and after pictures of the composed lunch meals were taken. Both meals included identical food items. Participants were randomized to start with either instrument. The 2 instruments were compared using correlation and concordance measures (total energy intake, nutritional components, quantity of food, and participant characteristics). A total of 55 men (median age: 45 years, median body mass index [BMI]: 25.8 kg/m(2)) participated. We found an overall overestimation of reported median energy intake using the computer plate (3044 kJ, interquartile range [IQR] 1202 kJ) compared with the real lunch plate (2734 kJ, IQR 1051 kJ, P<.001). Spearman rank correlations and concordance correlations for energy intake and nutritional components ranged from 0.58 to 0.79 and from 0.65 to 0.81, respectively. Although it slightly overestimated, our computer plate provides promising results in assessing lunch intake.

  3. An Interactive Internet-Based Plate for Assessing Lunchtime Food Intake: A Validation Study on Male Employees

    PubMed Central

    Bellocco, Rino; Bakkman, Linda; Trolle Lagerros, Ylva

    2013-01-01

    Background Misreporting food intake is common because most health screenings rely on self-reports. The more accurate methods (eg, weighing food) are costly, time consuming, and impractical. Objectives We developed a new instrument for reporting food intake—an Internet-based interactive virtual food plate. The objective of this study was to validate this instrument’s ability to assess lunch intake. Methods Participants were asked to compose an ordinary lunch meal using both a virtual and a real lunch plate (with real food on a real plate). The participants ate their real lunch meals on-site. Before and after pictures of the composed lunch meals were taken. Both meals included identical food items. Participants were randomized to start with either instrument. The 2 instruments were compared using correlation and concordance measures (total energy intake, nutritional components, quantity of food, and participant characteristics). Results A total of 55 men (median age: 45 years, median body mass index [BMI]: 25.8 kg/m2) participated. We found an overall overestimation of reported median energy intake using the computer plate (3044 kJ, interquartile range [IQR] 1202 kJ) compared with the real lunch plate (2734 kJ, IQR 1051 kJ, P<.001). Spearman rank correlations and concordance correlations for energy intake and nutritional components ranged from 0.58 to 0.79 and from 0.65 to 0.81, respectively. Conclusion Although it slightly overestimated, our computer plate provides promising results in assessing lunch intake. PMID:23335728

  4. Virtual nursing grand rounds and shared governance: how innovation and empowerment are transforming nursing practice at Thanh Nhan Hospital, Hanoi, Vietnam.

    PubMed

    Crow, Gregory L; Nguyen, Thanh; DeBourgh, Gregory A

    2014-01-01

    The Vietnam Nurse Project has been operating in Hanoi since 2007. Its primary purpose is to improve nursing education through curriculum development, faculty development, and the introduction of a more student-centric teaching and learning environment. The Virtual Nursing Grand Rounds component of the project is an academic-practice partnership between the Vietnam Nurse Project at the University of San Francisco School of Nursing and Health Professions and the Thanh Nhan Hospital intensive care unit. Its goal is to improve nursing practice in the Thanh Nhan Hospital intensive care unit. The Virtual Nursing Grand Rounds is a fully interactive real-time synchronous computer technology-assisted point-to-point program that provides ongoing evidence-based staff development and consultative services.

  5. Social Gaming and Learning Applications: A Driving Force for the Future of Virtual and Augmented Reality?

    NASA Astrophysics Data System (ADS)

    Dörner, Ralf; Lok, Benjamin; Broll, Wolfgang

    Backed by a large consumer market, entertainment and education applications have spurred developments in the fields of real-time rendering and interactive computer graphics. Relying on Computer Graphics methodologies, Virtual Reality and Augmented Reality have benefited indirectly from this; however, there is no large-scale demand for VR and AR in gaming and learning. What are the shortcomings of current VR/AR technology that prevent widespread use in these application areas? What advances in VR/AR will be necessary? And what might future “VR-enhanced” gaming and learning look like? Which role can and will Virtual Humans play? Concerning these questions, this article analyzes the current situation and provides an outlook on future developments. The focus is on social gaming and learning.

  6. Training software using virtual-reality technology and pre-calculated effective dose data.

    PubMed

    Ding, Aiping; Zhang, Di; Xu, X George

    2009-05-01

    This paper describes the development of a software package, called VR Dose Simulator, which aims to provide interactive radiation safety and ALARA training to radiation workers using virtual-reality (VR) simulations. Combined with a pre-calculated effective dose equivalent (EDE) database, a virtual radiation environment was constructed in the VR authoring software EON Studio, using 3-D models of a real nuclear power plant building. Avatar models representing two workers were adopted, with the arms and legs of each avatar controlled in the software to simulate walking and other postures. Collision detection algorithms were developed for various parts of the 3-D power plant building and the avatars to confine the avatars to certain regions of the virtual environment. Ten camera viewpoints were assigned to conveniently cover the entire virtual scenery from different viewing angles. A user can control an avatar to carry out radiological engineering tasks using two modes of avatar navigation, and can specify either of two radiation source types: Cs and Co. The location of the avatar inside the virtual environment during the course of its movement is linked to the EDE database, and the accumulated dose is calculated and displayed on the screen in real time. Based on the final accumulated dose and the completion status of all virtual tasks, a score is given to evaluate the performance of the user. The paper concludes that VR-based simulation technologies are interactive and engaging, and thus potentially useful in improving the quality of radiation safety training. The paper also summarizes several challenges: more streamlined data conversion, more realistic avatar movement and posture, more intuitive implementation of the data communication between EON Studio and VB.NET, and more versatile utilization of EDE data (such as a source near the body), all of which need to be addressed in future efforts to develop this type of software.
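
    The position-to-dose linkage described above can be sketched as a lookup from the avatar's grid cell into a pre-calculated dose-rate table, integrated over time (cell indexing, units, and names are hypothetical, not the package's actual data layout):

        def accumulate_dose(path, dose_rate_grid, cell_size, dt):
            # path: (x, y, z) avatar positions sampled every dt seconds.
            # dose_rate_grid: maps integer grid cells to a pre-calculated
            # effective dose rate (mSv/h) for the selected Cs or Co source.
            total = 0.0
            for x, y, z in path:
                cell = (int(x // cell_size), int(y // cell_size),
                        int(z // cell_size))
                total += dose_rate_grid.get(cell, 0.0) * dt / 3600.0
            return total  # accumulated dose, shown on screen in real time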

  7. An intelligent virtual human system for providing healthcare information and support.

    PubMed

    Rizzo, Albert A; Lange, Belinda; Buckwalter, John G; Forbell, Eric; Kim, Julia; Sagae, Kenji; Williams, Josh; Rothbaum, Barbara O; Difede, JoAnn; Reger, Greg; Parsons, Thomas; Kenny, Patrick

    2011-01-01

    Over the last 15 years, a virtual revolution has taken place in the use of Virtual Reality simulation technology for clinical purposes. Shifts in the social and scientific landscape have now set the stage for the next major movement in Clinical Virtual Reality with the "birth" of intelligent virtual humans. Seminal research and development has appeared in the creation of highly interactive, artificially intelligent and natural language capable virtual human agents that can engage real human users in a credible fashion. No longer at the level of a prop to add context or minimal faux interaction in a virtual world, virtual humans can be designed to perceive and act in a 3D virtual world, engage in spoken dialogues with real users and can be capable of exhibiting human-like emotional reactions. This paper will present an overview of the SimCoach project that aims to develop virtual human support agents to serve as online guides for promoting access to psychological healthcare information and for assisting military personnel and family members in breaking down barriers to initiating care. The SimCoach experience is being designed to attract and engage military Service Members, Veterans and their significant others who might not otherwise seek help with a live healthcare provider. It is expected that this experience will motivate users to take the first step--to empower themselves to seek advice and information regarding their healthcare and general personal welfare and encourage them to take the next step towards seeking more formal resources if needed.

  8. Interactive Sound Propagation using Precomputation and Statistical Approximations

    NASA Astrophysics Data System (ADS)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques, Ambient Reverberance and Aural Proxies, to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at opposite ends of a spectrum of techniques for modeling sound propagation effects in interactive applications: the first emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second emphasizes efficiency by taking only the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine and can enable realistic, interactive sound propagation on commodity desktop PCs.
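
    Conceptually, the precomputed transfer operator turns run-time propagation into a matrix-vector product: per-frame direct source-to-patch energy goes in, the precomputed inter-patch operator is applied, and the result is gathered at the listener. A dense-matrix sketch under heavy simplification (energy-only, no frequency bands; names are illustrative):

        import numpy as np

        def listener_energy(T, source_to_patch, patch_to_listener):
            # T: precomputed matrix mapping per-patch emitted energy to
            # per-patch outgoing energy after multiple acoustic interactions.
            # source_to_patch: direct source energy reaching each patch,
            # recomputed every frame as the sources move.
            # patch_to_listener: gather weights from patches to the listener.
            patch_energy = T @ source_to_patch
            return patch_to_listener @ patch_energy

    Because only the two vectors change per frame while T is fixed offline, the sound field varies smoothly with source and listener motion at interactive rates.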

  9. Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality.

    PubMed

    Han, Dustin T; Suhail, Mohamed; Ragan, Eric D

    2018-04-01

    Virtual reality often uses motion tracking to incorporate physical hand movements into interaction techniques for selection and manipulation of virtual objects. To increase realism and allow direct hand interaction, real-world physical objects can be aligned with virtual objects to provide tactile feedback and physical grasping. However, unless a physical space is custom configured to match a specific virtual reality experience, the ability to perfectly match the physical and virtual objects is limited. Our research addresses this challenge by studying methods that allow one physical object to be mapped to multiple virtual objects that can exist at different virtual locations in an egocentric reference frame. We study two such techniques: one that introduces a static translational offset between the virtual and physical hand before a reaching action, and one that dynamically interpolates the position of the virtual hand during a reaching motion. We conducted two experiments to assess how the two methods affect reaching effectiveness, comfort, and ability to adapt to the remapping techniques when reaching for objects with different types of mismatches between physical and virtual locations. We also present a case study to demonstrate how the hand remapping techniques could be used in an immersive game application to support realistic hand interaction while optimizing usability. Overall, the translational technique performed better than the interpolated reach technique and was more robust for situations with larger mismatches between virtual and physical objects.
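
    The two techniques compared in the paper can be stated compactly: a constant offset applied before the reach, versus an interpolation whose influence grows with reach progress so that the full mismatch is absorbed by the moment of contact. A sketch assuming straight-line reaches (all names are illustrative):

        import numpy as np

        def static_offset(real_hand, offset):
            # Translational technique: constant shift between the physical
            # and virtual hand, introduced before the reaching action.
            return real_hand + offset

        def interpolated_reach(real_hand, start, phys_target, virt_target):
            # Interpolated technique: the virtual hand drifts toward the
            # virtual target in proportion to progress toward the prop.
            total = np.linalg.norm(phys_target - start) + 1e-9
            progress = np.clip(np.linalg.norm(real_hand - start) / total,
                               0.0, 1.0)
            return real_hand + progress * (virt_target - phys_target)

    At progress 1 the virtual hand arrives at the virtual object exactly as the physical hand touches the prop, which is what lets one physical object stand in for several virtual ones.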

  10. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145
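
    One way to read the jitter/latency balance is as an adaptive blend between the delayed-but-smooth visual estimate and the fast-but-drifting inertial prediction. A single-gain sketch (the paper's adaptive filter is more elaborate; the gain law and constants here are illustrative):

        import numpy as np

        def fuse_pose(p_visual, p_inertial, speed):
            # speed: motion magnitude pre-normalized to [0, 1] (hypothetical).
            # Slow motion: pull strongly toward the smooth but delayed visual
            # estimate to suppress jitter; fast motion: follow the inertial
            # prediction to cut latency and ride out motion blur.
            w = float(np.clip(speed, 0.0, 1.0))
            k = (1.0 - w) * 0.6 + w * 0.05  # illustrative gain endpoints
            return p_inertial + k * (p_visual - p_inertial)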

  11. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.

  12. Crowd behaviour during high-stress evacuations in an immersive virtual environment

    PubMed Central

    Kapadia, Mubbasir; Thrash, Tyler; Sumner, Robert W.; Gross, Markus; Helbing, Dirk; Hölscher, Christoph

    2016-01-01

    Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects. PMID:27605166

  13. Crowd behaviour during high-stress evacuations in an immersive virtual environment.

    PubMed

    Moussaïd, Mehdi; Kapadia, Mubbasir; Thrash, Tyler; Sumner, Robert W; Gross, Markus; Helbing, Dirk; Hölscher, Christoph

    2016-09-01

    Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects. © 2016 The Authors.

  14. Characteristics of a Virtual Community for Individuals Who Are d/Deaf and Hard of Hearing

    ERIC Educational Resources Information Center

    Shoham, Snunith; Heber, Meital

    2012-01-01

    The content of 2,050 messages on a virtual forum for d/Deaf and hard of hearing people in Israel was analyzed. Interactions and behavior were monitored to determine if behavior on the forum expressed social support, and whether the community was an entirely virtual community or a real community whose members also met in other venues. Subjects…

  15. Virtual Reality Enhanced Instructional Learning

    ERIC Educational Resources Information Center

    Nachimuthu, K.; Vijayakumari, G.

    2009-01-01

    Virtual Reality (VR) is the creation of a virtual 3D world in which one can feel and sense the world as if it were real. It allows engineers to design machines and educationists to design AV [audiovisual] equipment in real time, in a 3-dimensional hologram, as if the actual material were being made and worked upon. VR allows a least-cost (energy…

  16. Haptic Technologies for MEMS Design

    NASA Astrophysics Data System (ADS)

    Calis, Mustafa; Desmulliez, Marc P. Y.

    2006-04-01

    This paper presents for the first time a design methodology for MEMS/NEMS based on haptic sensing technologies. The software tool created as a result of this methodology will enable designers to model and interact in real time with their virtual prototypes. One of the main advantages of haptic sensing is the ability to bring unusual microscopic forces back to the designer's world. Other significant benefits of developing such a methodology include gains in productivity and the capability to include manufacturing costs within the design cycle.

  17. [Virtual microscopy in pathology teaching and postgraduate training (continuing education)].

    PubMed

    Sinn, H P; Andrulis, M; Mogler, C; Schirmacher, P

    2008-11-01

    As with conventional microscopy, virtual microscopy permits histological tissue sections to be viewed on a computer screen with a free choice of viewing areas and a wide range of magnifications. This, combined with the possibility of linking virtual microscopy to e-learning courses, makes virtual microscopy an ideal tool for teaching and postgraduate training in pathology. Uses of virtual microscopy in pathology teaching include blended learning, with digital teaching slides presented on the Internet in parallel with their presentation in the histology lab, extending student access to histology slides beyond the lab. Other uses are student self-learning on the Internet, as well as the presentation of virtual slides in the classroom, with or without replacing real microscopes. Successful integration of virtual microscopy depends on its embedding in the virtual classroom and the creation of interactive e-learning content. Derived applications include the use of virtual microscopy in video clips, podcasts, and SCORM modules, and the presentation of virtual microscopy on interactive whiteboards in the classroom.

  18. Human Pacman: A Mobile Augmented Reality Entertainment System Based on Physical, Social, and Ubiquitous Computing

    NASA Astrophysics Data System (ADS)

    Cheok, Adrian David

    This chapter details the Human Pacman system to illuminate entertainment computing, which ventures to embed the natural physical world seamlessly within a fantasy virtual playground by capitalizing on infrastructure provided by mobile computing, wireless LAN, and ubiquitous computing. With Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile gaming that emphasizes collaboration and competition between players across a wide outdoor physical area, allowing natural wide-area human physical movement. Pacmen and Ghosts are now real human players in the real world, experiencing mixed computer-graphics fantasy-reality through the wearable computers they carry. Virtual cookies and actual tangible physical objects are incorporated into the game play to provide novel experiences of seamless transitions between the real and virtual worlds. This is an example of a new form of gaming that is anchored in physicality, mobility, social interaction, and ubiquitous computing.

  19. Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.

    PubMed

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-02-21

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in mobile robotics, addressing the problems that arise in real-world experiments. The laboratory allows users to work from home, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interaction with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), using the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing. Practical examples of the application of the laboratory in the inter-University Master of Systems Engineering and Automatic Control are presented.

  1. Achievement of Virtual and Real Objects Using a Short-Term Motor Learning Protocol in People with Duchenne Muscular Dystrophy: A Crossover Randomized Controlled Trial.

    PubMed

    Massetti, Thais; Fávero, Francis Meire; Menezes, Lilian Del Ciello de; Alvarez, Mayra Priscila Boscolo; Crocetta, Tânia Brusque; Guarnieri, Regiani; Nunes, Fátima L S; Monteiro, Carlos Bandeira de Mello; Silva, Talita Dias da

    2018-04-01

    To evaluate whether people with Duchenne muscular dystrophy (DMD) who practice a task in a virtual environment can improve performance on a similar task in a real environment, and to determine whether learning transfers from a virtual environment to a real one and vice versa. Twenty-two people with DMD were evaluated and divided into two groups. The goal was to reach out and touch a red cube. Group A began with the real task, touching a real object; Group B began with the virtual task, reaching for a virtual object using the Kinect system. ANOVA showed that all participants decreased their movement time from the first (M = 973 ms) to the last block of acquisition (M = 783 ms) in both the virtual and real tasks, and motor learning could be inferred from the short-term retention and transfer tasks (with increasing distance of the target). However, the evaluation of task performance demonstrated that the virtual task produced inferior performance compared to the real task in all phases of the study, and there was no effect of sequence. Both virtual and real tasks promoted improvement of performance in the acquisition phase, short-term retention, and transfer; however, there was no transfer of learning between environments. In conclusion, the use of virtual environments for individuals with DMD needs to be considered carefully.

  2. Real-time and interactive virtual Doppler ultrasound

    NASA Astrophysics Data System (ADS)

    Hirji, Samira; Downey, Donal B.; Holdsworth, David W.; Steinman, David A.

    2005-04-01

    This paper describes our "virtual" Doppler ultrasound (DUS) system, in which colour DUS (CDUS) images and DUS spectrograms are generated on-the-fly and displayed in real-time in response to position and orientation cues provided by a magnetically tracked handheld probe. As the presence of complex flow often confounds the interpretation of Doppler ultrasound data, this system will serve to be a fundamental tool for training sonographers and gaining insight into the relationship between ambiguous DUS images and complex blood flow dynamics. Recently, we demonstrated that DUS spectra could be realistically simulated in real-time, by coupling a semi-empirical model of the DUS physics to a 3-D computational fluid dynamics (CFD) model of a clinically relevant flow field. Our system is an evolution of this approach where a motion-tracking device is used to continuously update the origin and orientation of a slice passing through a CFD model of a stenosed carotid bifurcation. After calibrating our CFD model onto a physical representation of a human neck, virtual CDUS images from an instantaneous slice are then displayed at a rate of approximately 15 Hz by simulating, on-the-fly, an array of DUS spectra and colour coding the resulting spectral mean velocity using a traditional Doppler colour scale. Mimicking a clinical examination, the operator can freeze the CDUS image on-screen, and a spectrogram corresponding to the selected sample volume location is rendered at a higher frame rate of at least 30 Hz. All this is achieved using an inexpensive desktop workstation and commodity graphics card.

  3. Modeling human behaviors and reactions under dangerous environment.

    PubMed

    Kang, J; Wright, D K; Qin, S F; Zhao, Y

    2005-01-01

    This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics, and psychology, to model human behavior, reactions, and decision making under expected and unexpected dangers, in real time, in virtual environments. The development of the system includes: classification of the conscious/subconscious behaviors and reactions of different people; capturing different motion postures with the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling the characters' perceptions, decision making, movements, and interaction with the environment, and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, safety planning in chemical factories, and the design of buildings, airplanes, ships, and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence; the accurate modeling of human vision, smell, touch, and hearing; and the diversity and effects of emotion and personality in decision making. There are three types of software platforms that could be employed to realize the motion and intelligence within one system, and their advantages and disadvantages are discussed.

  4. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects

    PubMed Central

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.

    2012-01-01

    Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
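
    The offline/online split is the heart of the method. The sketch below illustrates that split under toy assumptions: a made-up response function stands in for the finite element database, and the RBF width, center selection, and function names are all invented for the example.

```python
import numpy as np

# Toy sketch of the offline/online split: fit an RBF network to
# precomputed (displacement -> response) samples offline, then evaluate
# it cheaply online. The data and parameters are invented stand-ins.

def rbf_design_matrix(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# --- Offline: "database" of prescribed displacements and responses ---
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 3))   # prescribed displacements
y_train = np.sin(X_train).sum(axis=1)         # stand-in for FEM response
centers = X_train[::10]                       # 20 RBF centers
Phi = rbf_design_matrix(X_train, centers, width=0.5)
coeffs, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

# --- Online: evaluation is one small matrix product, cheap enough to
# sit inside a high-rate haptic loop.
def predict(x):
    phi = rbf_design_matrix(x[None, :], centers, width=0.5)
    return float((phi @ coeffs)[0])

print(predict(np.array([0.1, -0.2, 0.3])))
```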

  5. Teaching Physics Using Virtual Reality

    NASA Astrophysics Data System (ADS)

    Savage, C.; McGrath, D.; McIntyre, T.; Wegener, M.; Williamson, M.

    2010-07-01

    We present an investigation of game-like simulations for physics teaching. We report on the effectiveness of the interactive simulation "Real Time Relativity" for learning special relativity. We argue that the simulation not only enhances traditional learning, but also enables new types of learning that challenge the traditional curriculum. The lessons drawn from this work are being applied to the development of a simulation for enhancing the learning of quantum mechanics.

  6. Health informatics 3.0.

    PubMed

    Kalra, Dipak

    2011-01-01

    Web 3.0 promises us smart computer services that will interact with each other and leverage knowledge about us and our immediate context to deliver prioritised and relevant information to support decisions and actions. Healthcare must take advantage of such new knowledge-integrating services, in particular to support better co-operation between professionals of different disciplines working in different locations, and to enable well-informed co-operation between clinicians and patients. To grasp the potential of Web 3.0, we will need well-harmonised semantic resources that can richly connect virtual teams and link their strategies to real-time and tailored evidence. Facts, decision logic, care pathway steps, alerts, and educational content need to be embedded within components that can interact with multiple EHR systems and services consistently. Using Health Informatics 3.0, a patient's current situation could be compared with the outcomes of very similar patients (from across millions) to deliver personalised care recommendations. The integration of EHRs with biomedical sciences ('omics) research results and predictive models such as the Virtual Physiological Human could help speed up the translation of new knowledge into clinical practice. The mission, and challenge, for Health Informatics 3.0 is to enable healthy citizens, patients and professionals to collaborate within a knowledge-empowered social network in which patient-specific information and personalised real-time evidence are seamlessly interwoven.

  7. Individuals with severely impaired vision can learn useful orientation and mobility skills in virtual streets and can use them to improve real street safety.

    PubMed

    Bowman, Ellen Lambert; Liu, Lei

    2017-01-01

    Virtual reality has great potential for training road safety skills in individuals with low vision, but the feasibility of such training has not been demonstrated. We tested the hypotheses that low vision individuals could learn useful skills in virtual streets and could apply them to improve real street safety. Twelve participants, whose vision was too poor to use the pedestrian signals, were taught by a certified orientation and mobility specialist to determine the safest time to cross the street using the visual and auditory cues produced by the start of previously stopped cars at a traffic-light-controlled intersection. Four participants were trained in real streets and eight in virtual streets presented on 3 projection screens. The crossing timing of all participants was evaluated in real streets before and after training. The participants were instructed to say "GO" at the time they felt it was safest to cross the street. A safety score was derived to quantify the GO calls based on their occurrence in the pedestrian phase (when the pedestrian sign did not show DON'T WALK). Before training, more than 50% of the GO calls from all participants fell in the DON'T WALK phase of the traffic cycle and thus were totally unsafe. 20% of the GO calls fell in the latter half of the pedestrian phase; these calls were unsafe because a pedestrian who initiated crossing this late might not have sufficient time to walk across the street. After training, 90% of the GO calls fell in the early half of the pedestrian phase; these calls were safer because crossing initiated early in the pedestrian phase left at least half of the phase for walking across. Similar safety changes occurred in both virtual street and real street trained participants. An ANOVA showed a significant increase in the safety scores after training, with no difference in this safety improvement between the virtual street and real street trained participants. This study demonstrated that virtual reality-based orientation and mobility training can be as effective as real street training in improving street safety in individuals with severely impaired vision.

  8. A virtual reality based simulator for learning nasogastric tube placement.

    PubMed

    Choi, Kup-Sze; He, Xuejian; Chiang, Vico Chung-Lim; Deng, Zhaohong

    2015-02-01

    Nasogastric tube (NGT) placement is a common clinical procedure in which a plastic tube is inserted into the stomach through the nostril for feeding or drainage. However, the placement is a blind process in which the tube may be mistakenly inserted into other locations, leading to unexpected complications or fatal incidents. The placement techniques are conventionally acquired by practising on unrealistic rubber mannequins or on humans. In this paper, a virtual reality based training simulation system is proposed to facilitate the training of NGT placement. It focuses on the simulation of tube insertion and the rendering of the feedback forces with a haptic device. A hybrid force model is developed to compute the forces analytically or numerically under different conditions, including situations in which the patient is swallowing or the tube is buckled at the nostril. To ensure real-time interactive simulation, an offline simulation approach is adopted to obtain the relationship between insertion depth and insertion force using a nonlinear finite element method. The offline dataset is then used to generate real-time feedback forces by interpolation. The virtual training process is logged quantitatively, with metrics that can be used for assessing objective performance and tracking progress. The system has been evaluated by nursing professionals, who found the haptic feeling produced by the simulated forces similar to their experience during real NGT insertion. The proposed system provides a new educational tool to enhance conventional training in NGT placement.
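
    The offline-plus-interpolation pattern can be sketched in a few lines. Below, a made-up depth-force curve stands in for the nonlinear finite element results; only the interpolation pattern reflects the approach described above.

```python
import numpy as np

# Sketch of the offline-table-plus-interpolation pattern. The depth-force
# curve is a made-up stand-in for the paper's finite element results.

depth_mm = np.linspace(0.0, 300.0, 61)                     # offline grid
force_n = 0.02 * depth_mm + 0.5 * np.sin(depth_mm / 25.0)  # toy FEM output

def feedback_force(current_depth_mm):
    """Interpolate the precomputed dataset at the current insertion depth."""
    return float(np.interp(current_depth_mm, depth_mm, force_n))

print(feedback_force(137.5))
```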

  9. Body Image and Anti-Fat Attitudes: An Experimental Study Using a Haptic Virtual Reality Environment to Replicate Human Touch.

    PubMed

    Tremblay, Line; Roy-Vaillancourt, Mélina; Chebbi, Brahim; Bouchard, Stéphane; Daoust, Michael; Dénommée, Jessica; Thorpe, Moriah

    2016-02-01

    It is well documented that anti-fat attitudes influence the interactions individuals have with overweight people. However, testing attitudes through self-report measures is challenging. In the present study, we explore the use of a haptic virtual reality environment to physically interact with an overweight virtual human (VH). We verify the hypothesis that the duration and strength of virtual touch vary according to the characteristics of the VH in ways similar to those encountered in interactions with real people in anti-fat attitude studies. A group of 61 participants were randomly assigned to one of the experimental conditions, which involved giving a virtual hug to a female or male VH of either normal weight or overweight. We found significant associations between body image satisfaction and anti-fat attitudes, and sex differences on these measures. We also found a significant interaction effect of the sex of the participants, the sex of the VH, and the body size of the VH: female participants hugged the overweight female VH longer than the overweight male VH, whereas male participants hugged the normal-weight VH longer than the overweight VH. We conclude that virtual touch is a promising method of measuring attitudes, emotions, and social interactions.

  10. Real behavior in virtual environments: psychology experiments in a simple virtual-reality paradigm using video games.

    PubMed

    Kozlov, Michail D; Johansen, Mark K

    2010-12-01

    The purpose of this research was to illustrate the broad usefulness of simple video-game-based virtual environments (VEs) for psychological research on real-world behavior. To this end, this research explored several high-level social phenomena in a simple, inexpensive computer-game environment: the reduced likelihood of helping under time pressure and the bystander effect, which is reduced helping in the presence of bystanders. In the first experiment, participants had to find the exit in a virtual labyrinth under either high or low time pressure. They encountered rooms with and without virtual bystanders, and in each room, a virtual person requested assistance. Participants helped significantly less frequently under time pressure but the presence/absence of a small number of bystanders did not significantly moderate helping. The second experiment increased the number of virtual bystanders, and participants were instructed to imagine that these were real people. Participants helped significantly less in rooms with large numbers of bystanders compared to rooms with no bystanders, thus demonstrating a bystander effect. These results indicate that even sophisticated high-level social behaviors can be observed and experimentally manipulated in simple VEs, thus implying the broad usefulness of this paradigm in psychological research as a good compromise between experimental control and ecological validity.

  11. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component, and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system, but also supports Virtual Reality metaphors and augmentations that facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate, in real time, information from sensors at different levels of abstraction helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture that make it generally applicable to the distributed control of automation agents in real-world applications are explained. Its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.

  12. ARC+(Registered Trademark) and ARC PC Welding Simulators: Teach Welders with Virtual Interactive 3D Technologies

    NASA Technical Reports Server (NTRS)

    Choquet, Claude

    2011-01-01

    123 Certification Inc., a Montreal-based company, has developed an innovative hands-on welding simulator solution to help build the welding workforce in the simplest way possible. The solution lies in virtual reality technology, which has been fully tested since the early 1990s. The president and founder of 123 Certification Inc., Mr. Claude Choquet, Ing., M.Sc., IWE, acts as a bridge between the welding and programming worlds. Having worked in these fields for more than 20 years, he has filed 12 patents worldwide for a gesture control platform with leading-edge hardware related to simulation. In the summer of 2006, Mr. Choquet was invited to the annual IIW International Welding Congress in Quebec City to launch the ARC+ welding simulator. A 100% virtual reality system and web-based training center was developed to simulate multi-process, multi-material, multi-position, and multi-pass welding. The simulator is intended to train welding students and apprentices in schools or industries. The welding simulator is composed of a real welding electrode holder (SMAW-GTAW) and gun (GMAW-FCAW), a head-mounted display (HMD), a 6-degrees-of-freedom tracking system for interaction between the user's hands and head, as well as external audio speakers. Both guns and the HMD interact online and simultaneously. The welding simulation is based on the laws of physics and on empirical results from detailed analysis of a series of welding tests based on industrial applications tested over the last 20 years. The simulation runs in real time, using a local logic network to determine the quality and shape of the created weld. These results are based on the orientation, distance, and speed of the welding torch and the depth of penetration. The welding process and resulting weld bead are displayed in a virtual environment with screenplay interactive training modules. For review, weld quality and recorded process values can be displayed and diagnosed after welding. To help in the learning process, a learning curve for each student and each Virtual Welding Class can be plotted for an instructor's review or a required third-party evaluation.

  13. Functional performance comparison between real and virtual tasks in older adults

    PubMed Central

    Bezerra, Ítalla Maria Pinheiro; Crocetta, Tânia Brusque; Massetti, Thais; da Silva, Talita Dias; Guarnieri, Regiani; Meira, Cassio de Miranda; Arab, Claudia; de Abreu, Luiz Carlos; de Araujo, Luciano Vieira; Monteiro, Carlos Bandeira de Mello

    2018-01-01

    Introduction: Ageing is usually accompanied by deterioration of physical abilities, such as muscular strength, sensory sensitivity, and functional capacity, making chronic disease and the well-being of older adults new challenges for global public health. Objective: The purpose of this study was to evaluate whether a task practiced in a virtual environment could promote better performance and enable transfer to the same task in a real environment. Method: The study evaluated 65 older adults of both genders, aged 60 to 82 years (M = 69.6, SD = 6.3). A coincident timing task was applied to measure the perceptual-motor ability to perform a motor response. The participants were divided into two groups: one started with a real interface and the other with a virtual interface. Results: All subjects improved their performance during practice, but improvement was not observed for the real interface, as the participants were near maximum performance from the beginning of the task. However, there was no transfer of performance from the virtual to the real environment or vice versa. Conclusions: The virtual environment was shown to provide improvement of performance with a short-term motor learning protocol in a coincident timing task. This result suggests that the practice of tasks in a virtual environment is a promising tool for the assessment and training of healthy older adults, even though there was no transfer of performance to a real environment. Trial registration: ISRCTN02960165. Registered 8 November 2016. PMID:29369177

  14. A Proposed Framework for Collaborative Design in a Virtual Environment

    NASA Astrophysics Data System (ADS)

    Breland, Jason S.; Shiratuddin, Mohd Fairuz

    This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will have, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, and (3) co-existence within the same virtual space. This paper also discusses proposed testing to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and presents results from a pilot test.

  15. Combination of optical shape measurement and augmented reality for task support: II. Real-time feedback of shape measurement results

    NASA Astrophysics Data System (ADS)

    Yamauchi, Makoto; Iwamoto, Kazuyo

    2010-05-01

    Line heating is a skilled task in shipbuilding used to shape the outer plates of ship hulls. Real-time information on the deformation of the plates during the task would be helpful to the workers performing this process. We therefore propose an interactive scheme for supporting workers performing line heating: the system provides such information through an optical shape measurement instrument combined with an augmented reality (AR) system. The instrument was designed and fabricated so that the measured data were represented in coordinates based on fiducial markers. Since the markers were simultaneously used for positioning in the AR system, the data could be displayed to the workers through a head-mounted display as a virtual image overlaid on the plates. Feedback of the shape measurement results was thus achieved in real time using the proposed system.

  16. Conducting real-time multiplayer experiments on the web.

    PubMed

    Hawkins, Robert X D

    2015-12-01

    Group behavior experiments require potentially large numbers of participants to interact in real time with perfect information about one another. In this paper, we address the methodological challenge of developing and conducting such experiments on the web, thereby broadening access to online labor markets as well as allowing for participation through mobile devices. In particular, we combine a set of recent web development technologies, including Node.js with the Socket.io module, HTML5 canvas, and jQuery, to provide a secure platform for pedagogical demonstrations and scalable, unsupervised experiment administration. Template code is provided for an example real-time behavioral game theory experiment which automatically pairs participants into dyads and places them into a virtual world. In total, this treatment is intended to allow those with a background in non-web-based programming to modify the template, which handles the technical server-client networking details, for their own experiments.
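
    The pairing step is the conceptual core of the template. The sketch below reproduces that matchmaking logic in Python for brevity; the actual template implements the equivalent server-side in Node.js with Socket.io, and the function and variable names here are invented.

```python
from collections import deque

# Matchmaking logic only, sketched in Python; the paper's template does
# the equivalent server-side in Node.js/Socket.io. Names are invented.

waiting = deque()   # connected participants who have no partner yet
dyads = []          # formed pairs, each placed into a shared virtual world

def on_connect(participant_id):
    """Pair each newly connected participant with the next one waiting."""
    if waiting:
        partner = waiting.popleft()
        dyads.append((partner, participant_id))
        return ("start_game", partner, participant_id)
    waiting.append(participant_id)
    return ("wait", participant_id)

for pid in ["p1", "p2", "p3", "p4"]:
    print(on_connect(pid))
print(dyads)   # [('p1', 'p2'), ('p3', 'p4')]
```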

  17. Emerging technologies in education and training: applications for the laboratory animal science community.

    PubMed

    Ketelhut, Diane Jass; Niemi, Steven M

    2007-01-01

    This article examines several new and exciting communication technologies. Many of the technologies were developed by the entertainment industry; however, other industries are adopting and modifying them for their own needs. These new technologies allow people to collaborate across distance and time and to learn in simulated work contexts. The article explores the potential utility of these technologies for advancing laboratory animal care and use through better education and training. Descriptions include emerging technologies such as augmented reality and multi-user virtual environments, which offer new approaches with different capabilities. Augmented reality interfaces, characterized by the use of handheld computers to infuse the virtual world into the real one, result in deeply immersive simulations. In these simulations, users can access virtual resources and communicate with real and virtual participants. Multi-user virtual environments enable multiple participants to simultaneously access computer-based three-dimensional virtual spaces, called "worlds," and to interact with digital tools. They allow for authentic experiences that promote collaboration, mentoring, and communication. Because individuals may learn or train differently, it is advantageous to combine the capabilities of these technologies and applications with more traditional methods to increase the number of students who are served by using current methods alone. The use of these technologies in animal care and use programs can create detailed training and education environments that allow students to learn the procedures more effectively, teachers to assess their progress more objectively, and researchers to gain insights into animal care.

  18. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

    In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional manual steps to define and initialize the virtual models, and overlaying virtual elements onto real-time image data can obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools, presented in an augmented virtuality environment, to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work on augmented virtuality techniques, and a considerable improvement over standard-of-care ultrasound guidance.

  19. An optical brain computer interface for environmental control.

    PubMed

    Ayaz, Hasan; Shewokis, Patricia A; Bunce, Scott; Onaral, Banu

    2011-01-01

    A brain computer interface (BCI) is a system that translates neurophysiological signals detected from the brain into input for a computer or a control signal for a device. Volitional control of neural activity and its real-time detection through neuroimaging modalities are key constituents of BCI systems. The purpose of this study was to develop and test a new BCI design that utilizes intention-related cognitive activity within the dorsolateral prefrontal cortex using functional near infrared (fNIR) spectroscopy. fNIR is a noninvasive, safe, portable, and affordable optical technique for monitoring hemodynamic changes in the brain's cerebral cortex. Because of its portability and ease of use, fNIR is amenable to deployment in ecologically valid, natural working environments. We integrated a control paradigm into a computerized 3D virtual environment to augment interactivity. Ten healthy participants volunteered for a two-day study in which they navigated a virtual environment with keyboard inputs but were required to use the fNIR-BCI for interaction with virtual objects. Results showed that participants consistently utilized the fNIR-BCI, with an overall success rate of 84%, and volitionally increased their cerebral oxygenation level to trigger actions within the virtual environment.
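
    One plausible reading of such a control paradigm is a baseline-relative threshold on the oxygenation signal. The sketch below illustrates that idea only; the threshold rule, the margin, and the sample values are assumptions, not details reported in the study.

```python
import statistics

# Hypothetical illustration of a threshold-style trigger: an action fires
# when the oxygenation signal rises a margin above a resting baseline.
# The rule and all numbers are assumed, not taken from the study.

def make_trigger(baseline_samples, margin=2.0):
    mu = statistics.mean(baseline_samples)
    sigma = statistics.stdev(baseline_samples)
    threshold = mu + margin * sigma

    def triggered(sample):
        return sample > threshold
    return triggered

rest = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]   # resting-state samples
triggered = make_trigger(rest)
print(triggered(0.11), triggered(0.25))        # False True
```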

  20. Method and Apparatus for Virtual Interactive Medical Imaging by Multiple Remotely-Located Users

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D. (Inventor); Twombly, Ian Alexander (Inventor); Senger, Steven O. (Inventor)

    2003-01-01

    A virtual interactive imaging system allows the display of high-resolution, three-dimensional images of medical data to a user and allows the user to manipulate the images, including rotation about any of various axes. The system includes a mesh component that generates a mesh to represent a surface of an anatomical object, based on a set of data of the object, such as from a CT or MRI scan. The mesh is generated so as to avoid tears, or holes, providing very high-quality representations of the topographical features of the object, particularly at high resolution. The system further includes a virtual surgical cutting tool that enables the user to simulate the removal of a piece or layer of a displayed object, such as a piece of skin or bone, view the interior of the object, manipulate the removed piece, and reattach it if desired. The system further includes a virtual collaborative clinic component, which allows the users of multiple, remotely located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, three-dimensional images of the object in real time.

  1. Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks

    NASA Astrophysics Data System (ADS)

    Karpov, Kirill; Fedotova, Irina; Siemens, Eduard

    2017-07-01

    In this paper we present a measurement study characterizing the impact of hardware virtualization on basic software timing, as well as on the precision of operating system sleep operations. We investigated how timer hardware is shared among heavily CPU-, I/O- and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM were chosen as commonly used examples of hypervisor-based and host-based models. Based on the statistical parameters of the retrieved distributions, our results provide a very good estimation of timing behavior. This is essential for real-time and performance-critical applications such as image processing or real-time control.
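
    The basic measurement is straightforward to reproduce: request a fixed sleep interval many times and characterize the distribution of the actual elapsed time. The micro-benchmark below shows the pattern; the interval and sample count are arbitrary choices, not the paper's settings, and comparing a native run with a run inside a VM exposes the virtualization overhead.

```python
import statistics
import time

# Request a fixed sleep many times and characterise the overshoot
# distribution. Interval and sample count are arbitrary choices; run
# natively and inside a VM to compare the distributions.

REQUESTED_S = 0.001                      # 1 ms sleep request
samples = []
for _ in range(1000):
    t0 = time.perf_counter()
    time.sleep(REQUESTED_S)
    samples.append(time.perf_counter() - t0)

overshoot_us = [(s - REQUESTED_S) * 1e6 for s in samples]
print(f"mean overshoot: {statistics.mean(overshoot_us):.1f} us")
print(f"stdev:          {statistics.stdev(overshoot_us):.1f} us")
print(f"worst case:     {max(overshoot_us):.1f} us")
```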

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markidis, S.; Rizwan, U.

    The use of a virtual nuclear control room can be an effective and powerful tool for training personnel working in nuclear power plants. Operators can experience and simulate the functioning of the plant, even in critical situations, without being in a real power plant or running any risk. 3D models can be exported to Virtual Reality formats and then displayed in the Virtual Reality environment, providing an immersive 3D experience. However, two major limitations of this approach are that the 3D models exhibit static textures, and that they are not fully interactive and therefore cannot be used effectively in training personnel. In this paper we first describe a possible solution for embedding the output of a computer application in a 3D virtual scene, coupling real-world applications and VR systems. The VR system reported here grabs the output of an application running on an X server, creates a texture from the output, and then displays it on a screen or a wall in the virtual reality environment. We then propose a simple model for providing interaction between the user in the VR system and the running simulator. This approach is based on the use of an internet-based application that can be commanded from a laptop or tablet PC added to the virtual environment. (authors)

  3. Two Impurities in a Bose-Einstein Condensate: From Yukawa to Efimov Attracted Polarons

    NASA Astrophysics Data System (ADS)

    Naidon, Pascal

    2018-04-01

    The well-known Yukawa and Efimov potentials are two different mediated interaction potentials. The first one arises in quantum field theory from the exchange of virtual particles. The second one is mediated by a real particle resonantly interacting with two other particles. This Letter shows how two impurities immersed in a Bose-Einstein condensate can exhibit both phenomena. For a weak attraction with the condensate, the two impurities form two polarons that interact through a weak Yukawa attraction mediated by virtual excitations. For a resonant attraction with the condensate, the exchanged excitation becomes a real boson and the mediated interaction changes to a strong Efimov attraction that can bind the two polarons. The resulting bipolarons turn into in-medium Efimov trimers made of the two impurities and one boson. Evidence of this physics could be seen in ultracold mixtures of atoms.
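
    For reference, the textbook form of a Yukawa attraction (standard notation, not taken from the Letter itself) is an interaction screened over a range set by the mass of the exchanged particle:

```latex
% Generic Yukawa potential between two static sources; g is a coupling
% constant and the range \lambda is set by the exchanged particle's mass m.
V_{\mathrm{Yukawa}}(r) = -\frac{g^{2}}{4\pi}\,\frac{e^{-r/\lambda}}{r},
\qquad \lambda = \frac{\hbar}{m c}.
```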

  4. Passive lighting responsive three-dimensional integral imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yimin; Hu, Juanmei

    2017-11-01

    A three-dimensional (3D) integral imaging (II) technique with real-time passive lighting responsiveness and vivid 3D performance has been proposed and demonstrated. Several novel lighting-responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independently of the diffuse illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting-responsive mechanism of the 3D II system is derived analytically and verified experimentally. A flexible thin-film lighting-responsive II system with a 0.4 mm thickness was fabricated. This technique gives additional degrees of freedom in designing II systems and enables the virtual 3D image to interact with the real illumination environment in real time.

  5. The James Webb Space Telescope RealWorld-InWorld Design Challenge: Involving Professionals in a Virtual Classroom

    NASA Astrophysics Data System (ADS)

    Masetti, Margaret; Bowers, S.

    2011-01-01

    Students around the country are becoming experts on the James Webb Space Telescope by designing solutions to two of the design challenges presented by this complex mission. RealWorld-InWorld has two parts; the first (the Real World portion) has high-school students working face to face in their classroom as engineers and scientists. The InWorld phase starts December 15, 2010 as interested teachers and their teams of high school students register to move their work into a 3D multi-user virtual world environment. At the start of this phase, college students from all over the country choose a registered team to lead InWorld. Each InWorld team is also assigned an engineer or scientist mentor. In this virtual world setting, each team refines their design solutions and creates a 3D model of the Webb telescope. InWorld teams will use 21st century tools to collaborate and build in the virtual world environment. Each team will learn, not only from their own team members, but will have the opportunity to interact with James Webb Space Telescope researchers through the virtual world setting, which allows for synchronous interactions. Halfway through the challenge, design solutions will be critiqued and a mystery problem will be introduced for each team. The top five teams will be invited to present their work during a synchronous Education Forum April 14, 2011. The top team will earn scholarships and technology. This is an excellent opportunity for professionals in both astronomy and associated engineering disciplines to become involved with a unique educational program. Besides the chance to mentor a group of interested students, there are many opportunities to interact with the students as a guest, via chats and presentations.

  6. Intercepting real and simulated falling objects: what is the difference?

    PubMed

    Baurès, Robin; Benguigui, Nicolas; Amorim, Michel-Ange; Hecht, Heiko

    2009-10-30

    The use of virtual reality is nowadays common in many studies in the field of human perception and movement control, particularly in interceptive actions. However, the ecological validity of the simulation is often taken for granted without having been formally established. If participants were to perceive a real situation and its virtual equivalent differently, the generalization of results obtained in virtual reality to real life would be highly questionable. We tested the ecological validity of virtual reality in this context by comparing the timing of interceptive actions directed at actually falling objects and their simulated counterparts. The results show very limited differences as a function of whether participants were confronted with a real ball or a simulation thereof, and when present, such differences were limited to the first trial only. This result validates the use of virtual reality when studying interceptive actions on accelerated stimuli.

  7. Virtual simulation as a learning method in interventional radiology.

    PubMed

    Avramov, Predrag; Avramov, Milena; Juković, Mirela; Kadić, Vuk; Till, Viktor

    2013-01-01

    Radiology is the fastest growing discipline of medicine, thanks to the implementation of new technologies and the very rapid development of imaging diagnostic procedures over the last few decades. On the other hand, this development has pushed aside the traditional gaining of experience by working on real patients, and the need for alternative ways of learning interventional radiology procedures has emerged. A virtual approach has been added as an excellent alternative to the established methods of training on physical models and animals. Virtual reality is a computer-generated reconstruction of an anatomical environment with tactile interactions; it enables operators not only to learn from their own mistakes without compromising patient safety, but also to enhance their knowledge and experience. Studies published so far on the validity of endovascular simulators have shown a certain improvement in operators' technical skills and a reduction in the time needed for the procedure, but it is still a question whether these skills transfer to real patients in the angio room. With further improvement of the technology, the shortcomings of the virtual approach to learning interventional procedures will become less significant, and this approach is likely to become the only method of learning in the near future.

  8. Virtual community centre for power wheelchair training: Experience of children and clinicians.

    PubMed

    Torkia, Caryne; Ryan, Stephen E; Reid, Denise; Boissy, Patrick; Lemay, Martin; Routhier, François; Contardo, Resi; Woodhouse, Janet; Archambault, Phillipe S

    2017-11-02

    The aims were to: 1) characterize the overall experience of using the McGill immersive wheelchair - community centre (miWe-CC) simulator; and 2) investigate the experience of presence (i.e., the sense of being in the virtual rather than in the real, physical environment) while driving a power wheelchair (PW) in the miWe-CC. A qualitative research design with structured interviews was used. Fifteen clinicians and 11 children were interviewed after driving a PW in the miWe-CC simulator. Data were analyzed using conventional and directed content analysis approaches. Overall, participants enjoyed using the simulator and experienced a sense of presence in the virtual space: they felt a sense of being in the virtual environment, involved in and focused on driving the virtual PW rather than on the surroundings of the actual room where they were. Participants reported several similarities between the layout and activities of the virtual community centre in the miWe-CC and the day-to-day reality of paediatric PW users. The simulator replicated participants' expectations of real-life PW use and promises to improve the driving skills of new PW users. Implications for rehabilitation: Among young users, the McGill immersive wheelchair (miWe) simulator provides an experience of presence within the virtual environment. This experience of presence is generated by a sense of being in the virtual scene; a sense of being involved, engaged, and focused on interacting within the virtual environment; and by the perception that the virtual environment is consistent with the real world. The miWe is a relevant and accessible approach, complementary to real-world power wheelchair training for young users.

  9. Extending body space in immersive virtual reality: a very long arm illusion.

    PubMed

    Kilteni, Konstantina; Normand, Jean-Marie; Sanchez-Vives, Maria V; Slater, Mel

    2012-01-01

    Recent studies have shown that a fake body part can be incorporated into human body representation through synchronous multisensory stimulation of the fake and corresponding real body part, the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display, a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3), or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, with no significant difference between these conditions. Additionally, participants experienced ownership over a virtual arm up to three times the length of the real one, and less strongly at four times the length; the illusion did decline, however, with the length of the virtual arm. In the C2-C4 conditions, although a measure of proprioceptive drift correlated positively with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms for ownership and drift. Overall, these findings extend and enrich previous results showing that multisensory and sensorimotor information can reconstruct our perception of the body's shape, size, and symmetry even when this is not consistent with normal body proportions.

  11. Interactive Mapping on Virtual Terrain Models Using RIMS (Real-time, Interactive Mapping System)

    NASA Astrophysics Data System (ADS)

    Bernardin, T.; Cowgill, E.; Gold, R. D.; Hamann, B.; Kreylos, O.; Schmitt, A.

    2006-12-01

    Recent and ongoing space missions are yielding new multispectral data for the surfaces of Earth and other planets at unprecedented rates and spatial resolution. With their high spatial resolution and widespread coverage, these data have opened new frontiers in observational Earth and planetary science. But they have also precipitated an acute need for new analytical techniques. To address this problem, we have developed RIMS, a Real-time, Interactive Mapping System that allows scientists to visualize, interact with, and map directly on, three-dimensional (3D) displays of georeferenced texture data, such as multispectral satellite imagery, that is draped over a surface representation derived from digital elevation data. The system uses a quadtree-based multiresolution method to render in real time high-resolution (3 to 10 m/pixel) data over large (800 km by 800 km) spatial areas. It allows users to map inside this interactive environment by generating georeferenced and attributed vector-based elements that are draped over the topography. We explain the technique using 15 m ASTER stereo-data from Iraq, P.R. China, and other remote locations because our particular motivation is to develop a technique that permits the detailed (10 m to 1000 m) neotectonic mapping over large (100 km to 1000 km long) active fault systems that is needed to better understand active continental deformation on Earth. RIMS also includes a virtual geologic compass that allows users to fit a plane to geologic surfaces and thereby measure their orientations. It also includes tools that allow 3D surface reconstruction of deformed and partially eroded surfaces such as folded bedding planes. These georeferenced map and measurement data can be exported to, or imported from, a standard GIS (geographic information systems) file format. Our interactive, 3D visualization and analysis system is designed for those who study planetary surfaces, including neotectonic geologists, geomorphologists, marine geophysicists, and planetary scientists. The strength of our system is that it combines interactive rendering with interactive mapping and measurement of features observed in topographic and texture data. Comparison with commercially available software indicates that our system improves mapping accuracy and efficiency. More importantly, it enables Earth scientists to rapidly achieve a deeper level of understanding of remotely sensed data, as observations can be made that are not possible with existing systems.
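
    The virtual geologic compass reduces to a plane-fitting problem: fit a plane to points picked on a surface and report its orientation. The sketch below shows one standard way to do this (total least squares via SVD); the coordinate convention (x = east, y = north, z = up) and the sample points are assumptions, not data or code from RIMS.

```python
import numpy as np

# Fit a plane to picked surface points and report dip and dip azimuth.
# Standard total-least-squares approach; coordinates assumed x = east,
# y = north, z = up. Sample points are invented.

def plane_orientation(points):
    """Return (dip_deg, dip_azimuth_deg) of the best-fit plane."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # unit normal of the best-fit plane.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    if n[2] < 0:                      # orient the normal upward
        n = -n
    dip = np.degrees(np.arccos(n[2]))
    azimuth = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, azimuth

pts = np.array([[0, 0, 0.0], [10, 0, -2.0], [0, 10, 0.0], [10, 10, -2.0]])
print(plane_orientation(pts))   # ~11.3 degrees, dipping due east (090)
```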

  12. A Virtual Walk through London: Culture Learning through a Cultural Immersion Experience

    ERIC Educational Resources Information Center

    Shih, Ya-Chun

    2015-01-01

    Integrating Google Street View into a three-dimensional virtual environment in which users control personal avatars provides these said users with access to an innovative, interactive, and real-world context for communication and culture learning. We have selected London, a city famous for its rich historical, architectural, and artistic heritage,…

  13. Effects of magnification and visual accommodation on aimpoint estimation in simulated landings with real and virtual image displays

    NASA Technical Reports Server (NTRS)

    Randle, R. J.; Roscoe, S. N.; Petitt, J. C.

    1980-01-01

    Twenty professional pilots observed a computer-generated airport scene during simulated autopilot-coupled night landing approaches and at two points (20 sec and 10 sec before touchdown) judged whether the airplane would undershoot or overshoot the aimpoint. Visual accommodation was continuously measured using an automatic infrared optometer. Experimental variables included approach slope angle, display magnification, visual focus demand (using ophthalmic lenses), and presentation of the display as either a real (direct view) or a virtual (collimated) image. Aimpoint judgments shifted predictably with actual approach slope and display magnification. Both pilot judgments and measured accommodation interacted with focus demand with real-image displays but not with virtual-image displays. With either type of display, measured accommodation lagged far behind focus demand and was reliably less responsive to the virtual images. Pilot judgments shifted dramatically from an overwhelming perceived-overshoot bias 20 sec before touchdown to a reliable undershoot bias 10 sec later.

  14. Model-based video segmentation for vision-augmented interactive games

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo

    2000-04-01

    This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The segmentation algorithm operates on two levels: pixel level and object level. At the pixel level, segmentation is formulated as a maximum a posteriori probability (MAP) problem; the statistical likelihood of each pixel is calculated and used in the MAP decision. Object-level segmentation improves segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined from a motion histogram and trajectory prediction, is introduced to indicate the likelihood of a video object region for both background and foreground modeling; it also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning to the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself or herself inside a game and virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding, such as MPEG-4 video coding.
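
    A toy version of the pixel-level MAP decision makes the formulation concrete. In the sketch below, each pixel carries a Gaussian background model and is labelled foreground when a uniform foreground alternative wins the posterior comparison; the priors and parameters are invented for illustration and are not the paper's values.

```python
import numpy as np

# Toy pixel-level MAP classification: a Gaussian background model per
# pixel versus a uniform foreground model. All priors and parameters
# below are invented for illustration.

P_BG, P_FG = 0.95, 0.05          # assumed class priors
FG_LIKELIHOOD = 1.0 / 256.0      # uniform model over 8-bit intensities

def map_segment(frame, bg_mean, bg_var):
    """Return a boolean mask, True where the foreground wins the MAP test."""
    bg_like = np.exp(-0.5 * (frame - bg_mean) ** 2 / bg_var) \
        / np.sqrt(2.0 * np.pi * bg_var)
    return (P_FG * FG_LIKELIHOOD) > (P_BG * bg_like)

rng = np.random.default_rng(1)
bg_mean = np.full((4, 4), 120.0)             # learned background means
bg_var = np.full((4, 4), 25.0)               # learned background variances
frame = bg_mean + rng.normal(0, 5, size=(4, 4))
frame[1:3, 1:3] = 200.0                      # a "player" blob enters
print(map_segment(frame, bg_mean, bg_var).astype(int))
```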

  15. A local active noise control system based on a virtual-microphone technique for railway sleeping vehicle applications

    NASA Astrophysics Data System (ADS)

    Diaz, J.; Egaña, J. M.; Viñolas, J.

    2006-11-01

    Low-frequency broadband noise generated on a railway vehicle by wheel-rail interaction can be a significant annoyance for passengers in sleeping cars, and low-frequency acoustic radiation is extremely difficult to attenuate with passive devices. In this article, an active noise control (ANC) technique is proposed for this purpose. A three-dimensional cabin was built in the laboratory to carry out the experiments. The proposed scheme is based on a Filtered-X Least Mean Square (FXLMS) control algorithm, particularised for a virtual-microphone technique. The control algorithms were designed with the Matlab-Simulink tool, and the Real Time Windows Target toolbox of Matlab was used to run the ANC system in real time. Simulated and experimental performance was analysed with the aim of enlarging the zone of silence around the passenger's ear and along the bed headboard. Attenuations of up to 20 and 15 dB(A) (re: 20 μPa) were achieved at the passenger's ear in simulations and in experiments, respectively.
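
    A compact sketch of the single-channel FXLMS update at one error microphone, assuming the secondary path is modelled by a known FIR estimate s_hat (variable names are illustrative, and the paper's virtual-microphone projection is omitted):

      import numpy as np

      def fxlms(ref, d, s_hat, n_taps=64, mu=1e-3):
          """Filtered-x LMS: adapt FIR control filter w to cancel primary noise d."""
          w = np.zeros(n_taps)                        # adaptive control filter
          xf = np.convolve(ref, s_hat)[:len(ref)]     # reference filtered by path estimate
          y = np.zeros(len(ref))                      # anti-noise output
          e = np.zeros(len(ref))                      # residual at the microphone
          for n in range(max(n_taps, len(s_hat)), len(ref)):
              y[n] = w @ ref[n - n_taps + 1:n + 1][::-1]       # control filter output
              ys = s_hat @ y[n - len(s_hat) + 1:n + 1][::-1]   # after the secondary path
              e[n] = d[n] + ys                                 # superposition at the ear
              w -= mu * e[n] * xf[n - n_taps + 1:n + 1][::-1]  # FXLMS weight update
          return w, e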

  16. Estimating Distance in Real and Virtual Environments: Does Order Make a Difference?

    PubMed Central

    Ziemer, Christine J.; Plumert, Jodie M.; Cremer, James F.; Kearney, Joseph K.

    2010-01-01

    This investigation examined how the order in which people experience real and virtual environments influences their distance estimates. Participants made two sets of distance estimates in one of the following conditions: 1) real environment first, virtual environment second; 2) virtual environment first, real environment second; 3) real environment first, real environment second; or 4) virtual environment first, virtual environment second. In Experiment 1, participants imagined how long it would take to walk to targets in real and virtual environments. Participants’ first estimates were significantly more accurate in the real than in the virtual environment. When the second environment was the same as the first environment (real-real and virtual-virtual), participants’ second estimates were also more accurate in the real than in the virtual environment. When the second environment differed from the first environment (real-virtual and virtual-real), however, participants’ second estimates did not differ significantly across the two environments. A second experiment in which participants walked blindfolded to targets in the real environment and imagined how long it would take to walk to targets in the virtual environment replicated these results. These subtle, yet persistent order effects suggest that memory can play an important role in distance perception. PMID:19525540

  17. Virtual Cerebral Aneurysm Clipping with Real-Time Haptic Force Feedback in Neurosurgical Education.

    PubMed

    Gmeiner, Matthias; Dirnberger, Johannes; Fenz, Wolfgang; Gollwitzer, Maria; Wurm, Gabriele; Trenkler, Johannes; Gruber, Andreas

    2018-04-01

    Realistic, safe, and efficient modalities for simulation-based training are highly warranted to enhance the quality of surgical education, and they should be incorporated in resident training. The aim of this study was to develop a patient-specific virtual cerebral aneurysm-clipping simulator with haptic force feedback and real-time deformation of the aneurysm and vessels. A prototype simulator was developed from 2012 to 2016. Evaluation of virtual clipping by blood flow simulation was integrated in this software, and the prototype was evaluated by 18 neurosurgeons. In 4 patients with different middle cerebral artery aneurysms, virtual clipping was performed after real-life surgery, and surgical results were compared regarding clip application, surgical trajectory, and blood flow. After head positioning and craniotomy, bimanual virtual aneurysm clipping with an original forceps was performed. Blood flow simulation demonstrated residual aneurysm filling or branch stenosis. The simulator improved anatomic understanding for 89% of neurosurgeons. Simulation of head positioning and craniotomy was considered realistic by 89% and 94% of users, respectively. Most participants agreed that this simulator should be integrated into neurosurgical education (94%). Our illustrative cases demonstrated that virtual aneurysm surgery was possible using the same trajectory as in real-life cases. Both virtual clipping and blood flow simulation were realistic in broad-based but not calcified aneurysms. Virtual clipping of a calcified aneurysm could be performed using the same surgical trajectory, but not the same clip type. We have successfully developed a virtual aneurysm-clipping simulator. Next, we will prospectively evaluate this device for surgical procedure planning and education. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Distributed interactive virtual environments for collaborative experiential learning and training independent of distance over Internet2.

    PubMed

    Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P

    2004-01-01

    Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before, creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high-performance computing and the next-generation Internet2, embedded in virtual reality environments (VRE), artificial intelligence, and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes, and ultimately improve performance by subsequent avoidance of those mistakes. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based clinical simulations governed by artificial-intelligence rules. The virtual reality patient is programmed to change dynamically over time and to respond to the learner's manipulations. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system; navigation, locomotion, and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multicasting connectivity, through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and management of a simulated patient with a closed head injury in the VRE, dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. The VRE created higher performance expectations and some anxiety among users. VRE orientation was adequate, but students needed time to adapt and practice in order to improve efficiency. This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment, independent of distance, for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as their memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment, and credentialing. Further validation is necessary to determine the value of the distributed VRE for knowledge transfer and improved future performance, and should entail training participants to competence in using these tools.

  19. Virtual reality for treatment compliance for people with serious mental illness.

    PubMed

    Välimäki, Maritta; Hätönen, Heli M; Lahti, Mari E; Kurki, Marjo; Hottinen, Anja; Metsäranta, Kiki; Riihimäki, Tanja; Adams, Clive E

    2014-10-08

    Virtual reality (VR) is a computerised real-time technology that can be used as an alternative assessment and treatment tool in the mental health field. Virtual reality may take different forms to simulate real-life activities and support treatment. To investigate the effects of virtual reality in supporting treatment compliance in people with serious mental illness. We searched the Cochrane Schizophrenia Group Trials Register (most recent search, 17 September 2013) and relevant reference lists. All relevant randomised studies comparing virtual reality with standard care for those with serious mental illnesses were eligible. We defined virtual reality as a computerised real-time technology using graphics, sound, and other sensory input to create an interactive computer-mediated world as a therapeutic tool. All review authors independently selected studies and extracted data. For homogeneous dichotomous data, the risk difference (RD) and its 95% confidence interval (CI) were calculated on an intention-to-treat basis. For continuous data, we calculated mean differences (MD). We assessed risk of bias and created a 'Summary of findings' table using the GRADE approach. We identified three short-term trials (a total of 156 participants; duration five to 12 weeks). Outcomes were prone to at least a moderate risk of overestimating positive effects. We found that virtual reality had little effect on compliance (3 RCTs, n = 156, RD loss to follow-up 0.02 CI -0.08 to 0.12, low quality evidence), cognitive functioning (1 RCT, n = 27, MD average score on Cognistat 4.67 CI -1.76 to 11.10, low quality evidence), social skills (1 RCT, n = 64, MD average score on the Social Problem Solving Inventory - Revised (SPSI-R) -2.30 CI -8.13 to 3.53, low quality evidence), or acceptability of the intervention (2 RCTs, n = 92, RD 0.05 CI -0.09 to 0.19, low quality evidence). No data were reported on mental state, insight, behaviour, quality of life, costs, service utilisation, or adverse effects. Satisfaction with treatment, measured using an unreferenced scale and reported as "interest in training", was better for the virtual reality group (1 RCT, n = 64, MD 6.00 CI 1.39 to 10.61, low quality evidence). There is no clear good-quality evidence for or against using virtual reality to support treatment compliance among people with serious mental illness. If virtual reality is used, the experimental nature of the intervention should be clearly explained. High-quality studies should be undertaken in this area to explore any effects of this novel intervention and variations of approach.
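
    The review's dichotomous outcomes are summarised as risk differences with 95% confidence intervals; a minimal sketch of that standard computation (normal-approximation Wald interval, not the Cochrane software itself, with illustrative counts only):

      import math

      def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
          """Risk difference between two arms with a 95% Wald confidence interval."""
          p_a, p_b = events_a / n_a, events_b / n_b
          rd = p_a - p_b
          se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
          return rd, (rd - z * se, rd + z * se)

      # e.g. loss to follow-up in VR vs standard care (illustrative numbers only)
      print(risk_difference(4, 78, 3, 78))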

  20. An Interactive Virtual 3D Tool for Scientific Exploration of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Traxler, Christoph; Hesina, Gerd; Gupta, Sanjeev; Paar, Gerhard

    2014-05-01

    In this paper we present an interactive 3D visualization tool for scientific analysis and the planning of planetary missions. At present, scientists have to look at individual camera images separately; there is no tool to combine them in three dimensions and examine them seamlessly as a geologist would (by walking backwards and forwards, resulting in different scales). For this reason a virtual 3D reconstruction of the terrain that can be interactively explored is necessary. Such a reconstruction has to consider multiple scales, ranging from orbital image data to close-up surface image data from rover cameras. The 3D viewer allows seamless zooming between these scales, giving scientists the possibility to relate small surface features (e.g., rock outcrops) to larger geological contexts. For a reliable geologic assessment a realistic surface rendering is important, so the material properties of the rock surfaces are considered for real-time rendering. This is achieved by an appropriate Bidirectional Reflectance Distribution Function (BRDF) estimated from the image data. The BRDF is implemented to run on the Graphics Processing Unit (GPU) to enable realistic real-time rendering, which allows a naturalistic perception for scientific analysis. Another important aspect for realism is the consideration of natural lighting conditions, i.e., skylight illuminating the reconstructed scene. We provide skylights from Mars and Earth and allow switching between these two modes of illumination; this gives geologists the opportunity to perceive rock outcrops from Mars as they would appear on Earth, facilitating scientific assessment. Besides viewing the virtual reconstruction at multiple scales, scientists can also perform various measurements, e.g., the geo-coordinates of a selected point or the distance between two surface points. Rover and other models can be placed into the scene and snapped onto particular locations of the terrain; these are important features to support the planning of rover paths. In addition, annotations can be placed directly in the 3D scene, where they also serve as landmarks to aid navigation. The presented visualization and planning tool is a valuable asset for the scientific analysis of planetary mission data. It complements traditional methods by giving access to an interactive, realistically rendered virtual 3D reconstruction. Representative examples and further information about the interactive 3D visualization tool can be found on the FP7-SPACE Project PRoViDE web page http://www.provide-space.eu/interactive-virtual-3d-tool/. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 'PRoViDE'.
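
    For intuition, a per-point BRDF evaluation of the kind used in such renderers can be sketched with a simple Lambertian-plus-Phong model (an assumed stand-in; the project's actual BRDF estimated from image data is not given in this abstract):

      import numpy as np

      def shade(normal, light_dir, view_dir, albedo, ks=0.2, shininess=16.0):
          """Evaluate a Lambertian + Phong reflectance for one surface point."""
          n = normal / np.linalg.norm(normal)
          l = light_dir / np.linalg.norm(light_dir)
          v = view_dir / np.linalg.norm(view_dir)
          diffuse = albedo * max(np.dot(n, l), 0.0)       # Lambertian term
          r = 2.0 * np.dot(n, l) * n - l                  # mirror reflection of the light
          specular = ks * max(np.dot(r, v), 0.0) ** shininess
          return diffuse + specular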

  1. Virtual time and time warp on the JPL hypercube. [operating system implementation for distributed simulation]

    NASA Technical Reports Server (NTRS)

    Jefferson, David; Beckman, Brian

    1986-01-01

    This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real-time systems, and the coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
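
    The rollback mechanism at the heart of Time Warp can be pictured with a toy single-process sketch (state is saved after every event for brevity, and the anti-messages a real Time Warp system sends on rollback are omitted):

      import copy

      class TimeWarpProcess:
          """Toy Time Warp process: execute events optimistically, roll back on stragglers."""

          def __init__(self, state, apply_event):
              self.state = state
              self.apply_event = apply_event               # (state, event) -> new state
              self.lvt = 0                                 # local virtual time
              self.snapshots = [(0, copy.deepcopy(state))]
              self.history = []                            # (time, event) already executed

          def _execute(self, t, event):
              self.state = self.apply_event(self.state, event)
              self.lvt = t
              self.history.append((t, event))
              self.snapshots.append((t, copy.deepcopy(self.state)))

          def receive(self, t, event):
              if t >= self.lvt:                            # message in the virtual future
                  self._execute(t, event)
                  return
              # Straggler: restore the last snapshot taken strictly before time t ...
              while len(self.snapshots) > 1 and self.snapshots[-1][0] >= t:
                  self.snapshots.pop()
              self.lvt, saved = self.snapshots[-1]
              self.state = copy.deepcopy(saved)
              # ... then re-execute the straggler and the rolled-back events in order.
              redo = sorted([(t, event)] + [h for h in self.history if h[0] >= t],
                            key=lambda p: p[0])
              self.history = [h for h in self.history if h[0] < t]
              for rt, rev in redo:
                  self._execute(rt, rev)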

  2. Vision-based augmented reality system

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan

    2003-04-01

    The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user: users can interact with real-world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device that displays virtual objects as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, accurate calibration and registration are most important. A vision-based method is used to estimate the CCD camera's external parameters by tracking four known points of different colors. It achieves sufficient accuracy for non-critical applications such as gaming and annotation.
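
    Estimating the camera's external parameters from four known, color-coded points is a classic perspective-n-point problem. A sketch using OpenCV's solver (a modern stand-in for illustration, not the authors' implementation; all coordinates are made up):

      import numpy as np
      import cv2

      # 3-D positions of the four colored markers in the world frame (illustrative)
      object_pts = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]],
                            dtype=np.float64)
      # Their pixel locations found by color tracking in the current frame (illustrative)
      image_pts = np.array([[320, 240], [420, 238], [424, 330], [318, 334]],
                           dtype=np.float64)
      K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

      ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
      R, _ = cv2.Rodrigues(rvec)    # rotation matrix of the estimated camera pose
      print(ok, R, tvec)            # pose used to register virtual objects on the table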

  3. A New Continent of Ideas

    NASA Technical Reports Server (NTRS)

    1990-01-01

    While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber-optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator moves his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full-body garment that greatly increases the sphere of performance for virtual reality simulations.

  4. Transduction between worlds: using virtual and mixed reality for earth and planetary science

    NASA Astrophysics Data System (ADS)

    Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.

    2017-12-01

    Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale; users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances in connecting data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. The innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect the digital spaces of spatial analysis, simulation and geovisualization with the geographic spaces of data collection, fieldwork, interpretation and communication. In effect, we are talking about transduction between real and virtual worlds, and a mixed reality approach lets us link the two. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.

  5. The Socialization of Virtual Teams: Implications for ISD

    NASA Astrophysics Data System (ADS)

    Mullally, Brenda; Stapleton, Larry

    Studies show that Information Systems Development (ISD) projects do not fulfil stakeholder expectations of completion time, quality and budget. A 2005 study shows that development is more about social interaction and mutual understanding than about following a prescribed method. Systems development is a social process in which interactions help to make sense of the reality within which the system is developed (Hirschheim et al., 1991). Research concentrates on methodology when in fact method may not be the primary problem. Authors have called for further research to investigate the true nature of the current systems development environment in real organisational situations (Fitzgerald, 2000).

  6. Design Virtual Reality Scene Roam for Tour Animations Base on VRML and Java

    NASA Astrophysics Data System (ADS)

    Cao, Zaihui; hu, Zhongyan

    Virtual reality has been applied in a wide range of academic and commercial applications; it can give users a natural feeling of the environment by creating realistic virtual worlds. Implementing a virtual tour through a model of a tourist area on the web has become fashionable. In this paper, we present a web-based application that allows a user to walk through, see, and interact with a fully three-dimensional model of a tourist area. Issues regarding navigation and disorientation are addressed, and we suggest a combination of a metro map and an intuitive navigation system. Finally, we present a prototype that implements our ideas. The application of VR techniques integrates visualization and animation of three-dimensional models into landscape analysis. The use of the VRML format makes it possible to obtain views of the 3D model and to explore it in real time, an important goal for the spatial information sciences.

  7. Real-time visual simulation of APT system based on RTW and Vega

    NASA Astrophysics Data System (ADS)

    Xiong, Shuai; Fu, Chengyu; Tang, Tao

    2012-10-01

    The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the generated C code produces the same results as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system; with it and OpenGL, an APT scene simulation platform was developed to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the generated C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform offers high efficiency, low cost, and good simulation fidelity.

  8. Studying social interactions through immersive virtual environment technology: virtues, pitfalls, and future challenges

    PubMed Central

    Bombari, Dario; Schmid Mast, Marianne; Canadas, Elena; Bachmann, Manuel

    2015-01-01

    The goal of the present review is to explain how immersive virtual environment technology (IVET) can be used for the study of social interactions and how the use of virtual humans in immersive virtual environments can advance research and application in many different fields. Researchers studying individual differences in social interactions are typically interested in keeping the behavior and the appearance of the interaction partner constant across participants. With IVET researchers have full control over the interaction partners, can standardize them while still keeping the simulation realistic. Virtual simulations are valid: growing evidence shows that indeed studies conducted with IVET can replicate some well-known findings of social psychology. Moreover, IVET allows researchers to subtly manipulate characteristics of the environment (e.g., visual cues to prime participants) or of the social partner (e.g., his/her race) to investigate their influences on participants’ behavior and cognition. Furthermore, manipulations that would be difficult or impossible in real life (e.g., changing participants’ height) can be easily obtained with IVET. Beside the advantages for theoretical research, we explore the most recent training and clinical applications of IVET, its integration with other technologies (e.g., social sensing) and future challenges for researchers (e.g., making the communication between virtual humans and participants smoother). PMID:26157414

  10. Individuals with severely impaired vision can learn useful orientation and mobility skills in virtual streets and can use them to improve real street safety

    PubMed Central

    Liu, Lei

    2017-01-01

    Virtual reality has great potential for training road safety skills to individuals with low vision, but the feasibility of such training has not been demonstrated. We tested the hypotheses that low vision individuals could learn useful skills in virtual streets and could apply them to improve real street safety. Twelve participants, whose vision was too poor to use pedestrian signals, were taught by a certified orientation and mobility specialist to determine the safest time to cross the street using the visual and auditory signals made by previously stopped cars starting off at a traffic-light-controlled street intersection. Four participants were trained in real streets and eight in virtual streets presented on three projection screens. The crossing timing of all participants was evaluated in real streets before and after training. The participants were instructed to say "GO" at the moment they felt it was safest to cross the street. A safety score was derived to quantify the GO calls based on their occurrence within the pedestrian phase (when the pedestrian sign did not show DON'T WALK). Before training, more than 50% of the GO calls from all participants fell in the DON'T WALK phase of the traffic cycle and were thus totally unsafe, and 20% fell in the latter half of the pedestrian phase; these calls were unsafe because a pedestrian initiating a crossing that late might not have sufficient time to walk across the street. After training, 90% of the GO calls fell in the early half of the pedestrian phase. These calls were safer because crossing began within the pedestrian phase, leaving at least half of the phase for walking across. Similar safety changes occurred in both the virtual-street- and real-street-trained participants. An ANOVA showed a significant increase in safety scores after training, and there was no difference in this safety improvement between the virtual street and real street trained participants. This study demonstrated that virtual reality-based orientation and mobility training can be as efficient as real street training in improving street safety in individuals with severely impaired vision. PMID:28445540

  11. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    PubMed

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones that detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. The estimated GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first, the ability of subjects to recognize the surface they were exposed to was assessed. In the second, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves recognition of the simulated environment.
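
    The abstract does not detail the GRF estimator; one plausible sketch extracts a force-like amplitude envelope from the footstep microphone signal (all parameters illustrative):

      import numpy as np
      from scipy.signal import butter, filtfilt

      def grf_envelope(mic, fs, cutoff_hz=20.0):
          """Estimate a smooth force-like envelope from a footstep microphone signal."""
          rectified = np.abs(mic)                     # full-wave rectification
          b, a = butter(2, cutoff_hz / (fs / 2.0))    # low-pass keeps the stomp profile
          env = filtfilt(b, a, rectified)
          return env / (env.max() + 1e-12)            # normalised 0..1 control signal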

  12. Not virtual, but a real, live, online, interactive reference service.

    PubMed

    Jerant, Lisa Lott; Firestein, Kenneth

    2003-01-01

    In today's fast-paced environment, traditional medical reference services alone are not adequate to meet users' information needs. Efforts to find new ways to provide comprehensive service to users, where and when needed, have often included the use of new and developing technologies. This paper describes the experience of an academic health science library in developing and providing an online, real-time reference service. Issues discussed include selecting software, training librarians, staffing the service, and considering the future of the service. Use statistics, question-type analysis, and feedback from users of the service and the librarians who staff it are also presented.

  13. Control of vertical posture while elevating one foot to avoid a real or virtual obstacle.

    PubMed

    Ida, Hirofumi; Mohapatra, Sambit; Aruin, Alexander

    2017-06-01

    The purpose of this study was to investigate the control of vertical posture during obstacle avoidance in a real versus a virtual reality (VR) environment. Ten healthy participants stood upright and lifted one leg to avoid colliding with a real obstacle sliding on the floor toward them, or with its virtual image. Virtual obstacles were delivered by a head-mounted display (HMD) or a 3D projector. The acceleration of the foot, the center of pressure, and the electrical activity of the leg and trunk muscles were measured and analyzed during the time intervals typical of early postural adjustments (EPAs), anticipatory postural adjustments (APAs), and compensatory postural adjustments (CPAs). The results showed that the peak acceleration of foot elevation in the HMD condition decreased significantly compared with the real and 3D projector conditions. Reduced activity of the leg and trunk muscles was seen when dealing with virtual obstacles (HMD and 3D projector) compared with real obstacles; these effects were more pronounced during APAs and CPAs. The onsets of muscle activity in the supporting limb occurred during EPAs and APAs. The observed modulation of muscle activity and the altered movement patterns seen while avoiding a virtual obstacle should be considered when designing virtual rehabilitation protocols.

  14. Visualization and simulation techniques for surgical simulators using actual patient's data.

    PubMed

    Radetzky, Arne; Nürnberger, Andreas

    2002-11-01

    Because of the increasing complexity of surgical interventions, research in surgical simulation has become more and more important over recent years. However, the simulation of tissue deformation is still a challenging problem, mainly due to the short response times required for real-time interaction. The demands on hardware and software are even greater when not only modeled human anatomy but the anatomy of actual patients is used, as is required if the surgical simulator is to serve as a training medium for expert surgeons rather than students. In this article, suitable visualization and simulation methods for surgical simulation utilizing actual patients' datasets are described. The advantages and disadvantages of direct and indirect volume rendering for visualization are discussed, and a neuro-fuzzy system is described that can be used to simulate interactive tissue deformations. The neuro-fuzzy system makes it possible to define the deformation behavior based on a linguistic description of the tissue characteristics, or to learn the dynamics from measured data of real tissue. Furthermore, a simulator for minimally invasive neurosurgical interventions is presented that utilizes the described visualization and simulation methods. The structure of the simulator is described in detail, and the results of a system evaluation by an experienced neurosurgeon (a quantitative comparison between different methods of virtual endoscopy, as well as a comparison between real brain images and virtual endoscopies) are given. The evaluation showed that the simulator provides more realistic visualization and simulation than other currently available simulators. Copyright 2002 Elsevier Science B.V.

  15. Novel design of interactive multimodal biofeedback system for neurorehabilitation.

    PubMed

    Huang, He; Chen, Y; Xu, W; Sundaram, H; Olson, L; Ingalls, T; Rikakis, T; He, Jiping

    2006-01-01

    A previous design of a biofeedback system for neurorehabilitation in an interactive multimodal environment demonstrated the potential of engaging stroke patients in task-oriented neuromotor rehabilitation. This report explores a new concept and alternative designs for multimedia-based biofeedback systems. In the new system, the interactive multimodal environment presents movement parameters abstractly: scenery images and their clarity and orientation, rather than an animated arm, are used to reflect the arm's movement and its position relative to the target. The multiple biofeedback parameters were classified into hierarchical levels according to the importance of each movement parameter to performance, and a new quantified measurement for these parameters was developed to assess the patient's performance both in real time and offline. These parameters were represented by combined visual and auditory presentations using various distinct musical instruments. Overall, the objective of the newly designed system is to explore what information to feed back in an interactive virtual environment, and how, in order to enhance the sensorimotor integration that may facilitate the efficient design and application of virtual environment-based therapeutic interventions.
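
    The idea of encoding movement error as image clarity and orientation can be sketched as a simple normalised mapping (a hypothetical illustration; names, ranges, and the choice of lateral-error-to-tilt are assumptions, not the authors' design):

      import math

      def feedback_params(hand_pos, target_pos, max_dist=0.5, max_tilt_deg=30.0):
          """Map arm state to abstract visual feedback: image blur and scene tilt."""
          dx = [h - t for h, t in zip(hand_pos, target_pos)]
          dist = math.sqrt(sum(d * d for d in dx))
          blur = min(dist / max_dist, 1.0)    # image is fully sharp (0) only at the target
          tilt = max_tilt_deg * max(-1.0, min(1.0, dx[0] / max_dist))  # lateral error tilts
          return blur, tilt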

  16. Investigating Learners' Attitudes toward Virtual Reality Learning Environments: Based on a Constructivist Approach

    ERIC Educational Resources Information Center

    Huang, Hsiu-Mei; Rauch, Ulrich; Liaw, Shu-Sheng

    2010-01-01

    The use of animation and multimedia for learning is now further extended by the provision of entire Virtual Reality Learning Environments (VRLE). This highlights a shift in Web-based learning from a conventional multimedia to a more immersive, interactive, intuitive and exciting VR learning environment. VRLEs simulate the real world through the…

  17. Virtual Games for Real Learning: Learning Online with Serious Fun.

    ERIC Educational Resources Information Center

    Jasinski, Marie; Thiagarajan, Sivasailam

    2000-01-01

    Focuses on the use of e-mail games for learning. Discusses terminology; reasons for using virtual games; promoting person-to-person interaction online; how to play an e-mail game, including three examples of specific games; player reactions; design components; the functions for facilitating an e-mail game; and the game as an excuse for debriefing.…

  18. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution

    PubMed Central

    Maidenbaum, Shachar; Buchs, Galit; Abboud, Sami; Lavi-Rotbain, Ori; Amedi, Amir

    2016-01-01

    Graphical virtual environments are currently far from accessible to blind users, as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio Sensory-Substitution-Devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs virtually exercises similar skills as using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training, and new insights into multisensory interaction and the visually-deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment1:task1) and surroundings (Experiment1:task2), and walk through them; these tasks were accomplished with a 95% and 97% success rate, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like, noted their potential for complex training, and suggested many future environments they wished to experience. PMID:26882473

  20. Constraint, Intelligence, and Control Hierarchy in Virtual Environments. Chapter 1

    NASA Technical Reports Server (NTRS)

    Sheridan, Thomas B.

    2007-01-01

    This paper seeks to deal directly with the question of what makes virtual actors and objects that are experienced in virtual environments seem real. (The term virtual reality, while more common in public usage, is an oxymoron; therefore virtual environment is the preferred term in this paper.) Reality is a difficult topic, treated for centuries in those sub-fields of philosophy called ontology, "of or relating to being or existence," and epistemology, "the study of the method and grounds of knowledge, especially with reference to its limits and validity" (both from Webster's, 1965). Advances in recent decades in the technologies of computers, sensors and graphics software have permitted human users to feel present or experience immersion in computer-generated virtual environments. This has motivated a keen interest in probing this phenomenon of presence and immersion not only philosophically but also psychologically and physiologically, in terms of the parameters of the senses and sensory stimulation that correlate with the experience (Ellis, 1991). The pages of the journal Presence: Teleoperators and Virtual Environments have seen much discussion of what makes virtual environments seem real (see, e.g., Slater, 1999; Slater et al., 1994; Sheridan, 1992, 2000). Stephen Ellis, when organizing the meeting that motivated this paper, suggested to invited authors that "we may adopt as an organizing principle for the meeting that the genesis of apparently intelligent interaction arises from an upwelling of constraints determined by a hierarchy of lower levels of behavioral interaction." My first reaction was "huh?" and my second was "yeah, that seems to make sense." Accordingly, the paper seeks to explain, from the author's viewpoint, why Ellis's hypothesis makes sense. What is the connection of "presence" or "immersion" of an observer in a virtual environment to "constraints," and what types of constraints? What of "intelligent interaction," and is it the intelligence of the observer or the intelligence of the environment (whatever the latter may mean) that is salient? And finally, what might be relevant about "upwelling" of constraints as determined by a hierarchy of levels of interaction?

  1. Exploring the simulation requirements for virtual regional anesthesia training

    NASA Astrophysics Data System (ADS)

    Charissis, V.; Zimmer, C. R.; Sakellariou, S.; Chan, W.

    2010-01-01

    This paper presents an investigation of the simulation requirements for virtual regional anaesthesia training. To this end we have developed a prototype human-computer interface designed to facilitate Virtual Reality (VR)-augmented educational tactics for regional anaesthesia training. The proposed interface system aims to complement nerve-blocking techniques. The system is designed to operate in a real-time 3D environment, presenting anatomical information and enabling the user to explore the spatial relations of different human parts without any physical constraints. Furthermore, the proposed system aims to assist trainee anaesthetists in building a mental, three-dimensional map of the anatomical elements and their relationship to the ultrasound imaging that is used for navigation of the anaesthetic needle. Opting for a sophisticated approach to interaction, the interface elements are based on simplified visual representations of real objects, and can be operated through haptic devices and surround auditory cues. This paper discusses the challenges involved in the HCI design, introduces the visual components of the interface, and presents a tentative plan for future work, which involves the development of realistic haptic feedback and various regional anaesthesia training scenarios.

  2. Augmenting breath regulation using a mobile driven virtual reality therapy framework.

    PubMed

    Abushakra, Ahmad; Faezipour, Miad

    2014-05-01

    This paper presents a conceptual framework for a virtual reality therapy to assist individuals, especially lung cancer patients or those with breathing disorders, in regulating their breathing through real-time analysis of respiration movements using a smartphone. Virtual reality technology is an attractive means for medical simulation and treatment, particularly for patients with cancer. The theories, methodologies and approaches, and real-world dynamic content for all components of this virtual reality therapy (VRT) framework on the smartphone are discussed. The architecture and technical aspects of the virtual environment's offshore platform are also presented.
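
    The abstract does not specify the signal processing; a plausible sketch of real-time breath-rate estimation, assuming a respiration waveform has already been extracted from the phone's sensors (band names and thresholds are illustrative):

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      def breaths_per_minute(resp, fs):
          """Estimate breathing rate from a respiration waveform sampled at fs Hz."""
          # Band-pass roughly 6-42 breaths/min (0.1-0.7 Hz) to isolate respiration
          b, a = butter(2, [0.1 / (fs / 2), 0.7 / (fs / 2)], btype="band")
          filt = filtfilt(b, a, resp)
          # One peak per breath; require at least ~1.4 s between breaths
          peaks, _ = find_peaks(filt, distance=int(fs * 1.4))
          return len(peaks) * 60.0 * fs / len(resp)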

  3. Development of an Interactive Augmented Environment and Its Application to Autonomous Learning for Quadruped Robots

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi

    This paper describes an interactive experimental environment for autonomous soccer robots: a soccer field augmented by camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect the simulated data back into the real environment, e.g., by visualizing positions on the field, so as to create situations that allow easy debugging of robot programs. The significant point compared with analogous work is that, owing to the projectors, virtual objects are tangible in this system. We also show a portable version of our system that does not require ceiling cameras. As an application of the augmented environment, we address the learning of goalie strategies on real quadruped robots during penalty kicks. We have our robots use virtual balls, so that only quadruped locomotion, which is quite difficult to simulate accurately, takes place in the real environment. In our augmented environment the robots autonomously learn and acquire more beneficial strategies, without human intervention, than they do in a fully simulated environment.

  4. Do conversations with virtual avatars increase feelings of social anxiety?

    PubMed

    Powers, Mark B; Briceno, Nicole F; Gresham, Robert; Jouriles, Ernest N; Emmelkamp, Paul M G; Smits, Jasper A J

    2013-05-01

    Virtual reality (VR) technology provides a way to conduct exposure therapy with patients with social anxiety. However, the primary limitation of current technology is that the operator is limited to pre-programmed avatars that cannot be controlled to interact or converse with the patient in real time. The current study piloted new technology allowing the operator to directly control the avatar (including speaking) during VR conversations. Using an incomplete repeated-measures (VR vs. in vivo conversation) design and a random starting order with rotation counterbalancing, participants (N = 26) provided ratings of fear and presence during both VR and in vivo conversations. Results showed that the VR conversation successfully elevated fear ratings relative to baseline (d = 2.29). Participants also rated their fear higher during the VR conversation than during the in vivo conversation (d = 0.85). However, the in vivo conversation was rated as more realistic than the VR conversation (d = 0.74). No participants dropped out, and 100% completed both VR and in vivo conversations. Qualitative participant comments suggested that the VR conversations would be more realistic if participants did not meet the actor/operator and if the operator were not in the same room as the participant. Overall, the data suggest that this novel technology allowing real-time interaction and conversation in VR may prove useful for the treatment of social anxiety in future studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Mixed reality ventriculostomy simulation: experience in neurosurgical residency.

    PubMed

    Hooten, Kristopher G; Lister, J Richard; Lombard, Gwen; Lizdas, David E; Lampotang, Samsun; Rajon, Didier A; Bova, Frank; Murad, Gregory J A

    2014-12-01

    Medicine and surgery are turning toward simulation to compensate for limited patient interaction during residency training. Many simulators today use virtual reality with augmented haptic feedback and few or no physical elements. In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered during the procedure, with superimposed 3-D virtual elements for the neuroanatomical structures. Our aim was to introduce the ventriculostomy simulator and validate it as a necessary training tool in neurosurgical residency. We tested the simulator with more than 260 residents. An algorithm combining time and accuracy was used to grade performance, and voluntary post-performance surveys were used to evaluate the experience. Results demonstrate that more experienced residents had statistically significantly better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. This mixed reality simulator provides a real-life experience and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard whereby incoming residents must prove efficiency and skill on the simulator before their first interaction with a patient.
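
    The grading algorithm is described only as "combining time and accuracy"; purely as illustration, a weighted score of that general shape might look like the following (all names, weights, and limits are hypothetical, not the authors' algorithm):

      def ventriculostomy_score(time_s, tip_error_mm, t_max=120.0, err_max=10.0,
                                w_time=0.4, w_acc=0.6):
          """Hypothetical 0-100 grade combining speed and catheter-tip accuracy."""
          time_term = max(0.0, 1.0 - time_s / t_max)          # faster is better
          acc_term = max(0.0, 1.0 - tip_error_mm / err_max)   # closer to target is better
          return 100.0 * (w_time * time_term + w_acc * acc_term)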

  6. VEVI: A Virtual Reality Tool For Robotic Planetary Explorations

    NASA Technical Reports Server (NTRS)

    Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik

    1994-01-01

    The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data; this characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly evolved human sensory system. VEVI utilizes real-time, interactive 3D graphics and position/orientation sensors to produce a range of interface modalities, from flat-panel (windowed or stereoscopic) screen displays to head-mounted, head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air-bearing, and underwater vehicles in a variety of environments. VEVI was designed and implemented to be modular, distributed, and easily operated over long-distance communication links, using a communication paradigm called SYNERGY.

  7. Three-dimensional face pose detection and tracking using monocular videos: tool and application.

    PubMed

    Dornaika, Fadi; Raducanu, Bogdan

    2009-08-01

    Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences, such as those provided by low-quality cameras. This paper makes two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker, adopting a 2-D face detector and an eigenface system. Second, we use the proposed initialization and tracking methods to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation, such as telepresence, virtual reality, and video games, can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
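
    Driving the robot camera from the estimated head pose reduces to a small control mapping; a sketch with simple proportional control (gains and limits are illustrative assumptions, not the paper's controller):

      def camera_command(head_yaw_deg, head_pitch_deg, gain=0.8, limit_deg=45.0):
          """Map the user's estimated head pose to pan/tilt targets for an active camera."""
          clamp = lambda v: max(-limit_deg, min(limit_deg, v))
          pan = clamp(gain * head_yaw_deg)      # imitate the user's horizontal gaze
          tilt = clamp(gain * head_pitch_deg)   # and the vertical one
          return pan, tilt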

  8. A Virtual Reality System for PTCD Simulation Using Direct Visuo-Haptic Rendering of Partially Segmented Image Data.

    PubMed

    Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz

    2016-01-01

    This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We use only partially segmented image data instead of a full segmentation, circumventing the need for surface or volume mesh models. Haptic interaction with the virtual patient is provided during virtual palpation, ultrasound probing, and needle insertion. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated through a Cuda implementation and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To shorten rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation was performed and the deformation algorithms were analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation showed positive results, with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
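
    The grid-based deformation can be pictured as iterative relaxation of a displacement field defined directly on the image grid, with multigrid used to reach equilibrium in fewer sweeps; a single-level Jacobi sketch (a toy illustration with periodic borders, not the authors' GPU solver):

      import numpy as np

      def relax_displacement(disp, fixed_mask, sweeps=50):
          """Jacobi relaxation of a 2-D displacement field toward a Laplace equilibrium.
          fixed_mask marks constrained grid points (e.g., where the virtual needle
          imposes a displacement); all other points relax toward their neighbours."""
          for _ in range(sweeps):
              avg = 0.25 * (np.roll(disp, 1, 0) + np.roll(disp, -1, 0) +
                            np.roll(disp, 1, 1) + np.roll(disp, -1, 1))
              disp = np.where(fixed_mask, disp, avg)   # keep constrained values pinned
          return disp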

  9. Virtual Interactive Presence in Global Surgical Education: International Collaboration Through Augmented Reality.

    PubMed

    Davis, Matthew Christopher; Can, Dang D; Pindrik, Jonathan; Rocque, Brandon G; Johnston, James M

    2016-02-01

    Technology allowing a remote, experienced surgeon to provide real-time guidance to local surgeons has great potential for training and capacity building in medical centers worldwide. Virtual interactive presence and augmented reality (VIPAR), an iPad-based tool, allows surgeons to provide long-distance, virtual assistance wherever a wireless internet connection is available. Local and remote surgeons view a composite image of the video feeds at each station, allowing for intraoperative telecollaboration in real time. Local and remote stations were established in Ho Chi Minh City, Vietnam, and Birmingham, Alabama, as part of an ongoing neurosurgical collaboration. Endoscopic third ventriculostomy with choroid plexus coagulation performed with VIPAR was used for subjective and objective evaluation of system performance. VIPAR allowed both surgeons to engage in complex visual and verbal communication during the procedure. Analysis of 5 video clips revealed a video delay of 237 milliseconds (range, 93-391 milliseconds) relative to the audio signal. Excellent image resolution allowed the remote neurosurgeon to visualize all critical anatomy. The remote neurosurgeon could gesture to structures with no detectable difference in accuracy between stations, allowing for submillimeter precision. Fifteen endoscopic third ventriculostomy with choroid plexus coagulation procedures have been performed with VIPAR between Vietnam and the United States, with no significant complications; 80% of these patients remain shunt-free. Evolving technologies that allow long-distance intraoperative guidance and knowledge transfer hold great potential for highly efficient international neurosurgical education. VIPAR is one example of an inexpensive, scalable platform for increasing global neurosurgical capacity. Efforts to create a network of Vietnamese neurosurgeons who use VIPAR for collaboration are underway. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. The (human) science of medical virtual learning environments.

    PubMed

    Stone, Robert J

    2011-01-27

    The uptake of virtual simulation technologies in both military and civilian surgical contexts has been both slow and patchy. The failure of the virtual reality community in the 1990s and early 2000s to deliver affordable and accessible training systems stems not only from an obsessive quest to develop the 'ultimate' in so-called 'immersive' hardware solutions, from head-mounted displays to large-scale projection theatres, but also from a comprehensive lack of attention to the needs of the end users. Many still perceive the science of simulation to be defined by technological advances such as computing power, specialized graphics hardware, advanced interactive controllers and displays. However, the true science underpinning simulation, the science that helps to guarantee the transfer of skills from the simulated to the real, is that of human factors: a well-established discipline that focuses on the abilities and limitations of the end user when designing interactive systems, as opposed to the more commercially explicit components of technology. Based on three surgical simulation case studies, the importance of a human factors approach to the design of appropriate simulation content and interactive hardware for medical simulation is illustrated. The studies demonstrate that it is unnecessary to pursue real-world fidelity in all instances in order to achieve psychological fidelity: the degree to which the simulated tasks reproduce and foster knowledge, skills and behaviours that can be reliably transferred to real-world training applications.

  11. Mirror-image-induced magnetic modes.

    PubMed

    Xifré-Pérez, Elisabet; Shi, Lei; Tuzer, Umut; Fenollosa, Roberto; Ramiro-Manzano, Fernando; Quidant, Romain; Meseguer, Francisco

    2013-01-22

    Reflection in a mirror changes the handedness of the real world, and right-handed objects turn left-handed and vice versa (M. Gardner, The Ambidextrous Universe, Penguin Books, 1964). Also, we learn from electromagnetism textbooks that a flat metallic mirror transforms an electric charge into a virtual opposite charge. Consequently, the mirror image of a magnet is another parallel virtual magnet, as the mirror image changes both the charge sign and the curl handedness. Here we report the dramatic modification in the optical response of a silicon nanocavity induced by the interaction with its image through a flat metallic mirror. The system of real and virtual dipoles can be interpreted as an effective magnetic dipole responsible for a strong enhancement of the cavity scattering cross section.
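
    For orientation, the textbook image-charge construction alluded to here can be written down explicitly; the display below is an addition for reference, not taken from the paper.

```latex
% Image-charge construction for a grounded conducting plane at z = 0:
% a real charge q at height d has the same external field as q plus a
% virtual image charge -q at depth d behind the mirror.
V(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}
  \left[ \frac{q}{\lvert \mathbf{r} - d\hat{\mathbf{z}} \rvert}
       - \frac{q}{\lvert \mathbf{r} + d\hat{\mathbf{z}} \rvert} \right],
\qquad z > 0.
```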

  12. Non-cancellation of electroweak logarithms in high-energy scattering

    DOE PAGES

    Manohar, Aneesh V.; Shotwell, Brian; Bauer, Christian W.; ...

    2015-01-01

    We study electroweak Sudakov corrections in high energy scattering, and the cancellation between real and virtual Sudakov corrections. Numerical results are given for the case of heavy quark production by gluon collisions, involving the rates $gg \to t\bar{t},\; b\bar{b},\; t\bar{b}W,\; t\bar{t}Z,\; b\bar{b}Z,\; t\bar{t}H,\; b\bar{b}H$. Gauge boson virtual corrections are related to real transverse gauge boson emission, and Higgs virtual corrections to Higgs and longitudinal gauge boson emission. At the LHC, electroweak corrections become important in the TeV regime. At the proposed 100 TeV collider, electroweak interactions enter a new regime, where the corrections are very large and need to be resummed.
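
    For orientation (an addition, not part of the record): the electroweak Sudakov corrections in question grow with energy as double logarithms of the hard scale over the gauge boson mass, schematically

```latex
% Schematic size of a one-loop electroweak Sudakov correction at
% centre-of-mass energy sqrt(s) >> M_W (illustrative normalization):
\delta_{\text{virtual}} \sim
  -\,\frac{\alpha}{4\pi \sin^{2}\theta_W}\,
  \ln^{2}\!\left(\frac{s}{M_W^{2}}\right)
```

    which is why the corrections become large at multi-TeV energies and eventually require resummation.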

  13. Virtual working systems to support R&D groups

    NASA Astrophysics Data System (ADS)

    Dew, Peter M.; Leigh, Christine; Drew, Richard S.; Morris, David; Curson, Jayne

    1995-03-01

    The paper reports on progress at Leeds University in building a Virtual Science Park (VSP) to enhance the University's ability to interact with industry and to grow its applied research and workplace learning activities. The VSP exploits advances in real-time collaborative computing and networking to provide an environment that meets the objectives of physically based science parks without the need for organizations to relocate. It provides an integrated set of services (e.g. virtual consultancy, work-based learning) built around a structured, person-centered information model. This model supports the integration of tools for: (a) navigating around the information space; (b) browsing information stored within the VSP database; (c) communicating through a variety of person-to-person collaborative tools; and (d) maintaining the information stored in the VSP, including the relationships to other information that support the underlying model. The paper gives an overview of a generic virtual working system, based on X.500 directory services and the World-Wide Web, that can be used to support the Virtual Science Park. Finally, the paper discusses some of the research issues that need to be addressed to fully realize a Virtual Science Park.

  14. Rehabilitation Program Integrating Virtual Environment to Improve Orientation and Mobility Skills for People Who Are Blind

    PubMed Central

    Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.

    2014-01-01

    This paper presents the integration of a virtual environment (BlindAid) into an orientation and mobility rehabilitation program as a training aid for people who are blind. BlindAid allows users to interact with different virtual structures and objects through auditory and haptic feedback. This research explores whether and how use of the BlindAid in conjunction with a rehabilitation program can help people who are blind train themselves in familiar and unfamiliar spaces. The study focused on nine participants, congenitally, adventitiously, and newly blind, during their orientation and mobility rehabilitation program at the Carroll Center for the Blind (Newton, Massachusetts, USA). The research was implemented using virtual environment (VE) exploration tasks and orientation tasks in both virtual environments and real spaces. The methodology encompassed both qualitative and quantitative methods, including interviews, a questionnaire, videotape recording, and user computer logs. The results demonstrated, first, that the BlindAid training gave participants additional time to explore the virtual environment systematically, and second, that it helped elucidate several issues concerning the potential strengths of the BlindAid system as a training aid for orientation and mobility for both adults and teenagers who are congenitally, adventitiously, and newly blind. PMID:25284952

  15. Influence of real and virtual heights on standing balance.

    PubMed

    Cleworth, Taylor W; Horslen, Brian C; Carpenter, Mark G

    2012-06-01

    Fear and anxiety induced by threatening scenarios, such as standing on elevated surfaces, have been shown to influence postural control in young adults. There is also a need to understand how postural threat influences postural control in populations with balance deficits and a risk of falls. However, safety and feasibility issues limit opportunities to place such populations in physically threatening scenarios. Virtual reality (VR) has successfully been used to simulate threatening environments, although it is unclear whether the same postural changes can be elicited by changes in virtual and real threat conditions. Therefore, the purpose of this study was to compare the effects of real and virtual heights on changes to standing postural control, electrodermal activity (EDA) and psycho-social state. Seventeen subjects stood at low and high heights in both real and virtual environments matched in scale and visual detail. A repeated measures ANOVA revealed increases with height, independent of visual environment, in EDA, anxiety, fear, and center of pressure (COP) frequency, and decreases with height in perceived stability, balance confidence and COP amplitude. Interaction effects were seen for fear and COP mean position, where the real environment elicited larger changes with height than the virtual one. This study demonstrates the utility of VR, as simulated heights resulted in changes to postural, autonomic and psycho-social measures similar to those seen at real heights. As a result, VR may be a useful tool for studying threat-related changes in postural control in populations at risk of falls, and for screening and rehabilitating balance deficits associated with fear and anxiety. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. The effect of fidelity: how expert behavior changes in a virtual reality environment.

    PubMed

    Ioannou, Ioanna; Avery, Alex; Zhou, Yun; Szudek, Jacek; Kennedy, Gregor; O'Leary, Stephen

    2014-09-01

    We compare the behavior of expert surgeons operating on the "gold standard" of simulation, the cadaveric temporal bone, against a high-fidelity virtual reality (VR) simulation. We aim to determine whether expert behavior changes within the virtual environment and to understand how the fidelity of simulation affects users' behavior. Five expert otologists performed cortical mastoidectomy and cochleostomy on a human cadaveric temporal bone and a VR temporal bone simulator. Hand movement and video recordings were used to derive a range of measures, to facilitate an analysis of surgical technique, and to compare expert behavior between the cadaveric and simulator environments. Drilling time was similar across the two environments. Some measures such as total time and burr change count differed predictably due to the ease of switching burrs within the simulator. Surgical strokes were generally longer in distance and duration in VR, but these measures changed proportionally to cadaveric measures across the stages of the procedure. Stroke shape metrics differed, which was attributed to the modeling of burr behavior within the simulator; this will be corrected in future versions. Slight differences in drill interaction between a virtual environment and the real world can have measurable effects on surgical technique, particularly in terms of stroke length, duration, and curvature. It is important to understand these effects when designing and implementing surgical training programs based on VR simulation, and when improving the fidelity of VR simulators to facilitate use of a similar technique in both real and simulated situations. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
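
    The stroke measures compared above (length, duration, curvature) can be computed from tracked tool-tip samples. The sketch below shows one plausible way to do so under an assumed data layout (timestamped 3-D positions for one stroke); it is an illustration, not the authors' analysis pipeline.

```python
import numpy as np

def stroke_metrics(t, p):
    """t: (n,) timestamps in seconds; p: (n, 3) tool-tip positions in mm.
    Returns (duration_s, path_length_mm, mean_curvature_rad_per_mm)."""
    duration = t[-1] - t[0]
    seg = np.diff(p, axis=0)                      # segment vectors
    seglen = np.linalg.norm(seg, axis=1)
    length = seglen.sum()
    # Drop zero-length segments before computing turning angles.
    u = seg[seglen > 0] / seglen[seglen > 0, None]
    cosang = np.clip((u[:-1] * u[1:]).sum(axis=1), -1.0, 1.0)
    mean_curvature = np.arccos(cosang).sum() / max(length, 1e-9)
    return duration, length, mean_curvature
```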

  17. Problem-Based Learning Spanning Real and Virtual Words: A Case Study in Second Life

    ERIC Educational Resources Information Center

    Good, Judith; Howland, Katherine; Thackray, Liz

    2008-01-01

    There is a growing use of immersive virtual environments for educational purposes. However, much of this activity is not yet documented in the public domain, or is descriptive rather than analytical. This paper presents a case study in which university students were tasked with building an interactive learning experience using Second Life as a…

  18. Mathematical Basis of Knowledge Discovery and Autonomous Intelligent Architectures - Technology for the Creation of Virtual objects in the Real World

    DTIC Science & Technology

    2005-12-14

    The available record text is fragmentary. The recoverable fragments describe a telerobotic system for creating virtual objects in the real world, organized into numbered units: a master arm (Unit 1) whose joint coordinates are tracked by a virtual manipulator; helmet-mounted displays (Unit 6); a force interaction system (Unit 9); control of the position/orientation of mobile TV cameras; and a special device for simulating the tactile-kinaesthetic effect of immersion when the virtual body is a manipulator.

  19. Virtual reality, disability and rehabilitation.

    PubMed

    Wilson, P N; Foreman, N; Stanton, D

    1997-06-01

    Virtual reality, or virtual environment computer technology, generates simulated objects and events with which people can interact. Existing and potential applications for this technology in the field of disability and rehabilitation are discussed. The main benefits identified for disabled people are that they can engage in a range of activities in a simulator relatively free from the limitations imposed by their disability, and they can do so in safety. Evidence that the knowledge and skills acquired by disabled individuals in simulated environments can transfer to the real world is presented. In particular, spatial information and life skills learned in a virtual environment have been shown to transfer to the real world. Applications for visually impaired people are discussed, and the potential for medical interventions and the assessment and treatment of neurological damage are considered. Finally some current limitations of the technology, and ethical concerns in relation to disability, are discussed.

  20. Virtual Neurorobotics (VNR) to Accelerate Development of Plausible Neuromorphic Brain Architectures.

    PubMed

    Goodman, Philip H; Buntha, Sermsak; Zou, Quan; Dascalu, Sergiu-Mihai

    2007-01-01

    Traditional research in artificial intelligence and machine learning has viewed the brain as a specially adapted information-processing system. More recently the field of social robotics has been advanced to capture the important dynamics of human cognition and interaction. An overarching societal goal of this research is to incorporate the resultant knowledge about intelligence into technology for prosthetic, assistive, security, and decision support applications. However, despite many decades of investment in learning and classification systems, this paradigm has yet to yield truly "intelligent" systems. For this reason, many investigators are now attempting to incorporate more realistic neuromorphic properties into machine learning systems, encouraged by over two decades of neuroscience research that has provided parameters that characterize the brain's interdependent genomic, proteomic, metabolomic, anatomic, and electrophysiological networks. Given the complexity of neural systems, developing tenable models to capture the essence of natural intelligence for real-time application requires that we discriminate features underlying information processing and intrinsic motivation from those reflecting biological constraints (such as maintaining structural integrity and transporting metabolic products). We propose herein a conceptual framework and an iterative method of virtual neurorobotics (VNR) intended to rapidly forward-engineer and test progressively more complex putative neuromorphic brain prototypes for their ability to support intrinsically intelligent, intentional interaction with humans. The VNR system is based on the viewpoint that a truly intelligent system must be driven by emotion rather than programmed tasking, incorporating intrinsic motivation and intentionality. We report pilot results of a closed-loop, real-time interactive VNR system with a spiking neural brain, and provide a video demonstration as online supplemental material.

  1. Three-dimensional virtual bronchoscopy using a tablet computer to guide real-time transbronchial needle aspiration.

    PubMed

    Fiorelli, Alfonso; Raucci, Antonio; Cascone, Roberto; Reginelli, Alfonso; Di Natale, Davide; Santoriello, Carlo; Capuozzo, Antonio; Grassi, Roberto; Serra, Nicola; Polverino, Mario; Santini, Mario

    2017-04-01

    We proposed a new virtual bronchoscopy tool to improve the accuracy of traditional transbronchial needle aspiration for mediastinal staging. Chest computed tomography images (1 mm thickness) were reconstructed with OsiriX software to produce a virtual bronchoscopic simulation. The target adenopathy was identified by measuring its distance from the carina on multiplanar reconstruction images. The static images were loaded into iMovie software, which produced a virtual bronchoscopic movie; the movie was then transferred to a tablet computer to provide real-time guidance during a biopsy. To test the validity of our tool, we retrospectively divided all consecutive patients undergoing transbronchial needle aspiration into two groups based on whether the biopsy was guided by virtual bronchoscopy (virtual bronchoscopy group) or not (traditional group). The intergroup diagnostic yields were statistically compared. Our analysis included 53 patients in the traditional and 53 in the virtual bronchoscopy group. The sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy were 66.6%, 100%, 100%, 10.53% and 67.92%, respectively, for the traditional group, and 84.31%, 100%, 100%, 20% and 84.91%, respectively, for the virtual bronchoscopy group. The sensitivity (P = 0.011) and diagnostic accuracy (P = 0.011) of sampling the paratracheal station were better for the virtual bronchoscopy group than for the traditional group; no significant differences were found for the subcarinal lymph node. Our tool is simple, economical and available in all centres. It guided needle insertion in real time, thereby improving the accuracy of traditional transbronchial needle aspiration, especially when target lesions are located in a difficult site such as the paratracheal station. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
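
    For readers who want to check how the reported percentages interrelate, the standard yield metrics follow directly from 2x2 confusion counts. The helper below is a generic illustration; the example counts are back-calculated so the outputs match the virtual bronchoscopy group's reported figures (n = 53), and are not taken from the paper's raw data.

```python
def diagnostic_yield(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-table counts."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# Counts chosen to reproduce the virtual bronchoscopy group's figures:
# sensitivity 84.31%, specificity 100%, PPV 100%, NPV 20%, accuracy 84.91%.
print(diagnostic_yield(tp=43, fp=0, tn=2, fn=8))
```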

  2. Virtual Visualisation Laboratory for Science and Mathematics Content (Vlab-SMC) with Special Reference to Teaching and Learning of Chemistry

    NASA Astrophysics Data System (ADS)

    Badioze Zaman, Halimah; Bakar, Norashiken; Ahmad, Azlina; Sulaiman, Riza; Arshad, Haslina; Mohd. Yatim, Nor Faezah

    Research on the teaching of science and mathematics in schools and universities has shown that available teaching models are not effective in instilling an understanding of scientific and mathematical concepts, or the right scientific and mathematical skills, required for learners to become good future scientists (mathematicians included). The extensive development of new technologies has a marked influence on education, by facilitating the design of new learning and teaching materials that can improve the attitude of learners towards Science and Mathematics and make an advanced, interactive, personalised learning process plausible. The computer is useful in Science and Mathematics education as an interactive communication medium that permits access to all types of information (texts, images, and different types of data such as sound, graphics and perhaps haptics like smell and touch); as an instrument for problem solving through simulations of scientific and mathematical phenomena and experiments; and as a means of measuring and monitoring scientific laboratory experiments. This paper highlights the design and development of the virtual Visualisation Laboratory for Science and Mathematics Content (VLab-SMC), based on the Cognitivist-Constructivist-Contextual development life cycle model as well as an Instructional Design (ID) model, in order to achieve its objectives in teaching and learning. However, this paper will highlight only one of the virtual labs within VLab-SMC, namely the Virtual Lab for teaching Chemistry (VLab-Chem). The development life cycle involves the educational media to be used, measurement of content, and the authoring and programming involved, whilst the ID model involves the application of cognitivist, constructivist and contextual theories in modeling the modules of VLab-SMC generally and VLab-Chem specifically, using concepts such as 'learning by doing', contextual learning, and experimental simulations with 3D and real-time animations to create a virtual laboratory based on a real laboratory. An initial preliminary study shows positive indicators for VLab-Chem in the teaching and learning of Chemistry on the topic of 'Salts and Acids'.

  3. Bats' avoidance of real and virtual objects: implications for the sonar coding of object size.

    PubMed

    Goerlitz, Holger R; Genzel, Daria; Wiegrebe, Lutz

    2012-01-01

    Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object echoes (e.g., echo amplitude) that correlate with the object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video recordings, we first show that the bats evaded a small real object (an ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and played from the loudspeaker. Unlike the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of the real and virtual object echoes. Yet their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (the sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion, and suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size. Copyright © 2011 Elsevier B.V. All rights reserved.
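
    The virtual object is created by convolving each detected call with the object's measured acoustic impulse response. The following offline sketch illustrates that core operation; the names, parameter values and the use of FFT convolution are assumptions, and a real playback system would need low-latency block convolution to keep the round-trip delay biologically plausible.

```python
import numpy as np
from scipy.signal import fftconvolve

def virtual_echo(call, impulse_response, gain=1.0):
    """Return the playback signal for one detected bat call: convolving
    the call with the target object's acoustic impulse response yields
    the echo that object would have produced. Illustrative sketch only."""
    return gain * fftconvolve(call, impulse_response, mode="full")

# Toy example: a 0.5 ms 40 kHz burst and a two-tap impulse response
# (direct reflection plus a weaker delayed copy).
fs = 192_000                                            # sample rate, Hz
call = np.sin(2 * np.pi * 40_000 * np.arange(96) / fs)  # 40 kHz burst
ir = np.zeros(200); ir[0] = 1.0; ir[150] = 0.4
echo = virtual_echo(call, ir)
```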

  4. New technique for simulation of microgravity and variable gravity conditions

    NASA Astrophysics Data System (ADS)

    de la Rosa, R.; Alonso, A.; Abasolo, D. E.; Hornero, R.

    2005-08-01

    This paper proposes a microgravity or variable gravity conditions simulator based on a Neuromuscular Control System (NCS) working as a man-machine interface. The subject under training lies on an active platform that counteracts his weight, and a Virtual Reality (VR) system displays a simulated environment where the subject can interact with a number of settings: extravehicular activity (EVA), walking on the Moon, or training limb responses under variable acceleration. Results related to real-time voluntary control have been achieved with neuromuscular interfaces at the Bioengineering Group of the University of Valladolid, where a custom real-time system has been employed to train arm movements. This paper outlines a more complex design that can complement other training facilities, such as the buoyancy pool, in the task of microgravity simulation.

  5. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomena under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  6. Virtual reality in the operating room of the future.

    PubMed

    Müller, W; Grosskopf, S; Hildebrand, A; Malkewitz, R; Ziegler, R

    1997-01-01

    In cooperation with the Max-Delbrück-Centrum/Robert-Rössle-Klinik (MDC/RRK) in Berlin, the Fraunhofer Institute for Computer Graphics is currently designing and developing a scenario for the operating room of the future. The goal of this project is to integrate new analysis, visualization and interaction tools in order to optimize and refine tumor diagnostics and therapy in combination with laser technology and remote stereoscopic video transfer. To this end, a human 3-D reference model is reconstructed using CT, MR, and anatomical cryosection images from the National Library of Medicine's Visible Human Project. By applying segmentation algorithms and surface-polygonization methods, a 3-D representation is obtained. In addition, a "fly-through" of the virtual patient is realized using 3-D input devices (data glove, tracking system, 6-DOF mouse). In this way, the surgeon can experience genuinely new perspectives on the human anatomy. Moreover, using a virtual cutting plane, any cut of the CT volume can be interactively placed and visualized in real time. In conclusion, this project delivers visions for the application of effective visualization and VR systems. It shows that VR techniques, commonly known as Virtual Prototyping and long applied in the automotive industry, can also be used to prototype an operating room. After evaluating the design and functionality of the virtual operating room, MDC plans to build real ORs in the near future. The use of VR techniques provides a more natural interface for the surgeon in the OR (e.g., controlling interactions by voice input). Besides preoperative planning, future work will focus on supporting the surgeon in performing surgical interventions. An optimal synthesis of real and synthetic data, and the inclusion of visual, aural, and tactile senses in virtual environments, can meet these requirements. This Augmented Reality could represent the environment for the surgeons of tomorrow.

  7. Virtual blood bank

    PubMed Central

    Wong, Kit Fai

    2011-01-01

    Virtual blood bank is the computer-controlled, electronically linked information management system that allows online ordering and real-time, remote delivery of blood for transfusion. It connects the site of testing to the point of care at a remote site in a real-time fashion with networked computers, thus maintaining the integrity of immunohematology test results. It takes advantage of information and communication technologies to ensure the accuracy of patient, specimen and blood component identification and to enhance personnel traceability and system security. The built-in logic and process constraints in the design of the virtual blood bank can guide the selection of appropriate blood and minimize transfusion risk. The quality of the blood inventory is ascertained and monitored, and an audit trail for critical procedures in the transfusion process is provided by the paperless system. Thus, the virtual blood bank can help ensure that the right patient receives the right amount of the right blood component at the right time. PMID:21383930

  8. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled through interface devices such as mice, joysticks and MIDI sliders, but those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link its data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  9. Virtual acoustics displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-01-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  10. Virtual acoustics displays

    NASA Astrophysics Data System (ADS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-03-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  11. Ames Lab 101: osgBullet

    ScienceCinema

    McCorkle, Doug

    2017-12-27

    Ames Laboratory scientist Doug McCorkle explains osgBullet, a 3-D virtual simulation software package, and how it helps engineers design complex products and systems in a realistic, real-time virtual environment.

  12. Is “morphodynamic equilibrium” an oxymoron?

    USGS Publications Warehouse

    Zhou, Zeng; Coco, Giovanni; Townend, Ian; Olabarrieta, Maitane; van der Wegen, Mick; Gong, Zheng; D'Alpaos, Andrea; Gao, Shu; Jaffe, Bruce E.; Gelfenbaum, Guy R.; He, Qing; Wang, Yaping; Lanzoni, Stefano; Wang, Zhengbing; Winterwerp, Han; Zhang, Changkuan

    2017-01-01

    Morphodynamic equilibrium is a widely adopted yet elusive concept in the field of geomorphology of coasts, rivers and estuaries. Based on the Exner equation, an expression of mass conservation of sediment, we distinguish three types of equilibrium: static equilibrium and two different types of dynamic equilibrium. Other expressions such as statistical and quasi-equilibrium, which do not strictly satisfy the Exner conditions, are also acknowledged for their practical use. The choice of a temporal scale is imperative to analyse the type of equilibrium. We discuss the difference between morphodynamic equilibrium in the “real world” (nature) and the “virtual world” (model). Modelling studies rely on simplifications of the real world and lead to understanding of process interactions. A variety of factors affect the use of virtual-world predictions in the real world (e.g., variability in environmental drivers and variability in the setting), so the concept of morphodynamic equilibrium should be mathematically unequivocal in the virtual world and interpreted over the appropriate spatial and temporal scale in the real world. We draw examples from estuarine settings, which are subject to various governing factors that broadly include hydrodynamics, sedimentology and landscape setting. Following the traditional “tide-wave-river” ternary diagram, we summarize studies to date that explore the “virtual world”, discuss the type of equilibrium reached and how it relates to the real world.
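
    For reference (an addition; the abstract names the equation but does not display it), a common form of the Exner sediment mass balance is

```latex
% Exner equation: the bed elevation \eta evolves to conserve sediment
% mass, with \lambda_p the bed porosity and \mathbf{q}_s the
% volumetric sediment transport flux.
(1 - \lambda_p)\,\frac{\partial \eta}{\partial t}
  = -\,\nabla \cdot \mathbf{q}_s
```

    Static equilibrium then corresponds to $\partial\eta/\partial t = 0$ at every instant, while the dynamic types satisfy the balance only in a time-averaged sense over an appropriately chosen scale.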

  13. "Eyes On The Solar System": A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    NASA Astrophysics Data System (ADS)

    Hussey, K. J.

    2011-10-01

    NASA's Jet Propulsion Laboratory is using videogame technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that runs inside a Web browser, was released worldwide late last year (solarsystem.nasa.gov/eyes). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft and NASA/ESA missions in action. Key scientific results, illustrated with video presentations and supporting imagery, are embedded contextually into the solar system. The presentation will include a detailed demonstration of the software along with a description/discussion of how this technology can be adapted for education and public outreach, as well as a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D," which can be viewed at climate.nasa.gov/Eyes.html.

  14. Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments.

    PubMed

    Oh, Soo Youn; Bailenson, Jeremy; Krämer, Nicole; Li, Benjamin

    2016-01-01

    Previous studies have demonstrated the positive effects of smiling on interpersonal outcomes. The present research examined whether enhancing one's smile in a virtual environment could lead to a more positive communication experience. In the current study, participants' facial expressions were tracked and mapped onto a digital avatar during a real-time dyadic conversation. The avatar's smile was rendered such that it was either a slightly enhanced version or a veridical version of the participant's actual smile. Linguistic analyses using the Linguistic Inquiry and Word Count (LIWC) tool revealed that participants who communicated with each other via avatars that exhibited enhanced smiles used more positive words to describe their interaction experience compared to those who communicated via avatars that displayed smiling behavior reflecting the participants' actual smiles. In addition, self-report measures showed that participants in the 'enhanced smile' condition felt more positive affect after the conversation and experienced stronger social presence compared to the 'normal smile' condition. These results are particularly striking considering that most participants (>90%) were unable to detect the smiling manipulation. This is the first study to demonstrate the positive effects of transforming unacquainted individuals' actual smiling behavior during a real-time avatar-networked conversation.

  15. Virtual Acoustics: Evaluation of Psychoacoustic Parameters

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    Current virtual acoustic displays for teleconferencing and virtual reality are usually limited to very simple or non-existent renderings of reverberation, a fundamental part of the acoustic environmental context encountered in day-to-day hearing. Several research efforts have produced results suggesting that environmental cues dramatically improve perceptual performance within virtual acoustic displays, and that it is possible to manipulate signal processing parameters to effectively reproduce important aspects of virtual acoustic perception in real time. However, the computational resources required for rendering reverberation remain formidable. Our efforts at NASA Ames have focused on using several perceptual threshold metrics to determine how various trade-offs might be made in real-time acoustic rendering. This includes both original work and confirmation of existing data that were obtained in real rather than virtual environments. The talk will consider the importance of using individualized versus generalized pinnae cues (the "Head-Related Transfer Function"); the use of head movement cues; threshold data for early reflections and late reverberation; and the accuracy necessary for measuring and rendering octave-band absorption characteristics of various wall surfaces. In addition, the analysis-synthesis of reverberation within "everyday spaces" (offices, conference rooms) will be contrasted with the commonly used paradigm of concert hall spaces.

  16. 2D and 3D Traveling Salesman Problem

    ERIC Educational Resources Information Center

    Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt

    2011-01-01

    When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…

  17. A temporal bone surgery simulator with real-time feedback for surgical training.

    PubMed

    Wijewickrema, Sudanthi; Ioannou, Ioanna; Zhou, Yun; Piromchai, Patorn; Bailey, James; Kennedy, Gregor; O'Leary, Stephen

    2014-01-01

    Timely feedback on surgical technique is an important aspect of surgical skill training in any learning environment, be it virtual or otherwise. Feedback on technique should be provided in real-time to allow trainees to recognize and amend their errors as they occur. Expert surgeons have typically carried out this task, but they have limited time available to spend with trainees. Virtual reality surgical simulators offer effective, repeatable training at relatively low cost, but their benefits may not be fully realized while they still require the presence of experts to provide feedback. We attempt to overcome this limitation by introducing a real-time feedback system for surgical technique within a temporal bone surgical simulator. Our evaluation study shows that this feedback system performs exceptionally well with respect to accuracy and effectiveness.

  18. A new framework for interactive quality assessment with application to light field coding

    NASA Astrophysics Data System (ADS)

    Viola, Irene; Ebrahimi, Touradj

    2017-09-01

    In recent years, light field imaging has experienced a surge of popularity, mainly due to recent advances in acquisition and rendering technologies that have made it more accessible to the public. Thanks to image-based rendering techniques, light field contents can be rendered in real time on common 2D screens, allowing virtual navigation through the captured scenes in an interactive fashion. However, this richer representation of the scene poses the problem of reliable quality assessment for light field contents. In particular, while subjective methodologies that enable interaction have already been proposed, no work has been done on assessing how users interact with light field contents. In this paper, we propose a new framework to subjectively assess the quality of light field contents in an interactive manner while simultaneously tracking user behaviour. The framework is successfully used to perform subjective assessment of two coding solutions. Moreover, statistical analysis performed on the results shows interesting correlations between subjective scores and average interaction time.

  19. Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing.

    PubMed

    Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee

    2012-05-01

    Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to enable synchronization of cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for 3-D fluorescence visualization of the oral cavity in real time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.
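
    As a minimal illustration of going from an acquired confocal Z-stack to a rendered view (a sketch of the general idea, not the FPGA pipeline described), a maximum-intensity projection along the depth axis is one of the simplest volume-rendering operators:

```python
import numpy as np

def max_intensity_projection(stack, axis=0):
    """stack: (nz, ny, nx) array of confocal slices.
    Returns a 2-D maximum-intensity projection along `axis`."""
    return stack.max(axis=axis)

# Toy stack: 32 noisy slices with a bright structure in the middle.
rng = np.random.default_rng(0)
stack = rng.random((32, 256, 256)) * 0.2
stack[14:18, 100:150, 100:150] += 0.8
mip = max_intensity_projection(stack)
```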

  20. Shoulder Kinematics and Spatial Pattern of Trapezius Electromyographic Activity in Real and Virtual Environments

    PubMed Central

    Samani, Afshin; Pontonnier, Charles; Dumont, Georges; Madeleine, Pascal

    2015-01-01

    The design of an industrial workstation tends to include ergonomic assessment steps based on a digital mock-up and a virtual reality setup. Lack of interaction and system fidelity is often reported as a main issue in such virtual reality applications. This limitation is a crucial issue, as a thorough ergonomic analysis requires an investigation of the biomechanics. In the current study, we investigated the biomechanical responses of the shoulder joint in a simulated assembly task for comparison with the biomechanical responses in virtual environments. Sixteen healthy male novice subjects performed the task on three different platforms: real (RE), virtual (VE), and virtual with force feedback (VEF), with low and high precision demands. The subjects repeated the task 12 times (i.e., 12 cycles). High-density electromyography from the upper trapezius and rotation angles of the shoulder joint were recorded and split into cycles. The angular trajectories and velocity profiles of the shoulder joint angles over a cycle were computed in 3D. The inter-subject similarity of kinematics and electromyography was investigated in terms of normalized mutual information. Compared with RE, the task in VE and VEF was characterized by lower kinematic maxima. The inter-subject similarity in RE, compared with intra-subject similarity across the platforms, was lower in terms of movement trajectories and greater in terms of trapezius muscle activation. The precision demand resulted in lower inter- and intra-subject similarity across platforms. The proposed approach identifies biomechanical differences in the shoulder joint in both VE and VEF compared with the RE platform, but these differences are less marked in VE, mostly due to technical limitations in co-localizing the force feedback system in the VEF platform. PMID:25768123
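
    The normalized mutual information used above to quantify similarity can be estimated from a joint histogram of two signals. The sketch below is a minimal illustration of that estimator; the bin count and the particular normalization, 2*I(X;Y)/(H(X)+H(Y)), are assumptions, and the paper's exact estimator may differ.

```python
import numpy as np

def normalized_mutual_information(x, y, bins=32):
    """NMI of two 1-D signals estimated via a joint histogram.
    Normalization: 2*I(X;Y) / (H(X) + H(Y)), giving a value in [0, 1]."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    hxy = -(pxy[nz] * np.log(pxy[nz])).sum()   # joint entropy
    hx = -(px[px > 0] * np.log(px[px > 0])).sum()
    hy = -(py[py > 0] * np.log(py[py > 0])).sum()
    mi = hx + hy - hxy                          # mutual information
    return 2.0 * mi / (hx + hy)
```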

  1. Evaluating Multiple Levels of an Interaction Fidelity Continuum on Performance and Learning in Near-Field Training Simulations.

    PubMed

    Bhargava, Ayush; Bertrand, Jeffrey W; Gramopadhye, Anand K; Madathil, Kapil C; Babu, Sabarish V

    2018-04-01

    With the costs of head-mounted displays (HMDs) and tracking technology decreasing rapidly, various virtual reality applications are being widely adopted for education and training. Hardware advancements have enabled replication of real-world interactions in virtual environments to a large extent, paving the way for commercial-grade applications that provide a safe and risk-free training environment at a fraction of the cost. But this also mandates the need to develop more intrinsic interaction techniques and to empirically evaluate them in a more comprehensive manner. Although there exists a body of previous research that examines the benefits of selected levels of interaction fidelity on performance, few studies have investigated the constituent components of fidelity along an Interaction Fidelity Continuum (IFC) with several system instances and their respective effects on performance and learning in the context of a real-world skills training application. Our work describes a large between-subjects investigation conducted over several years that utilizes bimanual interaction metaphors at six discrete levels of interaction fidelity to teach basic precision metrology concepts in a near-field spatial interaction task in VR. A combined analysis performed on the data compares and contrasts the six different conditions and their overall effects on performance and learning outcomes, eliciting patterns in the results between the discrete application points on the IFC. With respect to some performance variables, results indicate that simpler restrictive interaction metaphors and the highest-fidelity metaphors perform better than medium-fidelity interaction metaphors. In light of these results, a set of general guidelines is created for developers of spatial interaction metaphors in immersive virtual environments for precise fine-motor skills training simulations.

  2. How incorporation of scents could enhance immersive virtual experiences

    PubMed Central

    Ischer, Matthieu; Baron, Naëm; Mermoud, Christophe; Cayeux, Isabelle; Porcherot, Christelle; Sander, David; Delplanque, Sylvain

    2014-01-01

    Under normal everyday conditions, the senses all work together to create the experiences that fill a typical person's life. Unfortunately for the behavioral and cognitive researchers who investigate such experiences, standard laboratory tests are usually conducted in a nondescript room in front of a computer screen, very far from replicating the complexity of real-world experiences. Recently, immersive virtual reality (IVR) environments have become promising methods for immersing people in an almost real environment that involves more senses. IVR environments provide many similarities to the complexity of the real world and at the same time allow experimenters to constrain experimental parameters to obtain empirical data. This can eventually lead to better treatment options and/or new mechanistic hypotheses. The idea that increasing sensory modalities improves the realism of IVR environments has been empirically supported, but the senses used have not usually included olfaction. In this technology report, we present an odor delivery system applied to a state-of-the-art IVR technology. The platform provides a three-dimensional, immersive, and fully interactive visualization environment called the “Brain and Behavioral Laboratory—Immersive System” (BBL-IS). The solution we propose can reliably deliver various complex scents during different virtual scenarios, at a precise time and place and without contamination of the environment. The main features of this platform are: (i) limited cross-contamination between odorant streams with fast odor delivery (< 500 ms); (ii) ease of use and control; and (iii) the possibility of synchronizing odorant delivery with pictures, videos or sounds. How this unique technology could be used to investigate typical research questions in olfaction (e.g., emotional elicitation, memory encoding or attentional capture by scents) is also addressed. PMID:25101017

  3. Productive confusions: learning from simulations of pandemic virus outbreaks in Second Life

    NASA Astrophysics Data System (ADS)

    Cárdenas, Micha; Greci, Laura S.; Hurst, Samantha; Garman, Karen; Hoffman, Helene; Huang, Ricky; Gates, Michael; Kho, Kristen; Mehrmand, Elle; Porteous, Todd; Calvitti, Alan; Higginbotham, Erin; Agha, Zia

    2011-03-01

    Users of immersive virtual reality environments have reported a wide variety of side effects and after-effects, including the confusion of characteristics of the real and virtual worlds. Perhaps this side effect of confusing the virtual and real can be turned around to explore the possibilities for immersion with minimal technological support in virtual world group training simulations. This paper will describe observations from my time working as an artist/researcher with the UCSD School of Medicine (SoM) and Veterans Administration San Diego Healthcare System (VASDHS) to develop training exercises for nurses, doctors and Hospital Incident Command staff that simulate pandemic virus outbreaks. By examining moments of slippage between realities, both into and out of the virtual environment, moments of the confusion of boundaries between real and virtual, we can better understand methods for creating immersion. I will use the mixing of realities as a transversal line of inquiry, borrowing from virtual reality studies, game studies, and anthropological studies to better understand the mechanisms of immersion in virtual worlds. Focusing on drills conducted in Second Life, I will examine moments of training to learn the software interface, moments within the drill, and interviews after the drill.

  4. Demonstration of a real-time implementation of the ICVision holographic stereogram display

    NASA Astrophysics Data System (ADS)

    Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel

    1995-07-01

    There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax, without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits is created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interoccular distance form a stereoscopic pair, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves one eye pupil diameter (approximately mm), thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with a diffraction grating that is exactly that required to fill an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. This paper discusses the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings, and the AMLCD modulates the light diffracted by the gratings.
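
    The partial-pixel gratings rely on the standard grating equation: for normal incidence and first diffraction order, sin(theta) = lambda/d for grating pitch d. A quick sanity check of the pitch needed to steer light toward a viewing slit, with made-up geometry values, might look like this:

```python
import numpy as np

# Grating equation, first order, normal incidence: sin(theta) = lam / d.
lam = 550e-9      # green light, m
x_slit = 0.03     # lateral offset of the target viewing slit, m (assumed)
z_view = 0.50     # viewing distance, m (assumed)

theta = np.arctan2(x_slit, z_view)   # deflection angle toward the slit
d = lam / np.sin(theta)              # required grating pitch
print(f"deflection {np.degrees(theta):.2f} deg -> pitch {d*1e6:.2f} um")
# -> deflection 3.43 deg -> pitch ~9.2 um
```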

  5. Use of 3D techniques for virtual production

    NASA Astrophysics Data System (ADS)

    Grau, Oliver; Price, Marc C.; Thomas, Graham A.

    2000-12-01

    Virtual production for broadcast is currently used mainly in the form of virtual studios, where the resulting media is a sequence of 2D images. With the steady increase of 3D computing power in home PCs and the technical progress in 3D display technology, the content industry is looking for new kinds of program material that make use of 3D technology. The applications range from the analysis of sport scenes and 3DTV to the creation of fully immersive content. In a virtual studio, a camera films one or more actors in a controlled environment. The pictures of the actors can be segmented very accurately in real time using chroma keying techniques. The isolated silhouette can be integrated into a new synthetic virtual environment using a studio mixer. The resulting shape description of the actors is so far 2D. For the realization of more sophisticated optical interactions of the actors with the virtual environment, such as occlusions and shadows, an object-based 3D description of scenes is needed. However, the requirements on shape accuracy and the kind of representation differ according to the application. This contribution gives an overview of requirements and approaches for the generation of an object-based 3D description in various applications studied by the BBC R&D department. An enhanced virtual studio for 3D programs is proposed that covers a range of applications for virtual production.
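
    Chroma keying, as used in the virtual studio described here, amounts to classifying each pixel by its distance from the backing colour. The toy key below illustrates the idea only; the colour space, backing colour and threshold are assumptions, and production studio keyers are considerably more sophisticated.

```python
import numpy as np

def chroma_key_mask(rgb, backing=(0.0, 0.7, 0.2), tol=0.25):
    """rgb: (h, w, 3) float image in [0, 1].
    Returns a boolean foreground mask: True where the pixel is far
    enough from the backing colour to be kept as the actor."""
    dist = np.linalg.norm(rgb - np.asarray(backing), axis=-1)
    return dist > tol

def composite(fg, bg, mask):
    """Place the keyed foreground over a rendered virtual background."""
    return np.where(mask[..., None], fg, bg)
```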

  6. Satisfaction and Experience With a Supervised Home-Based Real-Time Videoconferencing Telerehabilitation Exercise Program in People with Chronic Obstructive Pulmonary Disease (COPD)

    PubMed Central

    Tsai, Ling Ling Y.; McNamara, Renae J.; Dennis, Sarah M.; Moddel, Chloe; Alison, Jennifer A.; McKenzie, David K.; McKeough, Zoe J.

    2016-01-01

    Telerehabilitation, consisting of supervised home-based exercise training via real-time videoconferencing, is an alternative method of delivering pulmonary rehabilitation with the potential to improve access. The aims were to determine the level of satisfaction with, and experience of, an eight-week supervised home-based telerehabilitation exercise program using real-time videoconferencing in people with COPD. Quantitative measures were the Client Satisfaction Questionnaire-8 (CSQ-8) and a purpose-designed satisfaction survey. A qualitative component was conducted using semi-structured interviews. Nineteen participants (mean (SD) age 73 (8) years, forced expiratory volume in 1 second (FEV1) 60 (23) % predicted) showed a high level of satisfaction on the CSQ-8, and 100% of participants reported a high level of satisfaction with the quality of the exercise sessions delivered using real-time videoconferencing in the purpose-designed satisfaction survey. Eleven participants undertook semi-structured interviews. Key themes relating to the telerehabilitation service emerged in four areas: positive virtual interaction through technology; health benefits; and satisfaction with the convenience and use of equipment. Participants were highly satisfied with the telerehabilitation exercise program delivered via videoconferencing. PMID:28775799

  7. MARTI: man-machine animation real-time interface

    NASA Astrophysics Data System (ADS)

    Jones, Christian M.; Dlay, Satnam S.

    1997-05-01

    The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, including speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker-independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications, which include: a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multimedia systems; assistance for speech training and aids for the handicapped; and player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special effects animation that has been previously unseen.
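
    Lip synchronization of the kind described is commonly built from the recognizer's phone timings plus a phone-to-viseme lookup table. The sketch below shows that general pattern; the mapping, names and timing format are illustrative assumptions, not MARTI's actual tables.

```python
# Map recognized phones to mouth shapes (visemes); illustrative subset.
PHONE_TO_VISEME = {
    "p": "closed", "b": "closed", "m": "closed",
    "f": "lip_teeth", "v": "lip_teeth",
    "aa": "open_wide", "iy": "smile", "uw": "rounded",
    "sil": "rest",
}

def viseme_track(phone_timings):
    """phone_timings: list of (phone, start_s, end_s) from the recognizer.
    Returns (viseme, start_s, end_s) keyframes for the facial model."""
    return [(PHONE_TO_VISEME.get(ph, "rest"), t0, t1)
            for ph, t0, t1 in phone_timings]

print(viseme_track([("sil", 0.00, 0.12), ("m", 0.12, 0.20),
                    ("aa", 0.20, 0.45)]))
```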

  8. Evaluating the Usability of Pinchigator, a system for Navigating Virtual Worlds using Pinch Gloves

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Brookman, Stephen; Dumas, Joseph D. II; Tilghman, Neal

    2003-01-01

    Appropriate design of two-dimensional user interfaces (2D U/I) utilizing the well-known WIMP (Window, Icon, Menu, Pointing device) environment for computer software is well studied, and guidance can be found in several standards. Three-dimensional U/I design is not nearly as mature as 2D U/I, and standards bodies have not reached consensus on what makes a usable interface. This is especially true when the tools for interacting with the virtual environment may include stereo viewing, real-time trackers and pinch gloves instead of just a mouse and keyboard. Over the last several years the authors have created a 3D U/I system dubbed Pinchigator for navigating virtual worlds, based on the dVise dV/Mockup visualization software, Fakespace Pinch Gloves and Polhemus trackers. The current work is to test the usability of the system on several virtual worlds, suggest improvements to increase Pinchigator's usability, and then to generalize about what was learned and how those lessons might be applied to improve other 3D U/I systems.

  9. An Optimized Trajectory Planning for Welding Robot

    NASA Astrophysics Data System (ADS)

    Chen, Zhilong; Wang, Jun; Li, Shuting; Ren, Jun; Wang, Quan; Cheng, Qunchao; Li, Wentao

    2018-03-01

    In order to improve welding efficiency and quality, this paper studies the combined planning of welding parameters and spatial trajectory for a welding robot and proposes a trajectory planning method with high real-time performance, strong controllability and small welding error. By adding a virtual joint at the end-effector, an appropriate virtual joint model is established in which the welding process parameters are represented by the virtual joint variables. The trajectory planning is carried out in the robot's joint space, which makes control of the welding process parameters more intuitive and convenient. Using the virtual joint model together with the affine invariance of B-spline curves, the welding process parameters are controlled indirectly by controlling the motion curves of the real joints. With minimum execution time as the objective, the welding process parameters and the joint-space trajectory are then jointly optimized.
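
    The sketch below illustrates the underlying idea of treating a process parameter as an extra "virtual joint": the real joint angles and one welding parameter (here, a hypothetical wire-feed rate) are stacked into a single joint vector, and a B-spline over that augmented space yields time-synchronized values for both. It uses SciPy's BSpline with invented control points; it is not the paper's optimizer.

      import numpy as np
      from scipy.interpolate import BSpline

      # Augmented joint space: two real joints plus one virtual joint
      # (a welding parameter such as wire-feed rate -- values invented).
      control_points = np.array([
          [0.0, 0.0, 2.0],
          [0.3, 0.5, 2.5],
          [0.8, 0.9, 3.0],
          [1.2, 1.0, 2.8],
          [1.5, 0.8, 2.2],
      ])
      degree = 3
      n = len(control_points)
      # Clamped knot vector so the curve interpolates the end points.
      knots = np.concatenate([np.zeros(degree),
                              np.linspace(0.0, 1.0, n - degree + 1),
                              np.ones(degree)])
      spline = BSpline(knots, control_points, degree)

      # Sample the trajectory: each row gives (joint1, joint2, virtual joint)
      # at one instant, so the process parameter follows the motion in lockstep.
      for u in np.linspace(0.0, 1.0, 5):
          q = spline(u)
          print(f"u={u:.2f}  joints=({q[0]:.3f}, {q[1]:.3f})  wire_feed={q[2]:.3f}")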

  10. A model for flexible tools used in minimally invasive medical virtual environments.

    PubMed

    Soler, Francisco; Luzon, M Victoria; Pop, Serban R; Hughes, Chris J; John, Nigel W; Torres, Juan Carlos

    2011-01-01

    Within the limits of current technology, many applications of a virtual environment will trade off accuracy for speed. This is not an acceptable compromise in a medical training application, where both are essential, so efficient algorithms must be developed. The purpose of this project is the development and validation of a novel physics-based real-time tool manipulation model which is easy to integrate into any medical virtual environment that requires support for the insertion of long flexible tools into complex geometries. This encompasses medical specialities such as vascular interventional radiology, endoscopy, and laparoscopy, where training, prototyping of new instruments/tools and mission rehearsal can all be facilitated by an immersive medical virtual environment. Our model recognises and accurately uses patient-specific data and adapts to the geometrical complexity of the vessel in real time.

  11. Random walks on activity-driven networks with attractiveness

    NASA Astrophysics Data System (ADS)

    Alessandretti, Laura; Sun, Kaiyuan; Baronchelli, Andrea; Perra, Nicola

    2017-05-01

    Virtually all real-world networks are dynamical entities. In social networks, the propensity of nodes to engage in social interactions (activity) and their chances to be selected by active nodes (attractiveness) are heterogeneously distributed. Here, we present a time-varying network model where each node and the dynamical formation of ties are characterized by these two features. We study how these properties affect random-walk processes unfolding on the network when the time scales describing the process and the network evolution are comparable. We derive analytical solutions for the stationary state and the mean first-passage time of the process, and we study cases informed by empirical observations of social networks. Our work shows that previously disregarded properties of real social systems, such as heterogeneous distributions of activity and attractiveness as well as the correlations between them, substantially affect the dynamical process unfolding on the network.
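
    A minimal simulation of this class of model might look as follows: each node has an activity (its firing rate) and an attractiveness (its chance of being chosen by active nodes), the network is rebuilt at every time step, and a single random walker hops along the instantaneous edges. The distributions and parameters below are illustrative, not those of the paper.

      import random

      random.seed(1)
      N = 200          # nodes
      STEPS = 1000     # time steps
      M = 3            # links created per activation

      # Heterogeneous activity and attractiveness (illustrative, heavy-tailed).
      activity = [min(1.0, 0.01 / random.random()) for _ in range(N)]
      attract = [1.0 / (i + 1) for i in range(N)]

      walker = 0
      visits = [0] * N
      for _ in range(STEPS):
          # Rebuild the instantaneous network: active nodes wire to targets
          # chosen with probability proportional to attractiveness.
          edges = {i: set() for i in range(N)}
          for i in range(N):
              if random.random() < activity[i]:
                  targets = random.choices(range(N), weights=attract, k=M)
                  for j in targets:
                      if j != i:
                          edges[i].add(j)
                          edges[j].add(i)  # ties are undirected
          # The walker moves along an edge of the current snapshot, if any.
          if edges[walker]:
              walker = random.choice(sorted(edges[walker]))
          visits[walker] += 1

      print("most-visited nodes:", sorted(range(N), key=visits.__getitem__)[-5:])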

  12. Real-time global illumination on mobile device

    NASA Astrophysics Data System (ADS)

    Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.

    2014-02-01

    We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates local illumination with the shadow map on the GPU. The second stage uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. [1] to add the indirect illumination to the local illumination on the GPU. Given the limited computing resources of mobile devices, only a small number of virtual point lights can be afforded for real-time rendering. Our approach uses a multi-resolution sampling method over 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy that collaboratively combines the CPUs and GPUs available in a mobile SoC. Experimental results demonstrate the global illumination performance of the proposed method.
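
    The core of an instant-radiosity renderer is summing, at each shaded point, the diffuse contribution of every virtual point light. The sketch below does this for a single point on the CPU; the names and the simple Lambertian/inverse-square model are illustrative, and a real renderer would evaluate this per pixel on the GPU (e.g. by splatting).

      import math

      def normalize(v):
          n = math.sqrt(sum(c * c for c in v))
          return tuple(c / n for c in v)

      def indirect_illumination(point, normal, vpls, albedo=0.8):
          """Sum diffuse contributions from virtual point lights (VPLs).

          Each VPL is (position, flux); a clamped inverse-square falloff
          avoids the singularity when a VPL lies very close to the point.
          """
          total = 0.0
          for vpl_pos, flux in vpls:
              to_light = tuple(l - p for l, p in zip(vpl_pos, point))
              dist2 = max(sum(c * c for c in to_light), 1e-2)  # clamp
              wi = normalize(to_light)
              cos_theta = max(0.0, sum(n * w for n, w in zip(normal, wi)))
              total += albedo / math.pi * cos_theta * flux / dist2
          return total

      # One shaded point lit by two VPLs generated from a reflective shadow map.
      vpls = [((1.0, 2.0, 0.5), 5.0), ((-0.5, 1.5, 1.0), 3.0)]
      print(indirect_illumination((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), vpls))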

  13. Beaming into the Rat World: Enabling Real-Time Interaction between Rat and Human Each at Their Own Scale

    PubMed Central

    Normand, Jean-Marie; Sanchez-Vives, Maria V.; Waechter, Christian; Giannopoulos, Elias; Grosswindhager, Bernhard; Spanlang, Bernhard; Guger, Christoph; Klinker, Gudrun; Srinivasan, Mandayam A.; Slater, Mel

    2012-01-01

    Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination embodied as a robotic device, and where typically participants have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented in the destination by a physical robot (TO) and simultaneously the remote place and entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, but the human interacting with the rat on a human scale, and the rat interacting with the human on the rat scale. The human is represented in a rat arena by a small robot that is slaved to the human’s movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of and interaction with animals but at human scale. PMID:23118987

  15. i3Drive, a 3D interactive driving simulator.

    PubMed

    Ambroz, Miha; Prebil, Ivan

    2010-01-01

    i3Drive, a wheeled-vehicle simulator, can accurately simulate vehicles of various configurations with up to eight wheels in real time on a desktop PC. It presents the vehicle dynamics as an interactive animation in a virtual 3D environment. The application is fully GUI-controlled, giving users an easy overview of the simulation parameters and letting them adjust those parameters interactively. It models all relevant vehicle systems, including the mechanical models of the suspension, power train, and braking and steering systems. The simulation results generally correspond well with actual measurements, making the system useful for studying vehicle performance in various driving scenarios. i3Drive is thus a worthy complement to other, more complex tools for vehicle-dynamics simulation and analysis.

  16. A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.

    PubMed

    Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis

    2018-03-01

    Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results illustrated that our method can generate visually credible AR scenes with consistent shadows rendered from recovered illumination.
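
    One building block the method relies on, computing the Sun's position from the user's location and the time of day, can be approximated in a few lines. The sketch below uses a standard low-accuracy formula (declination from day of year, elevation/azimuth from the hour angle); it ignores refraction and the equation of time, so treat it as illustrative rather than the authors' implementation.

      import math

      def sun_position(lat_deg, lon_deg, day_of_year, utc_hours):
          """Approximate solar elevation and azimuth in degrees.

          Low-accuracy model: declination from the day of year, hour angle
          from longitude-corrected solar time; refraction and the equation
          of time are ignored, so errors of a degree or two are expected.
          """
          lat = math.radians(lat_deg)
          # Declination of the Sun (radians).
          decl = math.radians(-23.44) * math.cos(
              2.0 * math.pi / 365.0 * (day_of_year + 10))
          # Hour angle: 15 degrees per hour from local solar noon.
          solar_time = utc_hours + lon_deg / 15.0
          hour_angle = math.radians(15.0 * (solar_time - 12.0))

          sin_el = (math.sin(lat) * math.sin(decl)
                    + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
          elevation = math.asin(sin_el)

          cos_az = ((math.sin(decl) - sin_el * math.sin(lat))
                    / (math.cos(elevation) * math.cos(lat)))
          azimuth = math.acos(max(-1.0, min(1.0, cos_az)))
          if hour_angle > 0:                 # afternoon: Sun west of south
              azimuth = 2.0 * math.pi - azimuth
          return math.degrees(elevation), math.degrees(azimuth)

      # Example: mid-northern latitude around local solar noon at midsummer.
      print(sun_position(48.0, 7.0, 172, 11.5))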

  17. The Whole World In Your Hands: Using an Interactive Virtual Reality Sandbox for Geospatial Education and Outreach

    NASA Astrophysics Data System (ADS)

    Clucas, T.; Wirth, G. S.; Broderson, D.

    2014-12-01

    Traditional geospatial education tools such as maps and computer screens don't convey the rich topography present on Earth, and translating the contour lines of a topographic map into the relief of a landscape can be a challenging concept to convey. A partnership between Alaska EPSCoR and the Geographic Information Network of Alaska has successfully constructed an Interactive Virtual Reality Sandbox, an education tool that projects topographic contours onto the surface of a sandbox and updates them in real time. The sandbox has been successfully deployed at public science events as well as professional geospatial and geodesy conferences. Landscape change, precipitation, and evaporation can all be modeled, much to the delight of our enthusiasts, who range in age from 3 to 90. Demonstrating the effects of events (such as dragging a hand through the sand) on a landscape, visually as well as haptically, and conveying an intuitive sense of the meaning of topographic contour lines, has proven to be engaging.

  18. Experiencing Soil Science from your office through virtual experiences

    NASA Astrophysics Data System (ADS)

    Beato, M. Carmen; González-Merino, Ramón; Campillo, M. Carmen; Fernández-Ahumada, Elvira; Ortiz, Leovigilda; Taguas, Encarnación V.; Guerrero, José Emilio

    2017-04-01

    Currently, numerous tools based on the new information and communication technologies offer a wide range of possibilities for the implementation of interactive methodologies in education and science. In particular, virtual reality and immersive worlds - artificially generated computer environments where users interact through a figurative individual that represents them in that environment (their "avatar") - have been identified as a technology that will change the way we live, particularly in education, product development and entertainment (Schmorrow, 2009). Gisbert-Cervera et al. (2011) consider that 3D worlds in education, among other benefits, provide a unique environment for training and the exchange of knowledge that supports goal-oriented reflection and the achievement of learning outcomes. In Soil Science, the experimental component is essential to acquire the knowledge necessary to understand the biogeochemical processes taking place and their interactions with time, climate, topography and the living organisms present. In this work, an immersive virtual environment that reproduces a series of soil pits has been developed for educational purposes, to evaluate and differentiate soil characteristics such as texture, structure, consistency, color and other physical-chemical and biological properties. Bibliographical material such as pictures, books and papers was collected in order to organize the information needed to build the soil profiles in the virtual environment. The development platform for the virtual recreation was Unreal Engine 4 (UE4; https://www.unrealengine.com/unreal-engine-4). This program was chosen because it provides two toolsets for programmers that can be used in tandem to accelerate development workflows. In addition, Unreal Engine 4 technology powers hundreds of games as well as real-time 3D films, training simulations and visualizations, and it creates very realistic graphics. To evaluate its impact and usefulness in teaching, a series of surveys will be presented to undergraduate students and teachers. REFERENCES: Gisbert-Cervera M., Esteve-Gonzalez V., Camacho-Marti M.M. (2011). Delve into the Deep: Learning Potential in Metaverses and 3D Worlds. eLearning Papers (25). ISSN: 1887-1542. Schmorrow D.D. (2009). Why virtual? Theoretical Issues in Ergonomics Science 10(3): 279-282.

  19. Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.

    PubMed

    Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh

    2011-01-01

    We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input. © 2011 IEEE. Published by the IEEE Computer Society.

  1. Virtual healthcare delivery: defined, modeled, and predictive barriers to implementation identified.

    PubMed Central

    Harrop, V. M.

    2001-01-01

    Provider organizations lack: 1. a definition of "virtual" healthcare delivery relative to the products, services, and processes offered by dot.coms, web-compact disk healthcare content providers, telemedicine, and telecommunications companies, and 2. a model for integrating real and virtual healthcare delivery. This paper defines virtual healthcare delivery as asynchronous, outsourced, and anonymous, then proposes a 2x2 Real-Virtual Healthcare Delivery model focused on real and virtual patients and real and virtual provider organizations. Using this model, provider organizations can systematically deconstruct healthcare delivery in the real world and reconstruct appropriate pieces in the virtual world. Observed barriers to virtual healthcare delivery are: resistance to telecommunication integrated delivery networks and outsourcing; confusion over virtual infrastructure requirements for telemedicine and full-service web portals, and the impact of integrated delivery networks and outsourcing on extant cultural norms and revenue generating practices. To remain competitive provider organizations must integrate real and virtual healthcare delivery. PMID:11825189

  2. Real-time recording and classification of eye movements in an immersive virtual environment.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
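
    As an example of the kind of basic analysis the primer covers, the sketch below classifies gaze samples into fixations and saccades with a simple velocity threshold (I-VT): the angular distance between successive gaze direction vectors, divided by the sample interval, is compared against a threshold. The data, sampling rate, and 100 deg/s threshold are illustrative assumptions, not the authors' algorithm verbatim.

      import math

      def angle_between(v1, v2):
          """Angular distance in degrees between two gaze direction vectors."""
          dot = sum(a * b for a, b in zip(v1, v2))
          n1 = math.sqrt(sum(a * a for a in v1))
          n2 = math.sqrt(sum(b * b for b in v2))
          cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))
          return math.degrees(math.acos(cos_a))

      def classify_ivt(gaze_dirs, sample_rate_hz, threshold_deg_s=100.0):
          """Label each inter-sample interval 'fixation' or 'saccade'."""
          dt = 1.0 / sample_rate_hz
          labels = []
          for a, b in zip(gaze_dirs, gaze_dirs[1:]):
              velocity = angle_between(a, b) / dt   # deg/s
              labels.append("saccade" if velocity > threshold_deg_s
                            else "fixation")
          return labels

      # Toy trace at 60 Hz: steady gaze, a rapid shift, then steady again.
      trace = [(0.0, 0.0, 1.0)] * 3 + [(0.3, 0.0, 1.0)] * 3
      print(classify_ivt(trace, 60))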

  4. Interactive, Online, Adsorption Lab to Support Discovery of the Scientific Process

    NASA Astrophysics Data System (ADS)

    Carroll, K. C.; Ulery, A. L.; Chamberlin, B.; Dettmer, A.

    2014-12-01

    Science students require more than methods practice in lab activities; they must gain an understanding of the application of the scientific process through lab work. Large classes, time constraints, and funding may limit student access to science labs, denying students access to the types of experiential learning needed to motivate and develop new scientists. Interactive, discovery-based computer simulations and virtual labs provide an alternative, low-risk opportunity for learners to engage in lab processes and activities. Students can conduct experiments, collect data, draw conclusions, and even abort a session. We have developed an online virtual lab, through which students can interactively develop as scientists as they learn about scientific concepts, lab equipment, and proper lab techniques. Our first lab topic is adsorption of chemicals to soil, but the methodology is transferrable to other topics. In addition to learning the specific procedures involved in each lab, the online activities will prompt exploration and practice in key scientific and mathematical concepts, such as unit conversion, significant digits, assessing risks, evaluating bias, and assessing quantity and quality of data. These labs are not designed to replace traditional lab instruction, but to supplement instruction on challenging or particularly time-consuming concepts. To complement classroom instruction, students can engage in a lab experience outside the lab and over a shorter time period than often required with real-world adsorption studies. More importantly, students can reflect, discuss, review, and even fail at their lab experience as part of the process to see why natural processes and scientific approaches work the way they do. Our Media Productions team has completed a series of online digital labs available at virtuallabs.nmsu.edu and scienceofsoil.com, and these virtual labs are being integrated into coursework to evaluate changes in student learning.
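
    Since the lab's first topic is adsorption of chemicals to soil, the data-analysis step students would practice can be illustrated by fitting a Langmuir isotherm, q = q_max * K * C / (1 + K * C), to batch-experiment data. The measurements and starting values below are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(c, q_max, k):
          """Langmuir isotherm: sorbed amount as a function of concentration."""
          return q_max * k * c / (1.0 + k * c)

      # Hypothetical batch data: equilibrium concentration (mg/L) vs
      # amount sorbed to soil (mg/kg).
      c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
      q_obs = np.array([12.0, 21.0, 34.0, 55.0, 68.0, 78.0])

      (q_max, k), _ = curve_fit(langmuir, c_eq, q_obs, p0=[80.0, 0.1])
      print(f"q_max = {q_max:.1f} mg/kg, K = {k:.3f} L/mg")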

  5. Tile-Image Merging and Delivering for Virtual Camera Services on Tiled-Display for Real-Time Remote Collaboration

    NASA Astrophysics Data System (ADS)

    Choe, Giseok; Nang, Jongho

    The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using shared applications whose outputs are displayed on a large-scale, high-resolution tiled display controlled by a cluster of PCs, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real time. This paper presents a mechanism for capturing all activities on the tiled-display system and delivering them to remote participants in real time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screen shot of the whole tiled display, clips a Region of Interest (ROI), compresses it and streams it to remote participants in real time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging, so that the tile images displayed at the same time on the tiled display can be properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capturing-time-based algorithm, and an algorithm based on capturing time and visual consistency. It also proposes a mechanism for providing remote participants with several virtual cameras on the tiled-display system by concurrently clipping several different ROIs from the same merged tiled-display images and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can monitor the activities on the tiled display effectively. Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and that the ROI-based clipping and delivering mechanism can provide individual views of the tiled-display system to multiple remote participants in real time.
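
    The capturing-time-based selection mentioned above can be sketched as follows: given one buffer of timestamped frames per tile PC, pick, for each buffer, the frame whose capture time is closest to a common reference instant (here, the most recent timestamp that every buffer has reached), so the merged image is as temporally consistent as possible. The buffer layout and names are illustrative, not the paper's implementation.

      def select_consistent_frames(buffers):
          """Pick one frame per tile so capture times match as closely as possible.

          buffers: list (one entry per tile PC) of lists of (capture_time, frame)
          tuples, each sorted by capture_time. Returns the chosen frames.
          """
          # Reference instant: the latest time that every buffer has reached,
          # so no tile is asked for a frame it has not captured yet.
          reference = min(buf[-1][0] for buf in buffers)

          chosen = []
          for buf in buffers:
              # Frame whose capture time is nearest the reference instant.
              t, frame = min(buf, key=lambda tf: abs(tf[0] - reference))
              chosen.append(frame)
          return chosen

      # Toy example: three tiles capturing at slightly different instants.
      buffers = [
          [(0.00, "A0"), (0.10, "A1"), (0.20, "A2")],
          [(0.02, "B0"), (0.12, "B1")],
          [(0.05, "C0"), (0.15, "C1"), (0.25, "C2")],
      ]
      print(select_consistent_frames(buffers))  # frames nearest t = 0.12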

  6. Minimizing Input-to-Output Latency in Virtual Environment

    NASA Technical Reports Server (NTRS)

    Adelstein, Bernard D.; Ellis, Stephen R.; Hill, Michael I.

    2009-01-01

    A method and apparatus were developed to minimize latency (time delay) in virtual environment (VE) and other discrete-time computer-based systems that require real-time display in response to sensor inputs. Latency in such systems is due to the sum of the finite time required for information processing and communication within and between sensors, software, and displays.

  7. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within octave bands from 250 Hz to 2 kHz. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.
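
    The early reflections whose audibility thresholds are discussed above are typically computed with the image-source method: each wall of a (here rectangular) room mirrors the source, and every image source contributes a delayed, attenuated copy of the signal. The sketch below computes first-order reflection delays and relative levels for a shoebox room; it is a textbook illustration, not the measurement setup of the study.

      import math

      SPEED_OF_SOUND = 343.0  # m/s

      def first_order_reflections(room, src, lis):
          """Delays (ms) and levels (dB re direct path) of first-order images.

          room: (Lx, Ly, Lz) dimensions of a shoebox room in metres;
          src, lis: source and listener positions. Walls are assumed to be
          perfect reflectors (no absorption) for simplicity.
          """
          direct = math.dist(src, lis)
          results = []
          for axis in range(3):
              for wall in (0.0, room[axis]):
                  image = list(src)
                  image[axis] = 2.0 * wall - src[axis]  # mirror across wall
                  d = math.dist(image, lis)
                  delay_ms = (d - direct) / SPEED_OF_SOUND * 1000.0
                  level_db = 20.0 * math.log10(direct / d)  # 1/r spreading
                  results.append((delay_ms, level_db))
          return sorted(results)

      for delay, level in first_order_reflections((6.0, 4.0, 3.0),
                                                  (2.0, 1.5, 1.2),
                                                  (4.0, 2.5, 1.2)):
          print(f"delay {delay:6.2f} ms, level {level:6.2f} dB")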

  8. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm for increasing the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important step toward future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over, devoted especially to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. The results of a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  9. Combination of Virtual Tours, 3d Model and Digital Data in a 3d Archaeological Knowledge and Information System

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Brigand, N.

    2012-08-01

    The site of the ruined Engelbourg castle in Thann, Alsace, France, has for some years been the object of all the attention of the city, which is the owner, and also of partners such as the historians and archaeologists in charge of its study. The valorization of the site is one of the main objectives, as well as its conservation and its documentation. The aim of this project is to use the environment of a virtual tour viewer as the new basis for an Archaeological Knowledge and Information System (AKIS). With available development tools we add functionality, in particular through diverse scripts that convert the viewer into a real 3D interface. Beginning with a first virtual tour containing about fifteen panoramic images, the site of about 150 by 150 meters can be completely documented, offering the user real interactivity and making the visualization very concrete, almost lively. After the choice of pertinent points of view, panoramic images were acquired. For the documentation, other sets of images were acquired in various seasons and climate conditions, which allows the site to be documented in different environments and states of vegetation; the final virtual tour was derived from them. The initial 3D model of the castle, likewise virtual, was also included in the form of panoramic images to complete the understanding of the site. A variety of hotspot types were used to connect the whole digital documentation to the site, including videos (reports during the acquisition phases, the restoration works, the excavations, etc.) and georeferenced digital documents (archaeological reports on the various constituent elements of the castle, interpretations of the excavations and surveys, descriptions of the sets of collected objects, etc.). The fully customized interface of the system allows the user either to switch from one panoramic image to another, which is the classic case of virtual tours, or to go from a panoramic photographic image to a panoramic virtual image. It also allows digital data to be visualized as overlays, such as ancient or recent plans, cross sections, descriptions, explanatory videos, sound commentary, etc. The project has led to very convincing results, validated by the historians and archaeologists, who now have an interactive tool, disseminated through the Internet, that allows the castle to be visited virtually and the system to be queried for localized information. The various levels of understanding and detail allow a first-level approach for general Internet users, but also a deeper approach for the group of scientists associated with the development of the ruins of the castle and its environment.

  10. Augmented Reality versus Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketom, Takafumi; Sandor, Christian; Kato, Hirokazu

    2018-02-01

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively, and for most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured as task completion time, on a 9-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion times in AR than in VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times in each condition (AR or VR). We further found no differences in reported comfort.

  11. Applications of virtual reality technology in pathology.

    PubMed

    Grimes, G J; McClellan, S A; Goldman, J; Vaughn, G L; Conner, D A; Kujawski, E; McDonald, J; Winokur, T; Fleming, W

    1997-01-01

    TelePath(SM) is a telerobotic system utilizing virtual microscope concepts, based on high-quality still digital imaging and aimed at real-time support for surgery through remote diagnosis of frozen sections. Many hospitals and clinics have an application for the remote practice of pathology, particularly in the area of reading frozen sections in support of surgery, commonly called anatomic pathology. The goal is to project the expertise of the pathologist into the remote setting by giving the pathologist access to the microscope slides with an image quality and human interface comparable to what the pathologist would experience at a real rather than a virtual microscope. A working prototype of a virtual microscope has been defined and constructed which has the needed performance in both image quality and the human interface for a pathologist to work remotely. This is accomplished through the use of telerobotics and an image quality that gives the virtual microscope the same diagnostic capabilities as a real microscope. The examination of frozen sections is performed in a two-dimensional world. The remote pathologist is in a virtual world with the same capabilities as a "real" microscope, but response times may be slower depending on the specific computing and telecommunication environments. The TelePath system has capabilities far beyond a normal biological microscope, such as the ability to create a low-power image of the entire sample from multiple images digitally matched together; the ability to digitally retrace a viewing trajectory; and the ability to archive images using CD-ROM and other mass storage devices.

  12. Virtual reality training and assessment in laparoscopic rectum surgery.

    PubMed

    Pan, Jun J; Chang, Jian; Yang, Xiaosong; Liang, Hui; Zhang, Jian J; Qureshi, Tahseen; Howell, Robert; Hickish, Tamas

    2015-06-01

    Virtual-reality (VR) based simulation techniques offer an efficient and low-cost alternative to conventional surgery training. This article describes a VR training and assessment system for laparoscopic rectum surgery. To give a realistic visual rendering of the interaction between membrane tissue and surgical tools, a generalized-cylinder-based collision detection scheme and a multi-layer mass-spring model are presented. A dynamic assessment model is also designed for hierarchical training evaluation. With this simulator, trainees can operate on the virtual rectum with simultaneous visual and haptic feedback, and the system offers surgeons instructions in real time when improper manipulation occurs. The simulator has been tested and evaluated by ten subjects, and the prototype system has been verified by colorectal surgeons through a pilot study. They believe the visual performance and the tactile feedback are realistic, and that the system has the potential to effectively improve the surgical skills of trainee surgeons and significantly shorten their learning curve. Copyright © 2014 John Wiley & Sons, Ltd.
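
    A multi-layer mass-spring model like the one described updates node positions from spring forces at each time step; the minimal single-layer sketch below integrates a chain of masses and springs with damped explicit Euler. Layer structure, collision handling, and haptics are omitted, and all constants are illustrative, not the paper's parameters.

      # Minimal 1D mass-spring chain integrated with damped explicit Euler.
      N = 5            # masses
      REST = 1.0       # rest length between neighbours (m)
      K = 50.0         # spring stiffness (N/m)
      DAMP = 0.5       # velocity damping (N*s/m)
      MASS = 0.1       # kg
      DT = 0.005       # time step (s)

      pos = [i * REST for i in range(N)]
      vel = [0.0] * N
      pos[-1] += 0.4   # pull the free end to start the motion

      for step in range(200):
          forces = [0.0] * N
          for i in range(N - 1):
              stretch = (pos[i + 1] - pos[i]) - REST
              f = K * stretch            # Hooke's law along the chain
              forces[i] += f
              forces[i + 1] -= f
          for i in range(1, N):          # node 0 is fixed (anchored end)
              forces[i] -= DAMP * vel[i]
              vel[i] += forces[i] / MASS * DT
              pos[i] += vel[i] * DT

      print([round(p, 3) for p in pos])  # settles back toward rest spacing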

  13. Kinect-based virtual rehabilitation and evaluation system for upper limb disorders: A case study.

    PubMed

    Ding, W L; Zheng, Y Z; Su, Y P; Li, X L

    2018-04-19

    To help patients with disabilities of the arm and shoulder recover the accuracy and stability of movements, a novel and simple virtual rehabilitation and evaluation system called the Kine-VRES system was developed using Microsoft Kinect. First, several movements and virtual tasks were designed to increase the coordination, control and speed of the arm movements. The movements of the patients were then captured using the Kinect sensor, and kinematics-based interaction and real-time feedback were integrated into the system to enhance the motivation and self-confidence of the patient. Finally, a quantitative evaluation method of upper limb movements was provided using the recorded kinematics during hand-to-hand movement. A preliminary study of this rehabilitation system indicates that the shoulder movements of two participants with ataxia became smoother after three weeks of training (one hour per day). This case study demonstrated the effectiveness of the designed system, which could be promising for the rehabilitation of patients with upper limb disorders.

  14. Interfacing modeling suite Physics Of Eclipsing Binaries 2.0 with a Virtual Reality Platform

    NASA Astrophysics Data System (ADS)

    Harriett, Edward; Conroy, Kyle; Prša, Andrej; Klassner, Frank

    2018-01-01

    To explore alternate methods for modeling eclipsing binary stars, we extend PHOEBE's (PHysics Of Eclipsing BinariEs) capabilities into a virtual reality (VR) environment to create an immersive and interactive experience for users. The application used is Vizard, a Python-scripted VR development platform for environments such as the Cave Automatic Virtual Environment (CAVE) and off-the-shelf VR headsets. Vizard allows all modeling to be precompiled without compromising functionality or usability. The system requires five arguments to be precomputed using PHOEBE's Python front-end: the effective temperature, flux, relative intensity, vertex coordinates, and orbits; the user can opt to expose other PHOEBE features within the simulation as well. Here we present the method for making the data observables accessible in real time. An Oculus Rift will be available for a live showcase of VR renderings of various PHOEBE binary systems, including detached and contact binaries.

  15. Efficient system modeling for a small animal PET scanner with tapered DOI detectors.

    PubMed

    Zhang, Mengxi; Zhou, Jian; Yang, Yongfeng; Rodríguez-Villafuerte, Mercedes; Qi, Jinyi

    2016-01-21

    A prototype small animal positron emission tomography (PET) scanner for mouse brain imaging has been developed at UC Davis. The new scanner uses tapered detector arrays with depth-of-interaction (DOI) measurement. In this paper, we present an efficient system model for the tapered PET scanner using matrix factorization and a virtual scanner geometry. The factored system matrix consists mainly of two components: a sinogram blurring matrix and a geometrical matrix. The geometrical matrix is based on a virtual scanner geometry, while the sinogram blurring matrix is estimated by matrix factorization. We investigate the performance of different virtual scanner geometries. Both simulation studies and real-data experiments are performed in the fully 3D mode to study image quality under different system models. The results indicate that the proposed matrix factorization can maintain image quality while substantially reducing the image reconstruction time and the system-matrix storage cost. The proposed method can also be applied to other PET scanners with DOI measurement.
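
    The factorization described above lets the forward projection be applied as two cheap steps instead of one dense matrix: a geometric projection G of the image followed by a sinogram blurring B, i.e. y = B(Gx). The sketch below applies such a factored model with small random stand-in matrices; the real G is sparse and ray-driven, and B is estimated from data, as in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      n_voxels, n_lors = 400, 300   # tiny stand-in dimensions
      # Geometric projection for an idealized (virtual) scanner geometry:
      # sparse in practice; a randomly sparsified stand-in here.
      G = rng.random((n_lors, n_voxels)) * (rng.random((n_lors, n_voxels)) < 0.02)
      # Sinogram blurring, estimated by matrix factorization in the paper;
      # here, a simple banded smoothing along the sinogram dimension.
      B = np.zeros((n_lors, n_lors))
      for i in range(n_lors):
          for j in range(max(0, i - 2), min(n_lors, i + 3)):
              B[i, j] = [0.05, 0.25, 0.4, 0.25, 0.05][j - i + 2]

      x = rng.random(n_voxels)      # stand-in image

      # Factored forward projection: y = B(Gx); storing B and G separately
      # is far cheaper than storing the full system matrix A = B @ G.
      y = B @ (G @ x)
      print(y.shape, float(y.sum()))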

  16. Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality.

    PubMed

    Cushman, Laura A; Stein, Karen; Duffy, Charles J

    2008-09-16

    Older adults get lost, in many cases because of recognized or incipient Alzheimer disease (AD). In either case, getting lost can be a threat to individual and public safety, as well as to personal autonomy and quality of life. Here we compare our previously described real-world navigation test with a virtual reality (VR) version simulating the same navigational environment. Quantifying real-world navigational performance is difficult and time-consuming. VR testing is a promising alternative, but it has not been compared with closely corresponding real-world testing in aging and AD. We have studied navigation using both real-world and virtual environments in the same subjects: young normal controls (YNCs, n = 35), older normal controls (ONCs, n = 26), patients with mild cognitive impairment (MCI, n = 12), and patients with early AD (EAD, n = 14). We found close correlations between real-world and virtual navigational deficits that increased across groups from YNC to ONC, to MCI, and to EAD. Analyses of subtest performance showed similar profiles of impairment in real-world and virtual testing in all four subject groups. The ONC, MCI, and EAD subjects all showed greatest difficulty in self-orientation and scene localization tests. MCI and EAD patients also showed impaired verbal recall about both test environments. Virtual environment testing provides a valid assessment of navigational skills. Aging and Alzheimer disease (AD) share the same patterns of difficulty in associating visual scenes and locations, which is complicated in AD by the accompanying loss of verbally mediated navigational capacities. We conclude that virtual navigation testing reveals deficits in aging and AD that are associated with potentially grave risks to our patients and the community.

  17. An augmented reality tool for learning spatial anatomy on mobile devices.

    PubMed

    Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti

    2017-09-01

    Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in this study was to create a prototype AR tool for mobile devices; our second aim was to complete a technical evaluation of the prototype focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT) derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that the AR tool renders 3D virtual objects with a high level of accuracy, as evidenced by the degree of similarity between measurements of the dimensions of the virtual cube and the corresponding physical cube. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT-derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.

  18. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Chéreau, Fabien

    2012-04-01

    VirGO is the next-generation Visual Browser for the ESO Science Archive Facility, developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium, adding capabilities for browsing professional astronomical data. VirGO gives astronomers the possibility to easily discover and select data from millions of observations in a new visual and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a 'real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system.

  19. Tangible imaging systems

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2013-03-01

    We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet-shaped form factors. Custom software allows the orientation of a device and the position of the observer to be tracked in real time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook - our first implementation on a laptop computer; tangiView - a more refined implementation on a tablet device; tangiPaint - a tangible digital painting application; and phantoView - an application that takes the tangible imaging concept into stereoscopic 3D.

  20. Programming Models for Concurrency and Real-Time

    NASA Astrophysics Data System (ADS)

    Vitek, Jan

    Modern real-time applications are increasingly large, complex and concurrent systems which must meet stringent performance and predictability requirements. Programming those systems requires fundamental advances in programming languages and runtime systems. This talk presents our work on Flexotasks, a programming model for concurrent, real-time systems inspired by stream processing and concurrent active objects. Among its key innovations, Flexotasks supports both real-time garbage collection and region-based memory with an ownership type system for static safety. Communication between tasks is performed by channels with a linear type discipline, to avoid copying messages, and by a non-blocking transactional memory facility. We have evaluated our model empirically within two distinct implementations, one based on Purdue's Ovm research virtual machine framework and the other on WebSphere, IBM's production real-time virtual machine. We have written a number of small programs, as well as a 30 KLOC avionics collision detector application. We show that Flexotasks are capable of executing periodic threads at 10 KHz with a standard deviation of 1.2 µs and have performance competitive with hand-coded C programs.

  1. ISSLive!

    NASA Technical Reports Server (NTRS)

    Price, Jennifer B.; Snook, Bryan

    2011-01-01

    The ISSLive! project is a JSC innovation award-winning, combined MOD/Education project to publish export-control- and PAO-approved ISS telemetry and simplified, scrubbed crew timelines. The publication of this data will be in real time or near real time and will include links to the crew's social media feeds and existing public streaming video/audio feeds, via a public-friendly website and mobile device and tablet applications. Additionally, the project will offer interactive virtual 3D views of an ISS model based on real-time telemetry, and a 3D virtual mission control center based on existing Front Room console positions, made for public display. The ISSLive! project is MOD-managed and includes collaboration with subject-matter experts from the ISS flight controllers regarding daily operations and planning, education program specialists from the JSC Office of Education, instructional designers, human-computer interface experts, software/hardware experts from the MOD facility organization, and senior web designers. In support of the Agency's Strategic Goal #6 with respect to using the ISS National Laboratory for education activities, ISSLive! uses the Station itself as STEM education subject matter and provides data for STEM-based lesson plans using national standards. Specifically, ISSLive! supports and enables the National Laboratory Education (NLE) project to address the Agency's Strategic Goal #6, which mandates sharing NASA with the public, educators, and students to provide opportunities to participate in our Mission and foster innovation. ISSLive! satisfies the outcomes of this Strategic Goal; that is, it engages the public in NASA's missions by providing new pathways for participation (Outcome 6.3), and it informs, engages, and inspires the public by sharing NASA's missions, challenges, and results (Outcome 6.4). Additionally, ISSLive! enables MOD's support of JSC Outreach and NASA's Open Data and Open Government Initiatives. The audience for the ISSLive! website and its applications comprises teachers, students, citizen scientists, and the general public, who will be given new and interactive insights into how the ISS operates.

  2. Interacting With A Near Real-Time Urban Digital Watershed Using Emerging Geospatial Web Technologies

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Fazio, D. J.; Abdelzaher, T.; Minsker, B.

    2007-12-01

    The value of real-time hydrologic data dissemination, including river stage, streamflow, and precipitation, for operational stormwater management efforts is particularly high for communities where flash flooding is common and costly. Ideally, such data would be presented within a watershed-scale geospatial context to portray a holistic view of the watershed. Local hydrologic sensor networks usually lack comprehensive integration with sensor networks managed by other agencies sharing the same watershed, due to administrative, political, but mostly technical barriers. Recent efforts on providing unified access to hydrological data have concentrated on creating new SOAP-based web services and common data formats (e.g. WaterML and the Observation Data Model) for users to access the data (e.g. HIS and HydroSeek). Geospatial Web technology, including OGC Sensor Web Enablement (SWE), GeoRSS, geotags, geospatial browsers such as Google Earth and Microsoft Virtual Earth, and other location-based service tools, makes it possible to interact with a digital watershed in near real time. OGC SWE proposes a revolutionary concept of web-connected, controllable sensor networks. However, these efforts have not provided the capability for dynamic data integration/fusion among heterogeneous sources, data filtering, or support for workflows and domain-specific applications where both push and pull modes of retrieving data may be needed. We propose a lightweight integration framework that extends SWE with an open-source Enterprise Service Bus (e.g., Mule) as a backbone component to dynamically transform, transport, and integrate both heterogeneous sensor data sources and simulation model outputs. We will report our progress on building such a framework, in which multi-agency sensor data and hydro-model outputs (with map layers) will be integrated and disseminated in a geospatial browser (e.g. Microsoft Virtual Earth). This is a collaborative project among NCSA, the USGS Illinois Water Science Center, and the Computer Science Department at UIUC, funded by the Adaptive Environmental Infrastructure Sensing and Information Systems initiative at UIUC.

  3. Technical note: real-time web-based wireless visual guidance system for radiotherapy.

    PubMed

    Lee, Danny; Kim, Siyong; Palta, Jatinder R; Kim, Taeho

    2017-06-01

    We describe a Web-based wireless visual guidance system that mitigates the issues associated with hard-wired, audio-visual aided, patient-interactive motion management systems, which are cumbersome to use in routine clinical practice. The Web-based wireless visual display duplicates the existing visual display of a respiratory-motion management system for visual guidance. The visual display of the existing system is sent to legacy Web clients over a private wireless network, thereby allowing a wireless setting for real-time visual guidance. In this study, an active breathing coordinator (ABC) trace was used as the input for the visual display, which was captured and transmitted to the Web clients. Virtual reality goggles require two images (left- and right-eye views) for visual display. We investigated the performance of Web-based wireless visual guidance by quantifying (1) the network latency of the visual display between an ABC computer display and the Web clients (a laptop, an iPad mini 2 and an iPhone 6), and (2) the frame rate of the visual display on the Web clients in frames per second (fps). The network latency between the ABC computer and the Web clients was about 100 ms, and the frame rates were 14.0 fps (laptop), 9.2 fps (iPad mini 2) and 11.2 fps (iPhone 6). In addition, the visual display for virtual reality goggles was successfully shown on the iPhone 6 at 100 ms and 11.2 fps. High network security was maintained by utilizing the private network configuration. This study demonstrated that Web-based wireless visual guidance can be a promising technique for clinical motion management systems that require real-time visual display of their outputs. Based on the results of this study, our approach has the potential to reduce the clutter associated with wired systems, reduce space requirements, and extend the use of medical devices from static usage to interactive and dynamic usage in a radiotherapy treatment vault.
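
    The latency figure reported above can be reproduced in spirit with a timestamp round-trip measurement. The sketch below measures median round-trip time over a local socket pair standing in for the link between the ABC computer and a Web client; it is an illustrative measurement harness, not the study's instrumentation.

      import socket
      import statistics
      import threading
      import time

      def echo_server(sock):
          # Bounce every received message straight back to the sender.
          while True:
              data = sock.recv(1024)
              if not data:
                  break
              sock.sendall(data)

      a, b = socket.socketpair()
      threading.Thread(target=echo_server, args=(b,), daemon=True).start()

      samples = []
      for _ in range(100):
          t0 = time.perf_counter()
          a.sendall(b"ping")          # stand-in for one display update
          a.recv(1024)                # wait for the echoed copy
          samples.append((time.perf_counter() - t0) * 1000.0)

      print(f"median round-trip: {statistics.median(samples):.3f} ms")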

  4. Biological Visualization, Imaging and Simulation (Bio-VIS) at NASA Ames Research Center: Developing New Software and Technology for Astronaut Training and Biology Research in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2003-01-01

    The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high-resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields, from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for the basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time, physically based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time, physically based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications, including CAD development, procedure design and simulation of human-system interaction in a desktop-sized work volume.

  5. Research on the integration of the master of ceremonies or players with the virtual scene in a virtual studio

    NASA Astrophysics Data System (ADS)

    Li, Zili; Zhu, Guangxi; Zhu, Yaoting

    2003-04-01

    A technical principle for the construction of a virtual studio is proposed, in which an orientation tracker and telemeter are used to improve a conventional BETACAM pickup camera and connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward; it differs from the common model in computer graphics and is bound to the real BETACAM pickup camera for shooting. A formula is derived to compute the foreground and background frame-buffer images of the virtual scene, whose boundary is based on the depth of the target point along the real BETACAM camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the CG image sequences of the virtual scene in spatial position, perspective relationship and object masking. The experimental results show that the technological scheme for constructing a virtual studio presented in this paper is feasible, and is more applicable and effective than the existing approach of building a virtual studio based on colour keying and background image synthesis using non-linear video editing.
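
    The derived formula itself is not reproduced in this record, but the depth-splitting idea can be sketched as follows (a minimal illustration with assumed buffer layouts, not the paper's formula): the virtual render is split into layers behind and in front of the talent's depth, and the keyed live video is composited between them.

```python
# Illustrative depth-split compositing: background -> keyed talent ->
# virtual foreground. All array layouts are assumptions for the sketch.
import numpy as np

def composite(virtual_rgb, virtual_depth, live_rgb, live_alpha, talent_depth):
    """virtual_rgb: HxWx3; virtual_depth: HxW; live_alpha: HxW in [0,1]."""
    fg_mask = (virtual_depth < talent_depth)[..., None]   # in front of talent
    background = np.where(fg_mask, 0.0, virtual_rgb)      # behind talent
    frame = background * (1.0 - live_alpha[..., None]) \
          + live_rgb * live_alpha[..., None]              # talent over background
    return np.where(fg_mask, virtual_rgb, frame)          # foreground occludes

# Tiny smoke test with random buffers
h, w = 4, 4
out = composite(np.random.rand(h, w, 3), np.random.rand(h, w),
                np.random.rand(h, w, 3), np.random.rand(h, w), talent_depth=0.5)
assert out.shape == (h, w, 3)
```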

  6. Development of Virtual Airspace Simulation Technology - Real-Time (VAST-RT) Capability 2 and Experimental Plans

    NASA Technical Reports Server (NTRS)

    Lehmer, R.; Ingram, C.; Jovic, S.; Alderete, J.; Brown, D.; Carpenter, D.; LaForce, S.; Panda, R.; Walker, J.; Chaplin, P.; hide

    2006-01-01

    The Virtual Airspace Simulation Technology - Real-Time (VAST-RT) Project, an element of NASA's Virtual Airspace Modeling and Simulation (VAMS) Project, has been developing a distributed simulation capability that supports an extensible and expandable real-time, human-in-the-loop airspace simulation environment. The VAST-RT system architecture is based on the DoD High Level Architecture (HLA) and the VAST-RT HLA Toolbox, a common interface implementation that incorporates a number of novel design features. The scope of the initial VAST-RT integration activity (Capability 1) included the high-fidelity human-in-the-loop simulation facilities located at NASA Ames Research Center and medium-fidelity pseudo-piloted target generators, such as the Airspace Traffic Generator (ATG) being developed as part of VAST-RT, as well as other real-time tools. This capability has been demonstrated in a gate-to-gate simulation. VAST-RT Capability 2A has recently been completed, and this paper discusses the improved integration of the real-time assets into VAST-RT, including the development of tools that combine data collected across the simulation environment into a single data set for the researcher. Current plans for the completion of the VAST-RT distributed simulation environment (Capability 2B) and its use to evaluate future airspace-capacity-enhancing concepts being developed by VAMS are discussed. Additionally, the simulation environment's application to other airspace and airport research projects is addressed.

  7. Simulation for transthoracic echocardiography of aortic valve

    PubMed Central

    Nanda, Navin C.; Kapur, K. K.; Kapoor, Poonam Malhotra

    2016-01-01

    Simulation allows interactive transthoracic echocardiography (TTE) learning using a virtual three-dimensional model of the heart and may aid in the acquisition of the cognitive and technical skills needed to perform TTE. The ability to link probe manipulation, cardiac anatomy, and echocardiographic images using a simulator has been shown to be an effective model for training anesthesiology residents in transesophageal echocardiography. A proposed alternative to real-time, patient-based learning is simulation-based training that allows anesthesiologists to learn complex concepts and procedures, especially for specific structures such as the aortic valve. PMID:27397455

  8. Virtual rounds: simulation-based education in procedural medicine

    NASA Astrophysics Data System (ADS)

    Shaffer, David W.; Meglan, Dwight A.; Ferrell, Margaret; Dawson, Steven L.

    1999-07-01

    Computer-based simulation is a goal for training physicians in specialties where traditional training puts patients at risk. Intuitively, interactive simulation of anatomy, pathology, and therapeutic actions should lead to shortening of the learning curve for novice or inexperienced physicians. Effective transfer of knowledge acquired in simulators must be shown for such devices to be widely accepted in the medical community. We have developed an Interventional Cardiology Training Simulator which incorporates real-time graphic interactivity coupled with haptic response, and an embedded curriculum permitting rehearsal, hypertext links, personal archiving and instructor review and testing capabilities. This linking of purely technical simulation with educational content creates a more robust educational purpose for procedural simulators.

  9. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids the psychological side effects associated with stereoscopic viewing. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving both virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. The discussion then focuses on depth extraction from captured integral 3D images. The method for calculating depth from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD (sum of squared differences) and a further improvement of its precision are proposed and verified.
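
    As an illustration of the matching idea, the following sketch combines colour SSD block matching with a multiple-baseline cost sum in the spirit of Okutomi and Kanade; the window size, search range and demo data are assumptions, not the paper's implementation:

```python
# Colour SSD with a multiple-baseline sum (illustrative sketch only).
import numpy as np

def color_ssd(ref, tgt, x, y, d, win=3):
    """SSD over an RGB window between ref at (y, x) and tgt shifted by d.
    Caller must keep the window inside the image bounds."""
    r = ref[y-win:y+win+1, x-win:x+win+1, :]
    t = tgt[y-win:y+win+1, x-d-win:x-d+win+1, :]
    return np.sum((r.astype(float) - t.astype(float)) ** 2)

def depth_at(ref, targets, baselines, x, y, max_disp=16):
    """Sum SSD over several baselines; search a common inverse depth.
    Disparity scales linearly with baseline for a fronto-parallel setup."""
    best, best_cost = 0, np.inf
    for d in range(1, max_disp):
        cost = sum(color_ssd(ref, tgt, x, y, int(round(d * b / baselines[0])))
                   for tgt, b in zip(targets, baselines))
        if cost < best_cost:
            best, best_cost = d, cost
    return best  # disparity at the reference baseline; depth is proportional to 1/disparity

# Synthetic check: the target view is the reference shifted by 4 pixels
ref = np.random.rand(32, 64, 3)
print(depth_at(ref, [np.roll(ref, -4, axis=1)], [0.1], x=40, y=16))  # -> 4
```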

  10. Augmenting the access grid using augmented reality

    NASA Astrophysics Data System (ADS)

    Li, Ying

    2012-01-01

    The Access Grid (AG) targets an advanced collaboration environment with which multi-party groups of people at remote sites can collaborate over high-performance networks. However, the current AG still employs VIC (the Video Conferencing tool) to offer only plain video for remote communication, while most AG users expect to collaboratively reference and manipulate the 3D geometric models of grid services' results within the live video of an AG session. Augmented Reality (AR) techniques can overcome these deficiencies through their characteristic combination of virtual and real content, real-time interaction and 3D registration, so it is natural for the AG to utilize AR to better support the advanced collaboration environment. This paper introduces an effort to augment the AG by adding support for AR capability, encapsulated in the node service infrastructure and named the Augmented Reality Service (ARS). The ARS can merge the 3D geometric models of grid services' results and the real video scene of the AG into one AR environment, and provide the opportunity for distributed AG users to participate interactively and collaboratively in the AR environment with a better experience.

  11. Socio-Linguistic Factors and Gender Mapping Across Real and Virtual World Cultures

    DTIC Science & Technology

    2012-07-25

    multiplayer online games and other virtual world environments. Which in-game features... This study examines a large corpus of online gaming chat and avatar names to... chat interactions in online gaming environments. In addition, we study the relationship...

  12. Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality

    PubMed Central

    Cushman, Laura A.; Stein, Karen; Duffy, Charles J.

    2008-01-01

    Background: Older adults get lost, in many cases because of recognized or incipient Alzheimer disease (AD). In either case, getting lost can be a threat to individual and public safety, as well as to personal autonomy and quality of life. Here we compare our previously described real-world navigation test with a virtual reality (VR) version simulating the same navigational environment. Methods: Quantifying real-world navigational performance is difficult and time-consuming. VR testing is a promising alternative, but it has not been compared with closely corresponding real-world testing in aging and AD. We have studied navigation using both real-world and virtual environments in the same subjects: young normal controls (YNCs, n = 35), older normal controls (ONCs, n = 26), patients with mild cognitive impairment (MCI, n = 12), and patients with early AD (EAD, n = 14). Results: We found close correlations between real-world and virtual navigational deficits that increased across groups from YNC to ONC, to MCI, and to EAD. Analyses of subtest performance showed similar profiles of impairment in real-world and virtual testing in all four subject groups. The ONC, MCI, and EAD subjects all showed greatest difficulty in self-orientation and scene localization tests. MCI and EAD patients also showed impaired verbal recall about both test environments. Conclusions: Virtual environment testing provides a valid assessment of navigational skills. Aging and Alzheimer disease (AD) share the same patterns of difficulty in associating visual scenes and locations, which is complicated in AD by the accompanying loss of verbally mediated navigational capacities. We conclude that virtual navigation testing reveals deficits in aging and AD that are associated with potentially grave risks to our patients and the community. GLOSSARY AD = Alzheimer disease; EAD = early Alzheimer disease; MCI = mild cognitive impairment; MMSE = Mini-Mental State Examination; ONC = older normal control; std. wt. = standardized weight; THSD = Tukey honestly significant difference; VR = virtual reality; YNC = young normal control. PMID:18794491

  13. [Design and development of an emulator for a trapezoidal permanent-magnet synchronous machine]

    NASA Astrophysics Data System (ADS)

    Lessard, Francois

    The development of technology inevitably confronts engineers with greater system complexity. Over time, tools are often developed in parallel with the main systems to ensure their sustainability. The work presented in this document provides a new tool for testing motor drives. In general, this project concerns active loads: complex dynamic loads emulated electronically with a static converter. Specifically, this document proposes and implements a system whose purpose is to recreate the behaviour of a trapezoidal permanent-magnet synchronous machine. The ultimate goal is to connect a motor drive to the three terminals of the motor emulator, as one would with a real motor. The emulator's response to the disturbances imposed by the motor drive is then ideally identical to that of a real motor. The motor emulator gives a test bench significant versatility, because the electrical and mechanical parameters of the application can be easily modified. The work is divided into two main parts: the static converter and the real-time simulation. Together, these two entities form a PHIL (Power Hardware-in-the-Loop) real-time simulation. The static converter enables the exchange of real power between the motor drive and the real-time simulation. The latter gives the application the intelligence needed to interact with the motor drive such that the desired behaviour is recreated. The main partner of this project, Opal-RT, supports this development. Keywords: virtual machine, PHIL, real-time simulation, electronic load
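
    A minimal sketch of the per-step model such an emulator's real-time simulation might integrate is given below; all parameter values and the solver choice are illustrative assumptions, not Opal-RT's implementation:

```python
# Trapezoidal PMSM model, one explicit-Euler step per simulation tick.
import numpy as np

R, L, Ke, J, B, P = 0.5, 1e-3, 0.05, 1e-4, 1e-5, 4   # illustrative constants

def trap(theta):
    """Normalized trapezoidal back-EMF shape over one electrical cycle."""
    s = (theta % (2 * np.pi)) / (np.pi / 3)   # six 60-degree sectors
    if s < 1:  return -1.0 + 2.0 * s          # rise -1 -> +1
    if s < 3:  return 1.0                     # flat +1 for 120 degrees
    if s < 4:  return 1.0 - 2.0 * (s - 3)     # fall +1 -> -1
    return -1.0                               # flat -1 for 120 degrees

def step(i, omega, theta_e, v, load, dt=1e-5):
    """One integration step; i and v are 3-phase arrays (120 deg apart)."""
    shapes = np.array([trap(theta_e - k * 2 * np.pi / 3) for k in range(3)])
    e = Ke * omega * shapes                   # per-phase back-EMF
    di = (v - R * i - e) / L                  # electrical dynamics
    torque = Ke * float(np.dot(shapes, i))    # electromagnetic torque
    domega = (torque - load - B * omega) / J  # mechanical dynamics
    return i + di * dt, omega + domega * dt, theta_e + (P / 2) * omega * dt

# One step driven by a stiff two-phase voltage while spinning at 100 rad/s
print(step(np.zeros(3), 100.0, 0.0, np.array([12.0, -12.0, 0.0]), load=0.0))
```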

  14. The photoelectric effect and study of the diffraction of light: Two new experiments in UNILabs virtual and remote laboratories network

    NASA Astrophysics Data System (ADS)

    Pedro Sánchez, Juan; Sáenz, Jacobo; de la Torre, Luis; Carreras, Carmen; Yuste, Manuel; Heradio, Rubén; Dormido, Sebastián

    2016-05-01

    This work describes two experiments: "study of the diffraction of light: Fraunhofer approximation" and "the photoelectric effect". Each of them includes a virtual, simulated version of the experiment as well as a real one that can be operated remotely. These virtual and remote labs (built using Easy Java(script) Simulations) are integrated into UNILabs, a network of online interactive laboratories based on the free Moodle Learning Management System. In this web environment, students can find not only the virtual and remote labs but also manuals with the related theory, a description of the user interface of each application, and so on.

  15. Study on the Effectiveness of Virtual Reality Game-Based Training on Balance and Functional Performance in Individuals with Paraplegia

    PubMed Central

    Khurana, Meetika; Walia, Shefali

    2017-01-01

    Objective: To determine whether there is any difference between virtual reality game-based balance training and real-world task-specific balance training in improving sitting balance and functional performance in individuals with paraplegia. Methods: The study was a pretest–posttest experimental design. Thirty participants (28 males, 2 females) with traumatic spinal cord injury were randomly assigned to two groups (A and B). The levels of spinal injury of the participants were between T6 and T12. Virtual reality game-based balance training and real-world task-specific balance training were used as the interventions in groups A and B, respectively. The total duration of the intervention was 4 weeks, with a frequency of 5 times a week; each training session lasted 45 minutes. The outcome measures were the modified Functional Reach Test (mFRT), the t-shirt test, and the self-care component of the Spinal Cord Independence Measure-III (SCIM-III). Results: There was a significant difference for time (p = .001) and the Time × Group effect (p = .001) in mFRT scores, the group effect (p = .05) in t-shirt test scores, and the time effect (p = .001) in the self-care component of the SCIM-III. Conclusions: Virtual reality game-based training is better than real-world task-specific balance training at improving balance and functional performance in individuals with paraplegia. PMID:29339902

  16. Study on the Effectiveness of Virtual Reality Game-Based Training on Balance and Functional Performance in Individuals with Paraplegia.

    PubMed

    Khurana, Meetika; Walia, Shefali; Noohu, Majumi M

    2017-01-01

    Objective: To determine whether there is any difference between virtual reality game-based balance training and real-world task-specific balance training in improving sitting balance and functional performance in individuals with paraplegia. Methods: The study was a pretest–posttest experimental design. Thirty participants (28 males, 2 females) with traumatic spinal cord injury were randomly assigned to two groups (A and B). The levels of spinal injury of the participants were between T6 and T12. Virtual reality game-based balance training and real-world task-specific balance training were used as the interventions in groups A and B, respectively. The total duration of the intervention was 4 weeks, with a frequency of 5 times a week; each training session lasted 45 minutes. The outcome measures were the modified Functional Reach Test (mFRT), the t-shirt test, and the self-care component of the Spinal Cord Independence Measure-III (SCIM-III). Results: There was a significant difference for time (p = .001) and the Time × Group effect (p = .001) in mFRT scores, the group effect (p = .05) in t-shirt test scores, and the time effect (p = .001) in the self-care component of the SCIM-III. Conclusions: Virtual reality game-based training is better than real-world task-specific balance training at improving balance and functional performance in individuals with paraplegia.

  17. The Modeling of Virtual Environment Distance Education

    NASA Astrophysics Data System (ADS)

    Xueqin, Chang

    This research presented a virtual environment that integrates, within a virtual mockup, the services available on a university campus, enabling communication between students and teachers at different physical locations. Advantages of this system include remote access to a variety of services and educational tools, and the representation of real structures and landscapes in an interactive 3D model that aids the localization of services and preserves the administrative organization of the university. To that end, the system implements access control for users and an interface that allows the use of existing educational equipment and resources not originally designed for distance education.

  18. Modeling infectious diseases dissemination through online role-playing games.

    PubMed

    Balicer, Ran D

    2007-03-01

    As mathematical modeling of infectious diseases becomes increasingly important for developing public health policies, a novel platform for such studies might be considered. Millions of people worldwide play interactive online role-playing games, forming complex and rich networks among their virtual characters. An unexpected outbreak of an infective communicable disease (unplanned by the game creators) recently occurred in this virtual world. This outbreak holds surprising similarities to real-world epidemics. It is possible that these virtual environments could serve as a platform for studying the dissemination of infectious diseases, and as a testing ground for novel interventions to control emerging communicable diseases.

  19. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Hatziminaoglou, Evanthia; Chéreau, Fabien

    2009-03-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility (SAF) developed in the Virtual Observatory Project Office. VirGO enables astronomers to discover and select data easily from millions of observations in a visual and intuitive way. It allows real-time access and the graphical display of a large number of observations by showing instrumental footprints and image previews, as well as their selection and filtering for subsequent download from the ESO SAF web interface. It also permits the loading of external FITS files or VOTables, as well as the superposition of Digitized Sky Survey images to be used as background. All data interfaces are based on Virtual Observatory (VO) standards that allow access to images and spectra from external data centres, and interaction with the ESO SAF web interface or any other VO applications.

  20. A COTS-Based Replacement Strategy for Aging Avionics Computers

    DTIC Science & Technology

    2001-12-01

    Communication Control Unit. [Fragmentary diagram text recovered from the report: a COTS microprocessor running a real-time operating system hosts new native-code objects and threads alongside legacy functions within a virtual component environment, bridged by context-switch thunks.]

  1. [Virtual reality in neurosurgery].

    PubMed

    Tronnier, V M; Staubert, A; Bonsanto, M M; Wirtz, C R; Kunze, S

    2000-03-01

    Virtual reality enables users to immerse themselves in a virtual three-dimensional world and to interact in this world. The simulation is different from the kind in computer games, in which the viewer is active but acts in a nonrealistic world, or on the TV screen, where we are passively driven in an active world. In virtual reality elements look realistic, they change their characteristics and have almost real-world unpredictability. Virtual reality is not only implemented in gambling dens and the entertainment industry but also in manufacturing processes (cars, furniture etc.), military applications and medicine. Especially the last two areas are strongly correlated, because telemedicine or telesurgery was originated for military reasons to operate on war victims from a secure distance or to perform surgery on astronauts in an orbiting space station. In medicine and especially neurosurgery virtual-reality methods are used for education, surgical planning and simulation on a virtual patient.

  2. Hierarchical emotion calculation model for virtual human modelling - biomed 2010.

    PubMed

    Zhao, Yue; Wright, David

    2010-01-01

    This paper introduces a new emotion generation method for virtual human modelling. The method includes a novel hierarchical emotion structure, a group of emotion calculation equations and a simple heuristic decision-making mechanism, which together enable virtual humans to behave emotionally in real time according to their internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can use the information in virtual memory together with the emotion calculation equations to generate their own numerical emotion states within the hierarchical emotion structure. Those emotion states are important internal references for virtual humans to adopt appropriate behaviours, and also key cues for their decision making. A simple heuristics theory is introduced and integrated into the decision-making process in order to make virtual humans' decision making more like a real human's. A data interface connecting the emotion calculation and the decision-making structure has also been designed and simulated to test the method in the Virtools environment.
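
    The paper's actual equations are not given in this record, so the following is only a generic illustration of how a hierarchical emotion state could be aggregated from weighted internal/external factors with decay; every name and constant is an assumption:

```python
# Toy hierarchical emotion structure: leaves read stimuli, inner nodes
# aggregate their children with signed weights (illustration only).
import numpy as np

class EmotionNode:
    def __init__(self, name, children=(), weights=(), decay=0.9):
        self.name, self.children = name, list(children)
        self.weights = np.array(weights if weights else [1.0] * len(children))
        self.decay, self.value = decay, 0.0

    def update(self, stimuli):
        if not self.children:                       # leaf: decay plus new input
            self.value = self.decay * self.value + stimuli.get(self.name, 0.0)
        else:                                       # inner: weighted average
            kids = np.array([c.update(stimuli) for c in self.children])
            self.value = float(np.dot(self.weights, kids)) / np.abs(self.weights).sum()
        return self.value

joy, fear = EmotionNode("joy"), EmotionNode("fear")
mood = EmotionNode("mood", children=[joy, fear], weights=[1.0, -1.0])
print(mood.update({"joy": 0.8, "fear": 0.2}))   # positive mood -> approach behaviour
```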

  3. Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments

    PubMed Central

    Oh, Soo Youn; Bailenson, Jeremy; Krämer, Nicole; Li, Benjamin

    2016-01-01

    Previous studies demonstrated the positive effects of smiling on interpersonal outcomes. The present research examined if enhancing one’s smile in a virtual environment could lead to a more positive communication experience. In the current study, participants’ facial expressions were tracked and mapped on a digital avatar during a real-time dyadic conversation. The avatar’s smile was rendered such that it was either a slightly enhanced version or a veridical version of the participant’s actual smile. Linguistic analyses using the Linguistic Inquiry Word Count (LIWC) revealed that participants who communicated with each other via avatars that exhibited enhanced smiles used more positive words to describe their interaction experience compared to those who communicated via avatars that displayed smiling behavior reflecting the participants’ actual smiles. In addition, self-report measures showed that participants in the ‘enhanced smile’ condition felt more positive affect after the conversation and experienced stronger social presence compared to the ‘normal smile’ condition. These results are particularly striking when considering the fact that most participants (>90%) were unable to detect the smiling manipulation. This is the first study to demonstrate the positive effects of transforming unacquainted individuals’ actual smiling behavior during a real-time avatar-networked conversation. PMID:27603784

  4. Human-computer interface glove using flexible piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them to control a virtual hand for virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the angles processed from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors for four hand motions: fist clenching, pinching, touching, and grasping.
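
    Since a PVDF-type sensor responds roughly to the rate of deformation, one plausible way to recover a relative joint angle is a leaky integrator; this sketch is an assumption-laden illustration, not the authors' processing pipeline:

```python
# Leaky integration of a bending-rate signal into a relative joint angle.
import numpy as np

def angles_from_piezo(v, dt=0.01, gain=1.0, leak=0.995):
    """v: 1-D array of sensor voltages; returns the estimated joint angle.
    The leak term limits drift at the cost of slowly forgetting the pose."""
    theta = np.zeros_like(v, dtype=float)
    for k in range(1, len(v)):
        theta[k] = leak * theta[k - 1] + gain * v[k] * dt
    return theta

# Example: a burst of positive voltage (finger flexing), then silence
v = np.concatenate([np.ones(50), np.zeros(100)])
print(angles_from_piezo(v)[[49, 149]])  # angle rises, then slowly leaks back
```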

  5. Distance underestimation in virtual space is sensitive to gender but not activity-passivity or mode of interaction.

    PubMed

    Foreman, Nigel; Sandamas, George; Newson, David

    2004-08-01

    Four groups of undergraduates (half of each gender) experienced a movement along a corridor containing three distinctive objects, in a virtual environment (VE) with wide-screen projection. One group simulated walking along the virtual corridor using a proprietary step-exercise device. A second group moved along the corridor in conventional flying mode, depressing a keyboard key to initiate continuous forward motion. Two further groups observed the walking and flying participants, by viewing their progress on the screen. Participants then had to walk along a real equivalent but empty corridor, and indicate the positions of the three objects. All groups underestimated distances in the real corridor, the greatest underestimates occurring for the middle distance object. Males' underestimations were significantly lower than females' at all distances. However, there was no difference between the active participants and passive observers, nor between walking and flying conditions.

  6. Using a virtual world for robot planning

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian

    2012-06-01

    We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
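
    The following sketch reconstructs the idea behind the Match-Mediated Difference component (my illustration, not the authors' code): tile-wise comparison of a real camera frame with the corresponding virtual-camera render, flagging large residuals so Soar/RS can re-synchronize the virtual world:

```python
# Tile-wise divergence monitor between real and virtual camera frames.
import numpy as np

def divergence(real_frame, virtual_frame, tile=16, thresh=25.0):
    """Return (row, col) tiles whose mean absolute difference exceeds
    thresh; both frames are HxW grayscale arrays of the same size."""
    h, w = real_frame.shape
    flags = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            a = real_frame[r:r+tile, c:c+tile].astype(float)
            b = virtual_frame[r:r+tile, c:c+tile].astype(float)
            if np.abs(a - b).mean() > thresh:
                flags.append((r, c))        # e.g. a newly appeared object
    return flags

real = np.zeros((64, 64)); real[20:36, 20:36] = 255   # "new object" appears
print(divergence(real, np.zeros((64, 64))))           # tiles to report upward
```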

  7. Time evolution, Lamb shift, and emission spectra of spontaneous emission of two identical atoms

    NASA Astrophysics Data System (ADS)

    Wang, Da-Wei; Li, Zheng-Hong; Zheng, Hang; Zhu, Shi-Yao

    2010-04-01

    A unitary transformation method is used to investigate the dynamic evolution of two multilevel atoms, in the basis of symmetric and antisymmetric states, with one atom initially prepared in the first excited state and the other in the ground state. The unitary transformation guarantees that our calculations are based on the ground state of the atom-field system, with the self-energy subtracted at the outset. The total Lamb shifts of the symmetric and antisymmetric states are divided into a transformed shift and a dynamic shift. The transformed shift is due to the emission and reabsorption of virtual photons, by a single atom (the nondynamic single-atom shift) and between the two atoms (the quasi-static shift). The dynamic shift is due to the emission and reabsorption of real photons, by a single atom (the dynamic single-atom shift) and between the two atoms (the dynamic interatomic shift). The emission and reabsorption of virtual and real photons between the two atoms results in the interatomic shift, which does not exist in the one-atom case. The spectra in the long-time limit are calculated. If the distance between the two atoms is shorter than or comparable to the wavelength, the strong coupling between the two atoms splits the spectrum into two peaks, one from the symmetric state and the other from the antisymmetric state. The origin of the red or blue shifts of the symmetric and antisymmetric states lies mainly in the negative or positive interaction energy between the two atoms. In the investigation of the short-time evolution, we find that the modification of the effective density of states by the interaction between the two atoms can modulate the quantum Zeno and quantum anti-Zeno effects in the decays of the symmetric and antisymmetric states.
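
    For reference, the symmetric and antisymmetric states used in such two-atom treatments are the standard ones below; the paper's detailed shift expressions are not reproduced here:

```latex
\begin{align*}
|s\rangle &= \tfrac{1}{\sqrt{2}}\bigl(|e_1 g_2\rangle + |g_1 e_2\rangle\bigr), &
|a\rangle &= \tfrac{1}{\sqrt{2}}\bigl(|e_1 g_2\rangle - |g_1 e_2\rangle\bigr),
\end{align*}
```

    so the initial state $|e_1 g_2\rangle = (|s\rangle + |a\rangle)/\sqrt{2}$ decays through two channels with distinct shifts and decay rates.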

  8. Interreality: The Experiential Use of Technology in the Treatment of Obesity

    PubMed Central

    G, Riva; B.K, Wiederhold; F, Mantovani; A, Gaggioli

    2011-01-01

    For many of us, obesity is the outcome of an energy imbalance: more energy input than expenditure. However, our waistlines are growing in spite of the huge number of diets and fat-free/low-calorie products available to cope with this issue. Even when we are able to reduce our waistlines, maintaining the new size is very difficult: in the year after the end of a nutritional and/or behavioral treatment, obese persons typically regain from 30% to 50% of their initial losses. A possible strategy for improving the treatment of obesity is the use of advanced information technologies. In the past, different technologies (the internet, virtual reality, mobile phones) have shown promising effects in producing a healthy lifestyle in obese patients. Here we suggest that a new technological paradigm, Interreality, which integrates assessment and treatment within a hybrid experiential environment including both virtual and real worlds, has the potential to improve the clinical outcome of obesity treatments. The potential advantages offered by this approach are: (a) an extended sense of presence: Interreality uses advanced simulations (virtual experiences) to transform health guidelines and provisions into experiences; (b) an extended sense of community: Interreality uses virtual communities to provide users with targeted, but also anonymous if required, social support in both real and virtual worlds; (c) real-time feedback between physical and virtual worlds: Interreality uses bio and activity sensors and devices (smartphones) both to track the behavior/health status of the user in real time and to provide targeted suggestions and guidelines. This paper describes in detail the different technologies involved in the Interreality vision. To illustrate the concept of Interreality in practice, a clinical scenario is also presented and discussed: Daniela, a 35-year-old fast-food worker with obesity problems. PMID:21559236

  9. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional image is 3D volume data that varies with time. It is used to express deforming or moving objects in virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their long rendering times, or the pre-processing stage that must be repeated whenever the volume data change. Even when 3D texture mapping is used, repeatedly loading volumes is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting the coherence between the currently loaded volume and the previously loaded one, in order to achieve real-time rendering based on 3D texture mapping. The volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already resident in memory. Only when a brick fails the similarity test is it redefined as a 3D texture by OpenGL functions. The texture slices of each brick are then mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI Onyx. Real-time volume rendering based on 3D texture mapping is now available on PCs.
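
    The brick-reuse test can be sketched as follows; the upload function is a stand-in for the OpenGL 3D-texture definition path described above, and the brick size and threshold are assumptions:

```python
# Re-upload a brick as a 3D texture only when it differs from the cache.
import numpy as np

BRICK = 32
cache = {}   # (bx, by, bz) -> ndarray currently resident as a 3D texture

def upload_brick(key, data):
    print(f"re-defining 3D texture for brick {key}")   # placeholder for GL call

def update_volume(volume, eps=1e-3):
    """volume: 3-D array whose shape is a multiple of BRICK on each axis."""
    nz, ny, nx = (s // BRICK for s in volume.shape)
    for bz in range(nz):
        for by in range(ny):
            for bx in range(nx):
                sl = volume[bz*BRICK:(bz+1)*BRICK,
                            by*BRICK:(by+1)*BRICK,
                            bx*BRICK:(bx+1)*BRICK]
                key, old = (bx, by, bz), cache.get((bx, by, bz))
                # similarity test: skip the upload when the brick is unchanged
                if old is None or np.abs(sl - old).mean() > eps:
                    cache[key] = sl.copy()
                    upload_brick(key, sl)

update_volume(np.zeros((64, 64, 64)))   # frame 1: all 8 bricks uploaded
update_volume(np.zeros((64, 64, 64)))   # frame 2: nothing re-uploaded
```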

  10. Virtual reality cerebral aneurysm clipping simulation with real-time haptic feedback.

    PubMed

    Alaraj, Ali; Luciano, Cristian J; Bailey, Daniel P; Elsenousi, Abdussalam; Roitberg, Ben Z; Bernardo, Antonio; Banerjee, P Pat; Charbel, Fady T

    2015-03-01

    With the decrease in the number of cerebral aneurysms treated surgically and the increasing complexity of those that are, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. To develop and evaluate the usefulness of a new haptic-based virtual reality simulator in the training of neurosurgical residents, a real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the ImmersiveTouch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomographic angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-dimensional immersive virtual reality environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from 3 residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Residents felt that the simulation would be useful in preparing for real-life surgery. About two-thirds of the residents thought that the 3-dimensional immersive anatomic detail provided a close resemblance to real operative anatomy and accurate guidance for deciding on surgical approaches, and considered the simulation useful for preoperative surgical rehearsal and neurosurgical training. A third of the residents thought that the technology in its current form provided realistic haptic feedback for aneurysm surgery. Overall, neurosurgical residents found the novel immersive virtual reality simulator helpful in their training, especially because they do not get the chance to perform aneurysm clippings until late in their residency programs.

  11. WWW creates new interactive 3D graphics and collaborative environments for medical research and education.

    PubMed

    Samothrakis, S; Arvanitis, T N; Plataniotis, A; McNeill, M D; Lister, P F

    1997-11-01

    Virtual Reality Modelling Language (VRML) marks the start of a new era for medicine and the World Wide Web (WWW). Scientists can use VRML across the Internet to explore new three-dimensional (3D) worlds, share concepts and collaborate in a virtual environment. VRML enables the generation of virtual environments through the use of geometric, spatial and colour data structures to represent 3D objects and scenes. In medicine, researchers often want to interact with scientific data, which in several instances may also be dynamic (e.g. MRI data). Such data are often very large and difficult to visualise. A 3D graphical representation can make the information contained in such large data sets more understandable and easier to interpret. Fast networks and satellites can reliably transfer large data sets from computer to computer. This has led to the adoption of remote tele-working in many applications, including medical ones. Radiology experts, for example, can view and inspect in near real time a 3D data set acquired from a patient in another part of the world. Such technology is destined to improve the quality of life for many people. This paper introduces VRML (including some technical details) and discusses the advantages of VRML in application development.

  12. A 3-RSR Haptic Wearable Device for Rendering Fingertip Contact Forces.

    PubMed

    Leonardis, Daniele; Solazzi, Massimiliano; Bortone, Ilaria; Frisoli, Antonio

    2017-01-01

    A novel wearable haptic device for modulating contact forces at the fingertip is presented. Rendering of forces by skin deformation in three degrees of freedom (DoF), with contact/no-contact capability, was implemented through rigid parallel kinematics. The novel asymmetrical three revolute-spherical-revolute (3-RSR) configuration allows compact dimensions with minimal encumbrance of the hand workspace. The device was designed to render constant to low-frequency deformations of the fingerpad in three DoF, combining light weight with relatively high output forces. A differential method for solving the non-trivial inverse kinematics is proposed and implemented in real time for controlling the device. The first experiment evaluated discrimination of different fingerpad stretch directions in a group of five subjects. The second experiment, enrolling 19 subjects, evaluated the cutaneous feedback provided in a virtual pick-and-place manipulation task. The stiffness of the fingerpad plus device was measured and used to calibrate the physics of the virtual environment. The third experiment, with 10 subjects, evaluated interaction forces in a virtual lift-and-hold task. Although performance differed between the two manipulation experiments, overall results show that participants controlled interaction forces better when the cutaneous feedback was active, with significant differences between the visual and visuo-haptic conditions.
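
    Their exact 3-RSR formulation is not reproduced in this record; as a generic illustration of the class of differential (damped-least-squares) inverse-kinematics methods, consider:

```python
# One damped-least-squares IK step, exercised on a toy kinematic model.
import numpy as np

def ik_step(q, x_target, fk, jac, lam=0.1):
    """q: joint angles; fk(q) -> end-point position; jac(q) -> 3xN Jacobian."""
    e = x_target - fk(q)                       # task-space error
    J = jac(q)
    # damping avoids blow-up near singular configurations
    dq = J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(3), e)
    return q + dq

# Toy 3-joint mechanism in a plane (z fixed), just to exercise the solver
def fk(q):  return np.array([np.cos(q).sum(), np.sin(q).sum(), 0.0])
def jac(q): return np.array([-np.sin(q), np.cos(q), np.zeros(3)])

q = np.zeros(3)
for _ in range(50):
    q = ik_step(q, np.array([1.0, 1.5, 0.0]), fk, jac)
print(fk(q))   # converges toward the target position
```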

  13. A novel scene management technology for complex virtual battlefield environment

    NASA Astrophysics Data System (ADS)

    Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan

    2018-04-01

    Efficient scene management of a virtual environment is an important research topic in real-time computer visualization and has a decisive influence on rendering efficiency. Traditional scene management methods, however, are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene-graph technology and spatial data structure methods: following the idea of separating management from rendering, a loose object-oriented scene graph is established to manage the entity model data in the scene, while a performance-oriented quad-tree structure is created for traversal and rendering. In addition, a collaborative update relationship between the two structures is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.
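
    A toy version of the render-side quad-tree traversal might look like the following (the scene-graph side is omitted, and all names and the depth limit are mine):

```python
# Quad-tree over the ground plane: insert entities, then cull by view rect.
class QuadNode:
    def __init__(self, x, y, size, depth=0, max_depth=4):
        self.x, self.y, self.size = x, y, size
        self.entities = []
        half = size / 2
        self.children = [] if depth == max_depth else [
            QuadNode(x + dx * half, y + dy * half, half, depth + 1, max_depth)
            for dx in (0, 1) for dy in (0, 1)]

    def insert(self, ent, ex, ey):
        for c in self.children:
            if c.x <= ex < c.x + c.size and c.y <= ey < c.y + c.size:
                return c.insert(ent, ex, ey)
        self.entities.append(ent)            # leaf node reached

    def visible(self, cam_rect, out):
        cx, cy, cw, ch = cam_rect
        if self.x + self.size < cx or cx + cw < self.x or \
           self.y + self.size < cy or cy + ch < self.y:
            return out                       # node entirely outside the view
        out.extend(self.entities)
        for c in self.children:
            c.visible(cam_rect, out)
        return out

root = QuadNode(0, 0, 1024)
root.insert("tank", 100, 100); root.insert("radar", 900, 900)
print(root.visible((0, 0, 256, 256), []))   # -> ['tank']
```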

  14. Logistic Model to Support Service Modularity for the Promotion of Reusability in a Web Objects-Enabled IoT Environment.

    PubMed

    Kibria, Muhammad Golam; Ali, Sajjad; Jarwar, Muhammad Aslam; Kumar, Sunil; Chong, Ilyoung

    2017-09-22

    Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects across heterogeneous ontologies. When similar service requests occur at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched for and matched. Their reuse avoids duplication under similar circumstances, and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those of deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, and discusses similarity matching of both. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. The necessary functional components, and a flowchart of an algorithm for reusing composite virtual objects, are discussed. Also, to realize service modularity, a use-case scenario is studied and implemented.
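
    One simple way to realize the similarity matching discussed above is to score candidate virtual objects by the Jaccard similarity of their functionality sets; the field names and threshold below are assumptions, and a real broker would also weigh location and ontology compatibility:

```python
# Rank existing (composite) virtual objects by functional similarity.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

def find_reusable(request_functions, registry, threshold=0.75):
    """Return (score, id) pairs for objects similar enough to reuse."""
    scored = [(jaccard(request_functions, vo["functions"]), vo["id"])
              for vo in registry]
    return sorted([s for s in scored if s[0] >= threshold], reverse=True)

registry = [
    {"id": "cvo-temp-fan-1", "functions": {"sense-temperature", "actuate-fan"}},
    {"id": "vo-light-1", "functions": {"actuate-light"}},
]
print(find_reusable({"sense-temperature", "actuate-fan"}, registry))
# -> [(1.0, 'cvo-temp-fan-1')]: reuse it instead of instantiating a new CVO
```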

  15. Logistic Model to Support Service Modularity for the Promotion of Reusability in a Web Objects-Enabled IoT Environment

    PubMed Central

    Chong, Ilyoung

    2017-01-01

    Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects across heterogeneous ontologies. When similar service requests occur at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched for and matched. Their reuse avoids duplication under similar circumstances, and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those of deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, and discusses similarity matching of both. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. The necessary functional components, and a flowchart of an algorithm for reusing composite virtual objects, are discussed. Also, to realize service modularity, a use-case scenario is studied and implemented. PMID:28937590

  16. Classification and overview of research in real-time imaging

    NASA Astrophysics Data System (ADS)

    Sinha, Purnendu; Gorinsky, Sergey V.; Laplante, Phillip A.; Stoyenko, Alexander D.; Marlowe, Thomas J.

    1996-10-01

    Real-time imaging has application in areas such as multimedia, virtual reality, medical imaging, and remote sensing and control. Recently, the imaging community has witnessed a tremendous growth in research and new ideas in these areas. To lend structure to this growth, we outline a classification scheme and provide an overview of current research in real-time imaging. For convenience, we have categorized references by research area and application.

  17. X3DOM as Carrier of the Virtual Heritage

    NASA Astrophysics Data System (ADS)

    Jung, Y.; Behr, J.; Graf, H.

    2011-09-01

    Virtual Museums (VMs) are a new model of communication that aims at creating a personalized, immersive, and interactive way to enhance our understanding of the world around us. The term "VM" is shorthand for various types of digital creations. One de-facto standard carrier for communicating virtual heritage at the future-internet level is the browser front-end presenting the content and assets of museums. A major driving technology for the documentation and presentation of heritage-driven media is real-time 3D content, which demands new strategies for its inclusion on the web. 3D content must become a first-class web medium that can be created, modified, and shared in the same way as text, images, audio and video are handled on the web right now. A new integration model based on DOM integration into the web browser's architecture opens up new possibilities for declarative 3D content on the web and paves the way for new application scenarios for virtual heritage at the future-internet level. With special regard to the X3DOM project as an enabling technology for declarative 3D in HTML, this paper describes application scenarios and analyses the technological requirements for efficient presentation and manipulation of virtual heritage assets on the web.

  18. Real-time human-robot interaction underlying neurorobotic trust and intent recognition.

    PubMed

    Bray, Laurence C Jayet; Anumandla, Sridhar R; Thibeault, Corey M; Hoang, Roger V; Goodman, Philip H; Dascalu, Sergiu M; Bryant, Bobby D; Harris, Frederick C

    2012-08-01

    In the past three decades, interest in trust has grown significantly due to its important role in modern society. Everyday social experience involves "confidence" among people, which can be interpreted at the neurological level in the human brain. Recent studies suggest that oxytocin is a centrally acting neurotransmitter important in the development and alteration of trust. Its administration in humans seems to increase trust and reduce fear, in part by directly inhibiting the amygdala. However, the cerebral microcircuitry underlying this mechanism is still unknown. We propose the first biologically realistic model of trust, simulating spiking cortical neurons in a real-time human-robot interaction. At the physiological level, oxytocin cells were modeled with the triple apical dendrites characteristic of their structure in the paraventricular nucleus of the hypothalamus. As trust was established in the simulation, this architecture had a direct inhibitory effect on tonic amygdala firing, which resulted in a willingness to pass an object from the trustor (a virtual neurorobot) to the trustee (a human actor). Our software and hardware enhancements allowed the simulation of almost 100,000 neurons in real time and the incorporation of a sophisticated Gabor mechanism as a visual filter. The simulated brain was functional, and the robotic system was robust in that it trusted or distrusted a human actor based on movement imitation.

  19. Smart-Grid Backbone Network Real-Time Delay Reduction via Integer Programming.

    PubMed

    Pagadrai, Sasikanth; Yilmaz, Muhittin; Valluri, Pratyush

    2016-08-01

    This research investigates optimal delay-based virtual topology design using integer linear programming (ILP), applied to current backbone networks such as smart-grid real-time communication systems. A network traffic matrix is applied, and the corresponding virtual topology problem is solved using ILP formulations that include a network-delay-dependent objective function and lightpath routing, wavelength assignment, wavelength continuity, flow routing, and traffic-loss constraints. The proposed optimization approach provides an efficient, deterministic integration of intelligent sensing and decision making with network learning features for superior smart-grid operation, adaptively responding to time-varying network traffic data as well as operational constraints in order to maintain optimal virtual topologies. A representative optical backbone network is used to demonstrate the proposed optimization framework; simulation results indicate that superior smart-grid network performance can be achieved over commercial networks using integer programming.
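
    Schematically, this class of delay-minimizing virtual-topology ILP can be written as below; this is a paraphrase of the model family, not the paper's exact formulation:

```latex
% Schematic delay-minimizing virtual-topology ILP (illustrative form):
\begin{align}
\min \;& \sum_{s,d}\sum_{i,j} \lambda^{sd}_{ij}\, D_{ij}
  && \text{(aggregate delay over virtual links)} \\
\text{s.t.}\;& \sum_j \lambda^{sd}_{ij} - \sum_j \lambda^{sd}_{ji} =
  \begin{cases} t^{sd} & i = s \\ -t^{sd} & i = d \\ 0 & \text{otherwise} \end{cases}
  && \text{(flow conservation)} \\
& \sum_{s,d} \lambda^{sd}_{ij} \le C\, b_{ij}, \qquad b_{ij} \in \{0,1\}
  && \text{(traffic only on established lightpaths)}
\end{align}
```

    with further constraints assigning each established lightpath $b_{ij}$ a wavelength that is unique on every fiber and continuous along its physical route.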

  20. Borehole radar interferometry revisited

    USGS Publications Warehouse

    Liu, Lanbo; Ma, Chunguang; Lane, John W.; Joesten, Peter K.

    2014-01-01

    Single-hole, multi-offset borehole-radar reflection (SHMOR) is an effective technique for fracture detection. However, commercial radar system limitations hinder the acquisition of multi-offset reflection data in a single borehole. Transforming cross-hole transmission mode radar data to virtual single-hole, multi-offset reflection data using a wave interferometric virtual source (WIVS) approach has been proposed but not fully demonstrated. In this study, we compare WIVS-derived virtual single-hole, multi-offset reflection data to real SHMOR radar reflection profiles using cross-hole and single-hole radar data acquired in two boreholes located at the University of Connecticut (Storrs, CT USA). The field data results are similar to full-waveform numerical simulations developed for a two-borehole model. The reflection from the adjacent borehole is clearly imaged by both the real and WIVS-derived virtual reflection profiles. Reflector travel-time changes induced by deviation of the two boreholes from the vertical can also be observed on the real and virtual reflection profiles. The results of this study demonstrate the potential of the WIVS approach to improve bedrock fracture imaging for hydrogeological and petroleum reservoir development applications.
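
    The virtual-source idea can be illustrated as follows (my sketch, not the authors' processing flow): cross-correlating the transmission records at two receivers and stacking over sources turns one receiver into a virtual source for the other:

```python
# Interferometric virtual source by cross-correlation and source stacking.
import numpy as np

def virtual_trace(rec_a, rec_b):
    """rec_a, rec_b: (sources x samples) gathers at receivers A and B.
    The stacked cross-correlation approximates A shooting into B."""
    n = rec_a.shape[1]
    xc = np.zeros(2 * n - 1)
    for src in range(rec_a.shape[0]):
        xc += np.correlate(rec_b[src], rec_a[src], mode="full")
    return xc[n - 1:]          # keep causal lags (virtual travel times)

# Synthetic check: B records the same wavefield as A, delayed by 30 samples
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 256))
print(int(np.argmax(virtual_trace(w, np.roll(w, 30, axis=1)))))   # -> 30
```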

  1. Augmented Virtual Reality Laboratory

    NASA Technical Reports Server (NTRS)

    Tully-Hanson, Benjamin

    2015-01-01

    Until recently, real-time motion-tracking hardware has for the most part been too cost-prohibitive for research to take place regularly. With the release of the Microsoft Kinect in November 2010, researchers gained access to a device that, for a few hundred dollars, is capable of providing red-green-blue (RGB), depth, and skeleton data. It is also capable of tracking multiple people in real time. For its originally intended purpose, i.e. gaming with the Xbox 360 and eventually the Xbox One, it performs quite well. However, researchers soon found that although the sensor is versatile, it has limitations in real-world applications. I was brought aboard this summer by William Little in the Augmented Virtual Reality (AVR) Lab at Kennedy Space Center to find solutions to these limitations.

  2. Online virtual cases to teach resource stewardship.

    PubMed

    Zhou, Linghong Linda; Tait, Gordon; Sandhu, Sharron; Steiman, Amanda; Lake, Shirley

    2018-06-11

    As health care costs rise, medical education must focus on high-value clinical decision making. To teach and assess efficient resource use in rheumatology, online virtual interactive cases (VICs) were developed to simulate real patient encounters, increase price transparency, and reinforce cost consciousness. METHODS: The VIC modules were distributed to a sample of medical students and internal medicine residents, who were required to assess patients, order appropriate investigations, develop differential diagnoses and formulate management plans. Each action was associated with a time and a price, with the totals compared against ideals. Trainees were evaluated not only on their diagnosis and patient management, but also on the total time, cost and value of their selected workup. Trainee responses were tracked anonymously, with the opportunity to provide feedback at the end of each case. Seventeen medical trainees completed a total of 48 VIC modules. On average, trainees spent CAN $227.52 and 68 virtual minutes on each case, which was lower than expected. This may have been the result of a low management score of 52.4%, although on average 92.0% of participants in each case achieved the correct diagnosis. In addition, 85.7% felt more comfortable working up similar cases, and 57.1% believed that the modules increased their ability to order cost-conscious rheumatology investigations appropriately. Our initial assessment of the VIC rheumatology modules was positive, supporting their role as an effective tool in teaching an approach to rheumatology patients with an emphasis on resource stewardship. Future directions include the expansion of cases based on feedback, wider dissemination, and an evaluation of learning retention.

  3. Augmenting your own reality: student authoring of science-based augmented reality games.

    PubMed

    Klopfer, Eric; Sheldon, Josh

    2010-01-01

    Augmented Reality (AR) simulations superimpose a virtual overlay of data and interactions onto a real-world context. The simulation engine at the heart of this technology is built to afford elements of game play that support explorations and learning in students' natural context--their own community and surroundings. In one of the more recent games, TimeLab 2100, players role-play citizens of the early 22nd century when global climate change is out of control. Through AR, they see their community as it might be nearly one hundred years in the future. TimeLab and other similar AR games balance location specificity and portability--they are games that are tied to a location and games that are movable from place to place. Focusing students on developing their own AR games provides the best of both virtual and physical worlds: a more portable solution that deeply connects young people to their own surroundings. A series of initiatives has focused on technical and pedagogical solutions to supporting students authoring their own games.

  4. Virtual reality, augmented reality…I call it i-Reality.

    PubMed

    Grossmann, Rafael J

    2015-01-01

    The new term improved reality (i-Reality) is suggested to include virtual reality (VR) and augmented reality (AR). It refers to a real world that includes improved, enhanced and digitally created features that would offer an advantage on a particular occasion (i.e., a medical act). I-Reality may help us bridge the gap between the high demand for medical providers and the low supply of them by improving the interaction between providers and patients.

  5. Can Virtual Science Foster Real Skills? A Study of Inquiry Skills in a Virtual World

    ERIC Educational Resources Information Center

    Dodds, Heather E.

    2013-01-01

    Online education has grown into a part of the educational market answering the demand for learning at the learner's choice of time and place. Inquiry skills such as observing, questioning, collecting data, and devising fair experiments are an essential element of 21st-century online science coursework. Virtual immersive worlds such as Second Life…

  6. Virtual Laparoscopic Training System Based on VCH Model.

    PubMed

    Tang, Jiangzhou; Xu, Lang; He, Longjun; Guan, Songluan; Ming, Xing; Liu, Qian

    2017-04-01

    Laparoscopy has been widely used to perform abdominal surgeries, as patients experience lower post-surgical trauma, shorter convalescence, and less pain compared to traditional open surgery. Laparoscopic surgeries require precision; therefore, it is imperative to train surgeons to reduce operative risk. Laparoscopic simulators offer a highly realistic surgical environment by using virtual reality technology, which can improve the training efficiency of laparoscopic surgery. This paper presents a virtual laparoscopic surgery system. The proposed system utilizes the Visible Chinese Human (VCH) to construct the virtual models and simulates real-time deformation with both an improved mass-spring model and morph target animation. Meanwhile, an external device that integrates two five-degrees-of-freedom (5-DOF) manipulators was designed and built to interact with the virtual system. In addition, the proposed system provides a modular tool based on Unity3D to define the functions and features of instruments and organs, which helps users build surgical training scenarios quickly. The proposed virtual laparoscopic training system offers two training modes: skills training and surgery training. In the skills training mode, surgeons practice basic operations such as camera handling, needle driving, grasping, electrocoagulation, and suturing. In the surgery training mode, surgeons can practice cholecystectomy and removal of hepatic cysts with guided or non-guided teaching.
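    A mass-spring model of the kind named above advances vertex positions by accumulating Hooke spring forces along mesh edges. The paper's improved variant is not reproduced here, so the following is a generic explicit-Euler sketch with numpy arrays; all parameter values are up to the caller.

```python
import numpy as np

def mass_spring_step(x, v, edges, rest, k, c, m, dt):
    """One explicit-Euler step of a generic mass-spring surface model.

    x, v  : (n, 3) vertex positions and velocities
    edges : (e, 2) integer index pairs of connected vertices
    rest  : (e,) rest lengths; k, c: stiffness and damping; m: vertex mass
    """
    f = np.zeros_like(x)
    d = x[edges[:, 1]] - x[edges[:, 0]]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    u_hat = d / np.maximum(length, 1e-9)           # unit edge directions
    fs = k * (length - rest[:, None]) * u_hat      # Hooke's law per edge
    np.add.at(f, edges[:, 0], fs)                  # stretched spring pulls both
    np.add.at(f, edges[:, 1], -fs)                 # endpoints toward each other
    f -= c * v                                     # simple viscous damping
    v = v + dt * f / m
    x = x + dt * v
    return x, v
```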

  7. Virtual Reality as Innovative Approach to the Interior Designing

    NASA Astrophysics Data System (ADS)

    Kaleja, Pavol; Kozlovská, Mária

    2017-06-01

    We can observe significant potential for information and communication technologies (ICT) in the interior design field in the development of software and hardware virtual reality tools. ICT tools offer a realistic perception of a proposal from its initial idea (the study). Real-time visualization, supported by hardware tools such as the Oculus Rift and HTC Vive, provides free walkthrough and movement in a virtual interior, with the possibility of designing directly in the virtual space. As ICT software tools for designing in virtual reality improve, an ever more realistic virtual environment can be achieved. This contribution presents a proposal for an innovative approach to interior design in virtual reality, using the latest software and hardware ICT virtual reality technologies.

  8. Application of physics engines in virtual worlds

    NASA Astrophysics Data System (ADS)

    Norman, Mark; Taylor, Tim

    2002-03-01

    Dynamic virtual worlds potentially can provide a much richer and more enjoyable experience than static ones. To realize such worlds, three approaches are commonly used. The first of these, and still widely applied, involves importing traditional animations from a modeling system such as 3D Studio Max. This approach is therefore limited to predefined animation scripts or combinations/blends thereof. The second approach involves the integration of some specific-purpose simulation code, such as car dynamics, and is thus generally limited to one (class of) application(s). The third approach involves the use of general-purpose physics engines, which promise to enable a range of compelling dynamic virtual worlds and to considerably speed up development. By far the largest market today for real-time simulation is computer games, with revenues exceeding those of the movie industry. Traditionally, the simulation is produced by game developers in-house for specific titles. However, off-the-shelf middleware physics engines are now available for use in games and related domains. In this paper, we report on our experiences of using middleware physics engines to create a virtual world as an interactive experience, and an advanced scenario where artificial life techniques generate controllers for physically modeled characters.
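    Middleware physics engines are typically driven from a fixed-timestep loop decoupled from rendering. The sketch below shows that common pattern schematically; the `world` and `render` objects are placeholders, not any particular engine's API.

```python
import time

DT = 1.0 / 120.0   # physics step, independent of the rendering frame rate

def run(world, render, steps_per_frame_cap=8):
    """Fixed-timestep loop: accumulate wall-clock time, step in DT chunks."""
    acc, prev = 0.0, time.perf_counter()
    while world.running:                 # hypothetical engine flag
        now = time.perf_counter()
        acc += now - prev
        prev = now
        n = 0
        while acc >= DT and n < steps_per_frame_cap:
            world.step(DT)               # engine advances bodies and contacts
            acc -= DT
            n += 1                       # cap avoids a spiral after a stall
        render(world, alpha=acc / DT)    # interpolate between physics states
```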

  9. Sculpting 3D worlds with music: advanced texturing techniques

    NASA Astrophysics Data System (ADS)

    Greuel, Christian; Bolas, Mark T.; Bolas, Niko; McDowall, Ian E.

    1996-04-01

    Sound within the virtual environment is often considered to be secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied in order to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer-generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user-definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D `music videos' and interactive landscapes for live performance.
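    Extracting rhythm and frequency cues of this kind usually reduces to short-time spectral band energies that are then mapped onto object parameters. Below is a minimal numpy sketch of that analysis-to-mapping step; the band edges and gains are arbitrary, and `obj` is a hypothetical scene object, not the Soundsculpt API.

```python
import numpy as np

def band_energies(frame, rate, bands=((20, 200), (200, 2000), (2000, 8000))):
    """Short-time spectral energies in a few bands, used as control cues."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / rate)
    return [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

def update_object(obj, frame, rate):
    """One mapping decision: low band pulses the size, mid band shifts the hue."""
    low, mid, high = band_energies(frame, rate)
    obj.scale = 1.0 + 0.5 * np.tanh(low * 1e-6)    # gains are illustrative
    obj.hue = float(np.clip(mid / (low + mid + high + 1e-12), 0.0, 1.0))
```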

  10. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40-samples-per-second seismic and state-of-health data are recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem architectures using PxFS and QFS were found to be incompatible with our software architecture, so sharing of data between systems is accomplished via traditional NFS. Linux was found to be limited in terms of deployment flexibility and consistency between versions. Despite the experimentation with various technologies, our current virtualized architecture is stable to the point of an average daily real-time data return rate of 92.34% over the entire lifetime of the project to date.

  11. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    PubMed Central

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after an automated OSF-based lung segmentation was employed. The experiments exhibited a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254

  12. Secure environment for real-time tele-collaboration on virtual simulation of radiation treatment planning.

    PubMed

    Ntasis, Efthymios; Maniatis, Theofanis A; Nikita, Konstantina S

    2003-01-01

    A secure framework is described for real-time tele-collaboration on the virtual simulation procedure of radiation treatment planning. An integrated approach is followed, clustering the security issues faced by the system into organizational issues, security issues over the LAN, and security issues over the LAN-to-LAN connection. The design and implementation of the security services are performed according to the identified security requirements, along with the need for real-time communication between the collaborating health care professionals. A detailed description of the implementation is given, presenting a solution that can be directly tailored to other tele-collaboration services in the field of health care. The pilot study of the proposed security components proves the feasibility of the secure environment and its consistency with the high performance demands of the application.
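    The paper's actual security services are not reproduced here; as one standard way to protect such a LAN-to-LAN collaboration channel today, a TLS-wrapped socket is a reasonable stand-in. A minimal Python sketch of the server side, with placeholder certificate paths:

```python
import socket
import ssl

# Only authenticated, encrypted connections reach the collaboration service.
# "server.crt" / "server.key" are placeholders for real credentials.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()          # TLS handshake happens here
        data = conn.recv(4096)                 # e.g. a serialized plan update
```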

  13. Design of a 4-DOF MR haptic master for application to robot surgery: virtual environment work

    NASA Astrophysics Data System (ADS)

    Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok

    2014-09-01

    This paper presents the design and control performance of a novel type of 4-degrees-of-freedom (4-DOF) haptic master in cyberspace for a robot-assisted minimally invasive surgery (RMIS) application. By using a controllable magnetorheological (MR) fluid, the proposed haptic master can provide a feedback function for a surgical robot. Due to the difficulty of utilizing real human organs in the experiment, a cyberspace featuring the virtual object is constructed to evaluate the performance of the haptic master. In order to realize the cyberspace, a volumetric deformable object is represented by a shape-retaining chain-linked (S-chain) model, a fast volumetric model suitable for real-time applications. In the haptic architecture for an RMIS application, the desired torque and position induced from the virtual object in cyberspace and from the haptic master in real space are transferred to each other. In order to validate the superiority of the proposed master and volumetric model, a tracking control experiment is implemented with a nonhomogeneous volumetric cubic object to demonstrate that the proposed model can be utilized in a real-time haptic rendering architecture. A proportional-integral-derivative (PID) controller is then designed and empirically implemented to accomplish the desired torque trajectories. It has been verified from the experiment that tracking control performance for torque trajectories from the virtual slave can be successfully achieved.
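    The PID law mentioned above computes the control output from the torque error, its integral, and its derivative. A minimal discrete-time sketch; the gains and sample period below are illustrative, not the paper's tuned values.

```python
class PID:
    """Discrete PID controller tracking a desired torque trajectory."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# e.g. command the MR valve input from the torque error at a 1 kHz loop rate
pid = PID(kp=2.0, ki=0.5, kd=0.01, dt=1e-3)
u = pid.update(setpoint=0.8, measured=0.65)
```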

  14. Speech-Enabled Tools for Augmented Interaction in E-Learning Applications

    ERIC Educational Resources Information Center

    Selouani, Sid-Ahmed A.; Lê, Tang-Hô; Benahmed, Yacine; O'Shaughnessy, Douglas

    2008-01-01

    This article presents systems that use speech technology to emulate the one-on-one interaction a student can get from a virtual instructor. A web-based learning tool, the Learn IN Context (LINC+) system, designed and used in a real mixed-mode learning context for a computer (C++ language) programming course taught at the Université de Moncton…

  15. Social Networking Sites and Cyberdemocracy: A New Model of Dialogic Interactivity and Political Mobilization in the Case of South Korea

    ERIC Educational Resources Information Center

    Chun, Heasun

    2013-01-01

    The primary purpose of this study is to test whether dialogic interactions via SNSs can help revive political participation and help citizens to become involved in real-world politics. In a Tocquevillian sense, this study assumes a positive relationship between virtual associational life and political participation and therefore argues that SNSs…

  16. Virtual Labs and Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Boehler, Ted

    2006-12-01

    Coastline Community College has under development several virtual lab simulations and activities that range from biology, to language labs, to virtual discussion environments. Imagine a virtual world that students enter online by logging onto their computer from home or anywhere they have web access. Upon entering this world they select a personalized identity represented by a digitized character (avatar) that can freely move about, interact with the environment, and communicate with other characters. In these virtual worlds, buildings, gathering places, conference rooms, labs, science rooms, and a variety of other “real world” elements are evident. When characters move about and encounter other people (players), they may freely communicate. They can examine things, manipulate objects, read signs, watch video clips, hear sounds, and jump to other locations. Goals of critical thinking, social interaction, peer collaboration, group support, and enhanced learning can be achieved in surprising new ways with this innovative approach to peer-to-peer communication in a virtual discussion world. In this presentation, short demos will be given of several online learning environments including a virtual biology lab, a marine science module, a Spanish lab, and a virtual discussion world. Coastline College has been a leader in the development of distance learning and media-based education for nearly 30 years and currently offers courses through PDA, Internet, DVD, CD-ROM, TV, and videoconferencing technologies. Its distance learning program serves over 20,000 students every year.

  17. PRAIS: Distributed, real-time knowledge-based systems made easy

    NASA Technical Reports Server (NTRS)

    Goldstein, David G.

    1990-01-01

    This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.
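    The virtual-blackboard pattern described above can be reduced to broadcasting asserted facts to every engine's inbox. A toy Python sketch of that idea follows; this is an illustration of the pattern, not the PRAIS or CLIPS API.

```python
import queue

class Blackboard:
    """Facts asserted by any engine are broadcast to every engine's inbox."""

    def __init__(self):
        self.inboxes = []

    def register(self):
        q = queue.Queue()          # one inbox per rule engine
        self.inboxes.append(q)
        return q

    def assert_fact(self, fact):
        for q in self.inboxes:     # broadcast keeps all engines' facts in sync
            q.put(fact)

bb = Blackboard()
inbox = bb.register()              # a CLIPS-like engine would poll this inbox
bb.assert_fact(("sensor-7", "over-temperature"))
print(inbox.get())
```

    In a real deployment each engine would run in its own thread or process and drain its inbox between rule firings; the broadcast step is what makes the shared fact base behave like a single blackboard.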

  18. The virtues of virtual reality in exposure therapy.

    PubMed

    Gega, Lina

    2017-04-01

    Virtual reality can be more effective and less burdensome than real-life exposure. Optimal virtual reality delivery should incorporate in situ direct dialogues with a therapist, discourage safety behaviours, allow for a mismatch between virtual and real exposure tasks, and encourage self-directed real-life practice between and beyond virtual reality sessions. © The Royal College of Psychiatrists 2017.

  19. Our Experiment in Online, Real-Time Reference.

    ERIC Educational Resources Information Center

    Broughton, Kelly

    2001-01-01

    Describes experiences in providing real-time online reference services to users with remote Web access at the Bowling Green State University library. Discusses the decision making process first used to select HumanClick software to communicate via chat; and the selection of a fee-based customer service product, Virtual Reference Desk. (LRW)

  20. Rapid prototyping, astronaut training, and experiment control and supervision: distributed virtual worlds for COLUMBUS, the European Space Laboratory module

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen

    2002-02-01

    In 2004, the European COLUMBUS Module is to be attached to the International Space Station. On the way to the successful planning, deployment and operation of the module, computer-generated and animated models are being used to optimize performance. Under contract to the German Space Agency DLR, it has become IRF's task to provide a Projective Virtual Reality System: a virtual world, built after the planned layout of the COLUMBUS module, that lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise the possibility of distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Through the capability to share the virtual world, cooperative operations can be practiced easily, and trainers and trainees can work together more effectively in the shared virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online in order to interact realistically with the science-reference model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs of the science-reference model hardware; the measured data is then in turn fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data through metaphors and augmentations of the virtual world may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Currently, finishing touches are being put to the system. In November 2001 the virtual world shall be operational, so that besides the design and the key ideas, first experimental results can be presented.

  1. Game controller modification for fMRI hyperscanning experiments in a cooperative virtual reality environment.

    PubMed

    Trees, Jason; Snider, Joseph; Falahpour, Maryam; Guo, Nick; Lu, Kun; Johnson, Douglas C; Poizner, Howard; Liu, Thomas T

    2014-01-01

    Hyperscanning, an emerging technique in which data from multiple interacting subjects' brains are recorded simultaneously, has become an increasingly popular way to address complex topics, such as "theory of mind." However, most previous fMRI hyperscanning experiments have been limited to abstract social interactions (e.g. phone conversations). Our new method utilizes a virtual reality (VR) environment used for military training, Virtual Battlespace 2 (VBS2), to create realistic avatar-avatar interactions and cooperative tasks. To control the virtual avatar, subjects use an MRI-compatible PlayStation 3 game controller, modified by removing all extraneous metal components and replacing any necessary ones with 3D-printed plastic models. Operation of both scanners is initiated by a VBS2 plugin that syncs scanner time to the known time within the VR environment. Our modifications include:
    • Modification of the game controller to be MRI compatible.
    • Design of the VBS2 virtual environment for cooperative interactions.
    • Syncing two MRI machines for simultaneous recording.

  2. A three-dimensional virtual environment for modeling mechanical cardiopulmonary interactions.

    PubMed

    Kaye, J M; Primiano, F P; Metaxas, D N

    1998-06-01

    We have developed a real-time computer system for modeling mechanical physiological behavior in an interactive, 3-D virtual environment. Such an environment can be used to facilitate exploration of cardiopulmonary physiology, particularly in situations that are difficult to reproduce clinically. We integrate 3-D deformable body dynamics with new, formal models of (scalar) cardiorespiratory physiology, associating the scalar physiological variables and parameters with the corresponding 3-D anatomy. Our framework enables us to drive a high-dimensional system (the 3-D anatomical models) from one with fewer parameters (the scalar physiological models) because of the nature of the domain and our intended application. Our approach is amenable to modeling patient-specific circumstances in two ways. First, using CT scan data, we apply semi-automatic methods for extracting and reconstructing the anatomy to use in our simulations. Second, our scalar physiological models are defined in terms of clinically measurable, patient-specific parameters. This paper describes our approach, problems we have encountered and a sample of results showing normal breathing and acute effects of pneumothoraces.

  3. The CAVE (TM) automatic virtual environment: Characteristics and applications

    NASA Technical Reports Server (NTRS)

    Kenyon, Robert V.

    1995-01-01

    Virtual reality may best be defined as the wide-field presentation of computer-generated, multi-sensory information that tracks a user in real time. In addition to the more well-known modes of virtual reality -- head-mounted displays and boom-mounted displays -- the Electronic Visualization Laboratory at the University of Illinois at Chicago recently introduced a third mode: a room constructed from large screens on which the graphics are projected onto three walls and the floor. The CAVE is a multi-person, room-sized, high-resolution 3D video and audio environment. Graphics are rear projected in stereo onto three walls and the floor, and viewed with stereo glasses. As a viewer wearing a location sensor moves within its display boundaries, the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. The other viewers in the CAVE are like passengers in a bus, along for the ride. 'CAVE,' the name selected for the virtual reality theater, is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to 'The Simile of the Cave' found in Plato's 'Republic,' in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are his/her only basis for ideas of what real objects are. Rather than having evolved from video games or flight simulation, the CAVE has its motivation rooted in scientific visualization and the SIGGRAPH 92 Showcase effort. The CAVE was designed to be a useful tool for scientific visualization. The Showcase event was an experiment; the Showcase chair and committee advocated an environment for computational scientists to interactively present their research at a major professional conference in a one-to-many format on high-end workstations attached to large projection screens. The CAVE was developed as a 'virtual reality theater' with scientific content and projection that met the criteria of Showcase.

  4. Virtual geotechnical laboratory experiments using a simulator

    NASA Astrophysics Data System (ADS)

    Penumadu, Dayakar; Zhao, Rongda; Frost, David

    2000-04-01

    The details of a test simulator that provides a realistic environment for performing virtual laboratory experiments in soil mechanics are presented. A computer program, Geo-Sim, that can be used to perform virtual experiments and allows real-time observation of material response is described. The results of experiments, for a given set of input parameters, are obtained with the test simulator using well-trained artificial neural-network-based soil models for different soil types and stress paths. Multimedia capabilities are integrated into Geo-Sim, using software that links and controls a laser disc player with real-time parallel processing ability. During the simulation of a virtual experiment, relevant portions of the video image of a previously recorded test on an actual soil specimen are displayed along with the graphical presentation of the response predicted by the feedforward ANN model. The pilot simulator developed to date includes all aspects related to performing a triaxial test on cohesionless soil under undrained and drained conditions. The benefits of the test simulator are also presented.
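    At run time, such a trained feedforward soil model is just a forward pass from a strain state to a predicted stress response. A toy numpy sketch with random stand-in weights follows; the actual Geo-Sim models, their inputs and their architectures are not public here.

```python
import numpy as np

def mlp_predict(strain, W1, b1, W2, b2):
    """Feedforward surrogate: strain state -> predicted stress response."""
    h = np.tanh(strain @ W1 + b1)    # one hidden layer, tanh activation
    return h @ W2 + b2

# random weights stand in for a trained soil model (shapes are assumptions)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
print(mlp_predict(np.array([[0.01, 0.0, 0.005]]), W1, b1, W2, b2))
```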

  5. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    NASA Astrophysics Data System (ADS)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multimedia, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies provide data on multi-sensory interaction and audio-visual synchronization timing; these results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, comprising analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable to the validation of any sound synthesis technique.
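    The analysis/parameterization/synthesis cycle can be caricatured with an off-the-shelf wavelet transform: decompose a recorded sound, rescale the per-band detail coefficients, and reconstruct. The sketch below uses the PyWavelets package (pywt); the gains are arbitrary, and this is an illustration of the general idea, not the dissertation's patented method.

```python
import numpy as np
import pywt  # requires the PyWavelets package

def resynthesize(sound, gains, wavelet="db4", level=4):
    """Analyze a sound, rescale per-band detail coefficients, resynthesize."""
    coeffs = pywt.wavedec(sound, wavelet, level=level)
    # coeffs[0] is the coarse approximation; coeffs[1:] are detail bands
    coeffs = [coeffs[0]] + [g * c for g, c in zip(gains, coeffs[1:])]
    return pywt.waverec(coeffs, wavelet)

x = np.random.randn(4096)                 # stand-in for a recorded sample
y = resynthesize(x, gains=[0.5, 1.0, 2.0, 1.0])
```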

  6. Tele Hyper Virtuality

    NASA Technical Reports Server (NTRS)

    Terashima, Nobuyoshi

    1994-01-01

    In the future, remote images sent over communication lines will be reproduced in virtual reality (VR). This form of virtual telecommunications, which will allow observers to engage in an activity as though it were real, is the focus of considerable attention. Taken a step further, real and unreal objects will be placed in a single space to create an extremely realistic environment. Here, imaginary and other life forms, as well as people and animals in remote locations, will gather via telecommunication lines that create a common environment where life forms can work and interact together. Words, gestures, diagrams and other forms of communication will be used freely in performing work. Actual construction of a system based on this new concept will not only provide people with experiences that would have been impossible in the past, but will also inspire new applications in which people can function in environments where it would have been difficult, if not impossible, for them to function until now. This paper describes the Tele Hyper Virtuality concept: its definition, applications, the key technologies needed to accomplish it, and future prospects.

  7. Can you go the distance? Attending the virtual classroom.

    PubMed

    Bigony, Lorraine

    2010-01-01

    Distance learning via the World Wide Web offers convenience and flexibility. Online education connects nurses geographically in a manner that the traditional face-to-face learning environment lacks. Delivered in either a synchronous (real-time interaction) or asynchronous (delayed interaction) format, distance programs continue to provide nurses with choice, especially in the pursuit of advanced degrees. This article explores the pros and cons of distance education, in addition to the most popular platform used in distance learning today, the Blackboard Academic Suite. Characteristics of the potential enrollee that help ensure a successful distance education experience are also discussed. Distance nursing programs are here to stay. Although rigorous, the ease of accessibility makes distance learning a viable alternative for busy nurses.

  8. An Augmented Reality Nanomanipulator for Learning Nanophysics: The "NanoLearner" Platform

    NASA Astrophysics Data System (ADS)

    Marchi, Florence; Marliere, Sylvain; Florens, Jean Loup; Luciani, Annie; Chevrier, Joel

    The work focuses on the description and evaluation of an augmented reality nanomanipulator, the "NanoLearner" platform, used as an educational tool in practical nanophysics classes. Through virtual reality associated with multisensory renderings, students are immersed in the nanoworld, where they can interact in real time with a sample surface or an object using their senses of hearing, sight and touch. The role of each sensorial rendering in the understanding and control of the "approach-retract" interaction was determined thanks to statistical studies carried out during the practical sessions. Finally, we present two extensions of this innovative tool: investigating nano effects in living organisms, and allowing the general public to gain a natural understanding of nanophenomena.
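    The approach-retract interaction rendered to the students is dominated by attractive surface forces. A common classroom simplification is the sphere-plane van der Waals model, sketched below with textbook-order constants; this is an illustration, not the NanoLearner's actual force model.

```python
import numpy as np

def tip_force(d, H=1e-19, R=20e-9):
    """Sphere-plane van der Waals attraction on an AFM-like tip at distance d.

    H: Hamaker constant (J), R: tip radius (m). Values are textbook-order
    assumptions. Jump-to-contact occurs where the force gradient exceeds
    the cantilever stiffness.
    """
    return -H * R / (6.0 * d ** 2)

d = np.linspace(0.3e-9, 10e-9, 200)   # approach distances in metres
F = tip_force(d)                      # force curve to be rendered haptically
```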

  9. CROSS DRIVE: A New Interactive and Immersive Approach for Exploring 3D Time-Dependent Mars Atmospheric Data in Distributed Teams

    NASA Astrophysics Data System (ADS)

    Gerndt, Andreas M.; Engelke, Wito; Giuranna, Marco; Vandaele, Ann C.; Neary, Lori; Aoki, Shohei; Kasaba, Yasumasa; Garcia, Arturo; Fernando, Terrence; Roberts, David; CROSS DRIVE Team

    2016-10-01

    Atmospheric phenomena on Mars can be highly dynamic and show daily and seasonal variations. Planetary-scale wavelike disturbances, for example, are frequently observed in Mars' polar winter atmosphere. Possible sources of the wave activity have been suggested to be dynamical instabilities and quasi-stationary planetary waves, i.e. waves that arise predominantly via zonally asymmetric surface properties. For a comprehensive understanding of these phenomena, single layers of altitude have to be analyzed carefully, and relations between different atmospheric quantities and interactions with the surface of Mars have to be considered. The CROSS DRIVE project addresses the presentation of these data with a global view by means of virtual reality techniques. Complex orbiter data from spectrometers and observation data from Earth are combined with global circulation models and high-resolution terrain data and images available from Mars Express or MRO instruments. Scientists can interactively extract features from these datasets and change visualization parameters in real time in order to emphasize findings. Stereoscopic views allow for perception of the actual 3D behavior of the Martian atmosphere. A very important feature of the visualization system is the possibility to connect distributed workspaces together. This enables discussions between distributed working groups. The workspace can scale from virtual reality systems to expert desktop applications to web-based project portals. If multiple virtual environments are connected, the 3D position of each individual user is captured and used to depict the scientist as an avatar in the virtual world. The appearance of the avatar can also scale from simple annotations to complex avatars using tele-presence technology to reconstruct the users in 3D. Any change of the feature set (annotations, cutplanes, volume rendering, etc.) within the VR is immediately exchanged between all connected users, so that everybody is always aware of what is visible and being discussed. The discussion is supported by audio, and interaction is controlled by a moderator managing turn-taking presentations. A use-case execution proved successful and showed the potential of this immersive approach.

  10. Wavelets and Elman Neural Networks for monitoring environmental variables

    NASA Astrophysics Data System (ADS)

    Ciarlini, Patrizia; Maniscalco, Umberto

    2008-11-01

    An application in cultural heritage is introduced. Wavelet decomposition and Elman Neural Networks acting as virtual sensors are jointly used to simulate physical and chemical measurements at specific locations on a monument. Virtual sensors, suitably trained and tested, can substitute for real sensors in monitoring the monument's surface quality, whereas the real ones would have to be installed for a long time and at high cost. Applying the wavelet decomposition to the environmental data series allows the underlying temporal structure at low frequencies to be treated separately. Consequently, separate training of suitable Elman Neural Networks for the high/low-frequency components can be performed, improving the networks' convergence in learning time and the measurement accuracy in working time.

  11. Physically-Based Modelling and Real-Time Simulation of Fluids.

    NASA Astrophysics Data System (ADS)

    Chen, Jim Xiong

    1995-01-01

    Simulating physically realistic complex fluid behaviors presents an extremely challenging problem for computer graphics researchers. Such behaviors include the effects of driving boats through water, blending differently colored fluids, rain falling and flowing on a terrain, fluids interacting in a Distributed Interactive Simulation (DIS), etc. Such capabilities are useful in computer art, advertising, education, entertainment, and training. We present a new method for physically-based modeling and real-time simulation of fluids in computer graphics and dynamic virtual environments. By solving the 2D Navier -Stokes equations using a CFD method, we map the surface into 3D using the corresponding pressures in the fluid flow field. This achieves realistic real-time fluid surface behaviors by employing the physical governing laws of fluids but avoiding extensive 3D fluid dynamics computations. To complement the surface behaviors, we calculate fluid volume and external boundary changes separately to achieve full 3D general fluid flow. To simulate physical activities in a DIS, we introduce a mechanism which uses a uniform time scale proportional to the clock-time and variable time-slicing to synchronize physical models such as fluids in the networked environment. Our approach can simulate many different fluid behaviors by changing the internal or external boundary conditions. It can model different kinds of fluids by varying the Reynolds number. It can simulate objects moving or floating in fluids. It can also produce synchronized general fluid flows in a DIS. Our model can serve as a testbed to simulate many other fluid phenomena which have never been successfully modeled previously.
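    The key trick, solving the flow in 2D and lifting pressures into a displayable heightfield, can be sketched schematically. The Jacobi relaxation below is one standard way to obtain a pressure field on a unit grid; it is a simplification, not the paper's CFD solver, and the height scale is arbitrary.

```python
import numpy as np

def jacobi_pressure(div, p, iters=40):
    """Jacobi relaxation of the pressure Poisson equation on a unit grid."""
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                p[1:-1, 2:] + p[1:-1, :-2] - div[1:-1, 1:-1])
    return p

def height_from_pressure(p, scale=0.02):
    """Map the 2-D pressure field to a 3-D surface heightfield (schematic)."""
    return scale * (p - p.mean())

p = jacobi_pressure(np.zeros((64, 64)), np.zeros((64, 64)))
surface = height_from_pressure(p)
```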

  12. Evaluating the use of augmented reality to support undergraduate student learning in geomorphology

    NASA Astrophysics Data System (ADS)

    Ockelford, A.; Bullard, J. E.; Burton, E.; Hackney, C. R.

    2016-12-01

    Augmented Reality (AR) supports the understanding of complex phenomena by providing unique visual and interactive experiences that combine real and virtual information and help communicate abstract problems to learners. With AR, designers can superimpose virtual graphics over real objects, allowing users to interact with digital content through physical manipulation. One of the most significant pedagogic features of AR is that it provides an essentially student-centred and flexible space in which students can learn. By actively engaging participants using a design-thinking approach, this technology has the potential to provide a more productive and engaging learning environment than real or virtual learning environments alone. AR is increasingly being used in support of undergraduate learning and public engagement activities across engineering, medical and humanities disciplines but it is not widely used across the geosciences disciplines despite the obvious applicability. This paper presents preliminary results from a multi-institutional project which seeks to evaluate the benefits and challenges of using an augmented reality sand box to support undergraduate learning in geomorphology. The sandbox enables users to create and visualise topography. As the sand is sculpted, contours are projected onto the miniature landscape. By hovering a hand over the box, users can make it `rain' over the landscape and the water `flows' down in to rivers and valleys. At undergraduate level, the sand-box is an ideal focus for problem-solving exercises, for example exploring how geomorphology controls hydrological processes, how such processes can be altered and the subsequent impacts of the changes for environmental risk. It is particularly valuable for students who favour a visual or kinesthetic learning style. Results presented in this paper discuss how the sandbox provides a complex interactive environment that encourages communication, collaboration and co-design.
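    The projection side of such a sandbox amounts to quantizing the depth-camera heightfield into bands and drawing the band boundaries as contour lines. A minimal numpy sketch of that step; the contour interval and millimetre units are assumptions.

```python
import numpy as np

def contour_bands(height, interval=10.0):
    """Quantize a heightfield (assumed mm) into bands and mark contour pixels."""
    bands = np.floor(height / interval).astype(int)
    # a contour line is drawn wherever the band index changes between pixels
    edges = (np.diff(bands, axis=0, prepend=bands[:1]) != 0) | \
            (np.diff(bands, axis=1, prepend=bands[:, :1]) != 0)
    return bands, edges

height = np.random.rand(120, 160) * 100.0   # stand-in for a depth frame
bands, edges = contour_bands(height)        # 'edges' feeds the projector image
```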

  13. Real-time 3D human capture system for mixed-reality art and entertainment.

    PubMed

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

    A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine surrounding cameras. Looking through a head-mounted display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaboration system, we also describe an application of the system in art and entertainment, named Magic Land, a mixed reality environment where captured avatars of humans and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The results of the user study not only emphasize the benefits, but also address some issues of these technologies.
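    Shape-from-silhouette reconstruction keeps exactly those voxels that project inside every camera's silhouette. A compact numpy sketch of that carving test, assuming calibrated 3x4 projection matrices and points in front of each camera; this is a simplification of the paper's optimized algorithm.

```python
import numpy as np

def carve(voxels, silhouettes, projections):
    """Keep voxels whose projection falls inside every camera's silhouette.

    voxels:      (n, 3) candidate voxel centres.
    silhouettes: list of binary images (nonzero inside the subject).
    projections: list of 3x4 camera matrices (assumed calibrated).
    """
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for sil, P in zip(silhouettes, projections):
        uvw = hom @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        keep &= inside                                     # off-image: carve away
        keep[inside] &= sil[v[inside], u[inside]] > 0      # outside mask: carve
    return voxels[keep]
```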

  14. Virtually the ultimate research lab.

    PubMed

    Kulik, Alexander

    2018-04-26

    Virtual reality (VR) can serve as a viable platform for psychological research. The real world, with its many uncontrolled variables, can be masked to immerse participants in complex interactive environments that are under full experimental control. However, as with any other laboratory setting, these simulations are not perceived in the same way as reality, and they also afford different behaviour. We need a better understanding of these differences, which are often related to parameters of the technical setup, to support valid interpretations of experimental results. © 2018 The British Psychological Society.

  15. Satellite-Based Networks for U-Health & U-Learning

    NASA Astrophysics Data System (ADS)

    Graschew, G.; Roelofs, T. A.; Rakowsky, S.; Schlag, P. M.

    2008-08-01

    The use of modern Information and Communication Technologies (ICT) as enabling tools for healthcare services (eHealth) introduces new ways of creating ubiquitous access to high-level medical care for all, anytime and anywhere (uHealth). Satellite communication constitutes one of the most flexible methods of broadband communication, offering the high reliability and cost-effectiveness of connections that telemedicine communication requires. Global networks and the use of computers for educational purposes stimulate and support the development of virtual universities for e-learning. Real-time interactive applications in particular can play an important role in tailored and personalised services.

  16. The Selimiye Mosque of Edirne, Turkey - An Immersive and Interactive Virtual Reality Experience Using HTC Vive

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Büyüksalih, G.; Tschirschwitz, F.; Kan, T.; Deggim, S.; Kaya, Y.; Baskaraca, A. P.

    2017-05-01

    Recent advances in contemporary Virtual Reality (VR) technologies are going to have a significant impact on everyday life. Through VR it is possible to virtually explore a computer-generated environment as a different reality, and to immerse oneself in the past or in a virtual museum without leaving the current real-life situation. For the ultimate VR experience, the user should see only the virtual world. Currently, the user must wear a VR headset which fits around the head and over the eyes to visually separate themselves from the physical world. Via the headset, images are fed to the eyes through two small lenses. Cultural heritage monuments are ideally suited both for thorough multi-dimensional geometric documentation and for realistic interactive visualisation in immersive VR applications. Additionally, the game industry offers tools for interactive visualisation of objects to motivate users to virtually visit objects and places. In this paper, the generation of a virtual 3D model of the Selimiye mosque in the city of Edirne, Turkey, and its processing for data integration into the game engine Unity is presented. The project has been carried out as a co-operation between BİMTAŞ, a company of the Greater Municipality of Istanbul, Turkey, and the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg, Germany, to demonstrate an immersive and interactive visualisation using the new VR system HTC Vive. The workflow from data acquisition to VR visualisation, including the necessary programming for navigation, is described. Furthermore, the possible use (including simultaneous multiple-user environments) of such a VR visualisation for a CH monument is discussed in this contribution.

  17. Parametric Cognitive Modeling of Information and Computer Technology Usage by People with Aging- and Disability-Derived Functional Impairments

    PubMed Central

    García-Betances, Rebeca I.; Cabrera-Umpiérrez, María Fernanda; Ottaviano, Manuel; Pastorino, Matteo; Arredondo, María T.

    2016-01-01

    Despite the rapid evolution of Information and Computer Technology (ICT), and the growing recognition of the importance of the concept of universal design in all domains of daily living, mainstream ICT-based product designers and developers still work without any truly structured tools, guidance or support to effectively adapt their products and services to users' real needs. This paper presents the approach used to define and evaluate parametric cognitive models that describe the interaction and usage of ICT by people with aging- and disability-derived functional impairments. A multisensorial training platform was used to train, based on real user measurements in real conditions, the virtual parameterized user models that act as test-bed subjects during all stages of the design of disability-friendly ICT-based products. An analytical study was carried out to identify the relevant cognitive functions involved, together with their corresponding parameters related to aging- and disability-derived functional impairments. Evaluation of the final cognitive virtual user models in a real application has confirmed that the use of these models produces concrete, valuable benefits to the design and testing process of accessible ICT-based applications and services. Parameterization of cognitive virtual user models allows cognitive and perceptual aspects to be incorporated during the design process. PMID:26907296

  18. Evaluation of the Use of a Virtual Patient on Student Competence and Confidence in Performing Simulated Clinic Visits.

    PubMed

    Taglieri, Catherine A; Crosby, Steven J; Zimmerman, Kristin; Schneider, Tulip; Patel, Dhiren K

    2017-06-01

    Objective. To assess the effect of incorporating virtual patient activities in a pharmacy skills lab on student competence and confidence when conducting real-time comprehensive clinic visits with mock patients. Methods. Students were randomly assigned to a control or intervention group. The control group completed the clinic visit prior to completing the virtual patient activities. The intervention group completed the virtual patient activities prior to the clinic visit. Student proficiency was evaluated in the mock lab. All students completed additional exercises with the virtual patient and were subsequently assessed. Student impressions were assessed via a pre- and post-experience survey. Results. Student performance conducting clinic visits was higher in the intervention group compared to the control group. Overall student performance continued to improve in the subsequent module. There was no change in student confidence from pre- to post-experience. Student ratings of the ease of use and realism of the virtual patient simulation increased; however, student ratings of the helpfulness of the virtual patient decreased. Despite this, student performance improved. Conclusion. Virtual patient activities enhanced student performance during mock clinic visits. Students felt the virtual patient realistically simulated a real patient. Virtual patients may provide additional learning opportunities for students.

  19. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous technologies, this method obtains incomplete 3D structure from neighbouring video sources instead of recovering full 3D information from all video sources, so the computation is greatly reduced. This allows us to demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build the correspondence between frames captured by neighbouring cameras, we do not require camera calibration. Finally, our method can be used when the angle between neighbouring cameras is 25-30 degrees, which is much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports broadcasting, video conferencing, etc.
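    With matched feature points in two neighbouring views, the simplest form of viewpoint interpolation is a linear morph of the correspondences. A minimal sketch of that step; the full pixel-warping stage that follows it is omitted.

```python
import numpy as np

def morph_points(pts_left, pts_right, alpha):
    """Linearly morph matched feature points between two neighbouring views.

    alpha in [0, 1] places the virtual viewpoint between the two cameras;
    the image is then warped toward the interpolated point positions.
    """
    return (1.0 - alpha) * pts_left + alpha * pts_right

pts_left = np.array([[100.0, 50.0], [220.0, 80.0]])   # (x, y) matches, view A
pts_right = np.array([[130.0, 52.0], [250.0, 83.0]])  # same features, view B
print(morph_points(pts_left, pts_right, alpha=0.5))   # halfway virtual view
```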

  1. Dynamic concision for three-dimensional reconstruction of human organ built with virtual reality modelling language (VRML).

    PubMed

    Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun

    2005-07-01

    This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using the virtual reality modelling language (VRML) and JavaScript, with a focus on how to realize the dynamic concision of a 3D medical model with script and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be built with such high quality that they are better than those obtained from traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for human-computer interaction in a real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirements for medical observation of 3D reconstructions and have a promising prospect in the field of medical imaging.

  2. Dynamic shared state maintenance in distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Hamza-Lup, Felix George

    Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant's interaction into the loop through a wide range of electronic motion sensors and haptic devices. Part of the work presented here defines a novel criterion for the categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory, including 3D visualization applications using custom-built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems, and in light of current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system's real-time behavior and scalability. (Abstract shortened by UMI.)
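    A quaternion-based assessment of shared-state consistency typically reduces to the sign-invariant angular distance between the orientations held at two sites. A small numpy sketch of that metric, offered as an illustration of the general approach rather than the dissertation's exact formulation:

```python
import numpy as np

def quat_angle(q1, q2):
    """Angular difference (radians) between two unit quaternions.

    The absolute value makes the metric invariant to the q / -q ambiguity.
    """
    d = abs(np.dot(q1, q2))
    return 2.0 * np.arccos(np.clip(d, -1.0, 1.0))

# drift between two remote copies of a shared entity, sampled each frame
q_site_a = np.array([1.0, 0.0, 0.0, 0.0])
q_site_b = np.array([0.9992, 0.04, 0.0, 0.0])
print(quat_angle(q_site_a, q_site_b))
```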

  4. Virtual environments for scene of crime reconstruction and analysis

    NASA Astrophysics Data System (ADS)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the law enforcement and forensic communities.
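
    The paper's perceptually driven radiosity features are not spelled out here; the following is only the textbook gathering iteration B = E + rho * (F @ B), on an entirely hypothetical three-patch scene, as a reference point for the algorithm being extended:

      import numpy as np

      def solve_radiosity(E, rho, F, iters=50):
          """Jacobi-style gathering iteration for radiosity.

          E   -- emitted radiosity per patch
          rho -- diffuse reflectance per patch
          F   -- form-factor matrix; F[i, j] is the fraction of energy
                 leaving patch i that arrives at patch j
          """
          B = E.copy()
          for _ in range(iters):
              B = E + rho * (F @ B)
          return B

      # Tiny hypothetical scene: one emitter and two reflectors.
      E   = np.array([1.0, 0.0, 0.0])
      rho = np.array([0.0, 0.7, 0.5])
      F   = np.array([[0.0, 0.4, 0.4],
                      [0.4, 0.0, 0.3],
                      [0.4, 0.3, 0.0]])
      print(solve_radiosity(E, rho, F))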

  5. Seeing an Embodied Virtual Hand is Analgesic Contingent on Colocation.

    PubMed

    Nierula, Birgit; Martini, Matteo; Matamala-Gomez, Marta; Slater, Mel; Sanchez-Vives, Maria V

    2017-06-01

    Seeing one's own body has been reported to have analgesic properties. Analgesia has also been described when seeing an embodied virtual body colocated with the real one. However, there is controversy regarding whether this effect holds true when seeing an illusory-owned body part, such as during the rubber-hand illusion. A critical difference between these paradigms is the distance between the real and surrogate body part. Colocation of the real and surrogate arm is possible in an immersive virtual environment, but not during illusory ownership of a rubber arm. The present study aimed at testing whether the distance between a real and a virtual arm can explain such differences in terms of pain modulation. Using a paradigm of embodiment of a virtual body allowed us to evaluate heat pain thresholds at colocation and at a 30-cm distance between the real and the virtual arm. We observed a significantly higher heat pain threshold at colocation than at a 30-cm distance. The analgesic effects of seeing a virtual colocated arm were eliminated by increasing the distance between the real and the virtual arm, which explains why seeing an illusorily owned rubber arm does not consistently result in analgesia. These findings are relevant for the use of virtual reality in pain management. Looking at a virtual body has analgesic properties similar to looking at one's real body. We identify the importance of colocation between a real and a surrogate body for this to occur and thereby resolve a scientific controversy. This information is useful for exploiting immersive virtual reality in pain management. Copyright © 2017. Published by Elsevier Inc.

  6. NPSNET: Aural cues for virtual world immersion

    NASA Astrophysics Data System (ADS)

    Dahl, Leif A.

    1992-09-01

    NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound-server role, which involves sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information that is transmitted from the sound server across an RS-422 line to a serial-to-Musical Instrument Digital Interface (MIDI) converter. The MIDI converter, in turn, relays the sound byte to a sampler, an electronic recording and playback device. The sampler correlates the hexadecimal input to a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by involving multiple participant senses and removing external distractions. This thesis describes the incorporation of sound as aural cues and the enhancement they provide in the virtual simulation environment of NPSNET.
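
    A hedged sketch of the server-side translation step described above, assuming a hypothetical packet layout (entity id plus sound id) and the stated convention that the sampler maps each note number to a stored sound:

      import struct

      def packet_to_midi(packet, channel=0, velocity=100):
          """Translate a network sound packet into raw MIDI bytes.

          Hypothetical packet layout: entity id (uint16) + sound id
          (uint8), network byte order. The sound id doubles as the MIDI
          note number the sampler uses to look up the stored sound.
          """
          entity_id, sound_id = struct.unpack("!HB", packet)
          return bytes([0x90 | (channel & 0x0F),  # Note On, channel 0-15
                        sound_id & 0x7F,          # note number (7 bits)
                        velocity & 0x7F])         # velocity (7 bits)

      # A packet announcing sound 42 from entity 7:
      pkt = struct.pack("!HB", 7, 42)
      print(packet_to_midi(pkt).hex())            # -> "902a64"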

  7. A virtual therapeutic environment with user projective agents.

    PubMed

    Ookita, S Y; Tokuda, H

    2001-02-01

    Today, we see the Internet as more than just an information infrastructure: it is a socializing place and a safe outlet for inner feelings. Many personalities develop apart from real-world life because of its anonymous environment. Virtual-world interactions are bringing about new psychological illnesses ranging from net addiction to technostress, as well as online personality disorders and conflicts among the multiple identities that exist in the virtual world. Presently, there are no standard therapy models for the virtual environment, and there are very few therapeutic environments or tools made especially for virtual therapy. The goal of our research is to provide a therapy model and middleware tools for psychologists to use in virtual therapeutic environments. We propose the Cyber Therapy Model and Projective Agents, a tool used in the therapeutic environment. To evaluate the effectiveness of the tool, we created a prototype system, called the Virtual Group Counseling System, a therapeutic environment that allows the user to participate in group counseling through the eyes of their Projective Agent. Projective Agents inherit the user's personality traits. During virtual group counseling, the user's Projective Agent interacts and collaborates with others to promote the user's recovery and psychological growth. The prototype system provides a simulation environment in which psychologists can adjust parameters and customize their own simulations. The model and tool are a first attempt at simulating online personalities that may exist only in the virtual world, and at providing data for observation.

  8. Outstanding performance of configuration interaction singles and doubles using exact exchange Kohn-Sham orbitals in real-space numerical grid method

    NASA Astrophysics Data System (ADS)

    Lim, Jaechang; Choi, Sunghwan; Kim, Jaewook; Kim, Woo Youn

    2016-12-01

    To assess the performance of multi-configuration methods using exact-exchange Kohn-Sham (KS) orbitals, we implemented configuration interaction singles and doubles (CISD) in a real-space numerical grid code. We obtained KS orbitals with the exchange-only optimized effective potential under the Krieger-Li-Iafrate (KLI) approximation. Thanks to the distinctive features of KLI orbitals compared with Hartree-Fock (HF), such as bound virtual orbitals with compact shapes and orbital energy gaps similar to excitation energies, KLI-CISD for small molecules shows much faster convergence as a function of simulation box size and active space (i.e., the number of virtual orbitals) than HF-CISD. The former also gives more accurate excitation energies with a few dominant configurations than the latter, even with many more configurations. The systematic control of basis-set errors is straightforward in grid bases. Therefore, grid-based multi-configuration methods using exact-exchange KS orbitals provide a promising new way to make accurate electronic structure calculations.
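
    For reference, the standard CISD ansatz the work implements is (in LaTeX notation, with the reference determinant built here from KLI rather than HF orbitals, and the virtual indices a, b ranging over the active space whose size the abstract discusses):

      \[
        |\Psi_{\mathrm{CISD}}\rangle
          = c_0\,|\Phi_0\rangle
          + \sum_{i}^{\mathrm{occ}}\sum_{a}^{\mathrm{virt}} c_i^{a}\,|\Phi_i^{a}\rangle
          + \sum_{i<j}^{\mathrm{occ}}\sum_{a<b}^{\mathrm{virt}} c_{ij}^{ab}\,|\Phi_{ij}^{ab}\rangle
      \]

    Truncating the number of virtual orbitals limits the a, b sums, which is why compact, bound KLI virtual orbitals let the expansion converge with a smaller active space.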

  9. Digital Investigations of AN Archaeological Smart Point Cloud: a Real Time Web-Based Platform to Manage the Visualisation of Semantical Queries

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.

    2017-05-01

    While virtual copies of the real world tend to be created faster than ever through point clouds and their derivatives, making them workable for all professionals demands adapted tools that facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate, progressively aggregating expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible from any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure into which metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.

  10. VirGO: A Visual Browser for the ESO Science Archive Facility

    NASA Astrophysics Data System (ADS)

    Chéreau, F.

    2008-08-01

    VirGO is the next generation Visual Browser for the ESO Science Archive Facility developed by the Virtual Observatory (VO) Systems Department. It is a plug-in for the popular open source software Stellarium adding capabilities for browsing professional astronomical data. VirGO gives astronomers the possibility to easily discover and select data from millions of observations in a new visual and intuitive way. Its main feature is to perform real-time access and graphical display of a large number of observations by showing instrumental footprints and image previews, and to allow their selection and filtering for subsequent download from the ESO SAF web interface. It also allows the loading of external FITS files or VOTables, the superimposition of Digitized Sky Survey (DSS) background images, and the visualization of the sky in a 'real life' mode as seen from the main ESO sites. All data interfaces are based on Virtual Observatory standards which allow access to images and spectra from external data centers, and interaction with the ESO SAF web interface or any other VO applications supporting the PLASTIC messaging system. The main website for VirGO is at http://archive.eso.org/cms/virgo.

  11. Fused Reality for Enhanced Flight Test Capabilities

    NASA Technical Reports Server (NTRS)

    Bachelder, Ed; Klyde, David

    2011-01-01

    The feasibility of using Fused Reality-based simulation technology to enhance flight test capabilities has been investigated. In terms of relevancy to piloted evaluation, there remains no substitute for actual flight tests, even when considering the fidelity and effectiveness of modern ground-based simulators. In addition to real-world cueing (vestibular, visual, aural, environmental, etc.), flight tests provide subtle but key intangibles that cannot be duplicated in a ground-based simulator. There is, however, a cost to be paid for the benefits of flight in terms of budget, mission complexity, and safety, including the need for ground and control-room personnel, additional aircraft, etc. A Fused Reality(tm) (FR) Flight system was developed that allows a virtual environment to be integrated with the test aircraft so that tasks such as aerial refueling, formation flying, or approach and landing can be accomplished without additional aircraft resources or the risk of operating in close proximity to the ground or other aircraft. Furthermore, the dynamic motions of the simulated objects can be directly correlated with the responses of the test aircraft. The FR Flight system will allow real-time observation of, and manual interaction with, the cockpit environment that serves as a frame for the virtual out-the-window scene.

  12. Real-time path planning in dynamic virtual environments using multiagent navigation graphs.

    PubMed

    Sud, Avneesh; Andersen, Erik; Curtis, Sean; Lin, Ming C; Manocha, Dinesh

    2008-01-01

    We present a novel approach for efficient path planning and navigation of multiple virtual agents in complex dynamic scenes. We introduce a new data structure, Multi-agent Navigation Graph (MaNG), which is constructed using first- and second-order Voronoi diagrams. The MaNG is used to perform route planning and proximity computations for each agent in real time. Moreover, we use the path information and proximity relationships for local dynamics computation of each agent by extending a social force model [Helbing05]. We compute the MaNG using graphics hardware and present culling techniques to accelerate the computation. We also address undersampling issues and present techniques to improve the accuracy of our algorithm. Our algorithm is used for real-time multi-agent planning in pursuit-evasion, terrain exploration and crowd simulation scenarios consisting of hundreds of moving agents, each with a distinct goal.
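
    The MaNG data structure itself is beyond a short sketch, but the first-order Voronoi layer it builds on can be illustrated with SciPy: each Voronoi ridge separates two agents' cells, so the ridge list directly yields the proximity (neighbor) pairs; the second-order diagram used for route planning is omitted here, and the positions are hypothetical:

      import numpy as np
      from scipy.spatial import Voronoi

      # Hypothetical agent positions in the plane.
      agents = np.array([[0.0, 0.0], [2.0, 0.1], [1.0, 1.8],
                         [3.5, 2.0], [0.5, 3.0]])

      vor = Voronoi(agents)

      # ridge_points holds one (i, j) index pair per Voronoi ridge,
      # i.e. the agents whose cells share a boundary.
      for i, j in vor.ridge_points:
          d = np.linalg.norm(agents[i] - agents[j])
          print(f"agents {i} and {j} are Voronoi neighbors, distance {d:.2f}")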

  13. Enhancement of a Virtual Geology Field Guide of Georgia Initiative Using Gigapan© and ArcGIS Online's Story Map

    NASA Astrophysics Data System (ADS)

    Mobasher, K.; Turk, H. J.; Witherspoon, W.; Tate, L.; Hoynes, J.

    2015-12-01

    A GIS geology geodatabase of Georgia was developed using ArcGIS 10.2. The geodatabase for each physiographic province of Georgia contains fields designed to store information about geologic features. Using ArcGIS Online, a virtual field guide was created that provides an interactive learning experience, allowing students to photograph, describe, and map their observations in real time and to share them with the instructor and peers. Gigapan© facilitates visualizing geologic features at different scales, at high resolution, and in their larger surrounding context. The classroom applications of Gigapan© are nearly limitless when teaching the entire range of geologic structures, from showcasing the crystalline structures of minerals to understanding the geological processes responsible for the formation of an entire mountain range. The addition of a Story Map enhances the virtual experience by presenting a geo-located story-point narrative featuring images or videos. The virtual field component and supplementary Gigapan© imagery, coupled with the Story Map, added significantly to the realism of the virtual field guide, further allowing students to more fully understand geological concepts at various scales. These technologies piqued students' interest and facilitated their learning and their preparation to function more effectively in the geosciences by developing better observation habits and new skills. They also increased student engagement by sharing, enhancing, and transferring lecture information into actual field knowledge and experience. This enhanced interactive learning experience not only helps students understand and recognize geologic features in the field but also increased their collaboration, enthusiasm, and interest in the discipline; the increased interest and collaboration occurred as students assisted in populating the geologic geodatabase of Georgia.

  14. Can virtual science foster real skills? A study of inquiry skills in a virtual world

    NASA Astrophysics Data System (ADS)

    Dodds, Heather E.

    Online education has grown into a segment of the educational market by answering the demand for learning at the learner's choice of time and place. Inquiry skills such as observing, questioning, collecting data, and devising fair experiments are an essential element of 21st-century online science coursework. Virtual immersive worlds such as Second Life are being used as new frontiers in science education, yet there have been few studies looking specifically at science education in virtual worlds that fosters inquiry skills. This quantitative, quasi-experimental, nonrandomized control-group pretest-posttest study explored what effect a virtual-world experience had on inquiry skills as measured by the TIPS (Test of Integrated Process Skills) and TIPS II (Integrated Process Skills Test II) instruments. Participants between the ages of 18 and 65 were recruited from educator mailing lists and Second Life discussion boards and then sorted into an experimental group, which received instructions to use several displays on Mendelian genetics at the Genome Island location within Second Life, or a control group, which received text-based PDF documents of the same genetics course content. All participants, in the form of avatars, were experienced Second Life residents, to reduce any novelty effect. The study found a greater increase in inquiry skills in the experimental group, which interacted with a virtual world to learn science content (0.90 points), than in the control group, which was presented only with online text-based content (0.87 points). A mixed between-within ANOVA (analysis of variance) at an alpha level of .05 found no significant interaction between group and inquiry skills, F(1, 58) = .783, p = .380, partial eta squared = .013, suggesting no significant difference as a result of the virtual-world exercise. However, there is not enough evidence to conclude that there was no effect, because scores did increase more for the group that experienced the virtual-world exercise. This study adds to the growing body of knowledge about virtual worlds and inquiry skills, particularly with adult learners.
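
    A minimal sketch of the reported test, a mixed between-within ANOVA on pre/post scores, using the pingouin package (assumed available) and entirely hypothetical data; the study's actual scores are not reproduced here:

      import pandas as pd
      import pingouin as pg

      # Hypothetical pre/post inquiry-skill scores, long format:
      # three control and three virtual-world subjects.
      df = pd.DataFrame({
          "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
          "group":   ["ctrl"] * 6 + ["vw"] * 6,
          "time":    ["pre", "post"] * 6,
          "score":   [24, 25, 22, 23, 26, 27, 23, 24, 25, 26, 21, 22],
      })

      # The group x time interaction row corresponds to the study's
      # F(1, 58) test; np2 is partial eta squared.
      aov = pg.mixed_anova(data=df, dv="score", within="time",
                           between="group", subject="subject")
      print(aov[["Source", "F", "p-unc", "np2"]])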

  15. Key Success Factors of eLearning in Education: A Professional Development Model to Evaluate and Support eLearning

    ERIC Educational Resources Information Center

    FitzPatrick, Thaddeus

    2012-01-01

    Technology has changed the way that we live our lives. Interaction across continents has become a forefront of everyday engagement. With ongoing enhancements in technology, people are now able to communicate and learn in a virtual environment much as they do through real-world interaction. These improvements are shared in the field of education,…

  16. Using the PhysX engine for physics-based virtual surgery with force feedback.

    PubMed

    Maciel, Anderson; Halic, Tansel; Lu, Zhonghua; Nedel, Luciana P; De, Suvranu

    2009-09-01

    The development of modern surgical simulators is highly challenging, as they must support complex simulation environments. The demand for higher realism in such simulators has driven researchers to adopt physics-based models, which are computationally very demanding. This poses a major problem, since real-time interaction requires graphical updates at 30 Hz and a much higher rate of 1 kHz for force feedback (haptics). Recently, several physics engines have been developed that offer multi-physics simulation capabilities, including rigid and deformable bodies, cloth, and fluids. While such physics engines provide unique opportunities for the development of surgical simulators, their latencies, higher than what real-time graphics and haptics require, pose significant barriers to their use in interactive simulation environments. In this work, we propose solutions to this problem and demonstrate how a multimodal surgical simulation environment may be developed based on NVIDIA's PhysX physics library: models undergoing relatively low-frequency updates in PhysX can exist in an environment that demands much higher-frequency updates for haptics. We use a collision-handling layer to interface between the physical response provided by PhysX and the haptic rendering device, providing both real-time tissue response and force feedback. Our simulator integrates a bimanual haptic interface for force feedback and per-pixel shaders for graphical realism in real time. To demonstrate the effectiveness of our approach, we present the simulation of the laparoscopic adjustable gastric banding (LAGB) procedure as a case study. Developing complex and realistic surgical trainers with realistic organ geometries and tissue properties demands stable physics-based deformation methods, which are not always compatible with the interaction rates required for such trainers. We have shown that combining different modelling strategies for behaviour, collision, and graphics is possible and desirable. Such multimodal environments enable suitable rates for simulating the major steps of the LAGB procedure.
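
    One common way to reconcile the two update rates described above is a bridging layer that decouples the loops; a minimal threading sketch, with rates and force values hypothetical and plain functions standing in for PhysX callbacks and the haptic device API:

      import threading
      import time

      class ForceBridge:
          """Collision-handling layer between a ~60 Hz physics engine and
          a 1 kHz haptic loop: the haptic thread never blocks on the
          engine; it just reads the latest published contact force."""

          def __init__(self):
              self._lock = threading.Lock()
              self._force = (0.0, 0.0, 0.0)

          def publish(self, force):          # called from the physics thread
              with self._lock:
                  self._force = force

          def latest(self):                  # called at haptic rate
              with self._lock:
                  return self._force

      bridge = ForceBridge()

      def physics_loop():                    # stand-in for PhysX stepping
          while True:
              bridge.publish((0.0, -1.0, 0.0))  # hypothetical contact force
              time.sleep(1 / 60)             # ~60 Hz physics/graphics rate

      def haptic_loop(steps=1000):
          for _ in range(steps):
              fx, fy, fz = bridge.latest()   # would be sent to the device
              time.sleep(1 / 1000)           # 1 kHz force-feedback rate

      threading.Thread(target=physics_loop, daemon=True).start()
      haptic_loop()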

  17. Real-time surgery simulation of intracranial aneurysm clipping with patient-specific geometries and haptic feedback

    NASA Astrophysics Data System (ADS)

    Fenz, Wolfgang; Dirnberger, Johannes

    2015-03-01

    Providing suitable training for aspiring neurosurgeons is becoming more and more problematic. The increasing popularity of endovascular treatment of intracranial aneurysms leads to a lack of simple surgical situations for clipping operations, leaving mainly the complex cases, which present even experienced surgeons with a challenge. To alleviate this situation, we have developed a training simulator with haptic interaction that allows trainees to practice virtual clipping surgeries on real, patient-specific vessel geometries. By using specialized finite element method (FEM) algorithms (fast finite element method, matrix condensation) combined with GPU acceleration, we can achieve the frame rate necessary for smooth real-time interaction with the detailed models needed for a realistic simulation of the vessel-wall deformation caused by clamping with surgical clips. Vessel-wall geometries for typical training scenarios were obtained from 3D-reconstructed medical image data, while for the instruments (clipping forceps, various types of clips, suction tubes) we use models provided by the manufacturer Aesculap AG. Collisions between vessel and instruments are continuously detected and transformed into corresponding boundary conditions and feedback forces, calculated using a contact-plane method. After a training session, the achieved result can be assessed based on various criteria, including a simulation of the residual blood flow into the aneurysm. Rigid models of the surgical access and surrounding brain tissue, plus the coupling of a real forceps to the haptic input device, further increase the realism of the simulation.
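
    The "matrix condensation" named above is presumably the standard static (Guyan) condensation, sketched below on a toy stiffness matrix: interior degrees of freedom are eliminated offline so that the runtime contact solve stays small enough for haptic rates:

      import numpy as np

      def condense(K, contact):
          """Static (Guyan) condensation of a stiffness matrix:
          Kc = K_cc - K_ci @ inv(K_ii) @ K_ic, computed offline."""
          n = K.shape[0]
          interior = np.setdiff1d(np.arange(n), contact)
          K_cc = K[np.ix_(contact, contact)]
          K_ci = K[np.ix_(contact, interior)]
          K_ic = K[np.ix_(interior, contact)]
          K_ii = K[np.ix_(interior, interior)]
          return K_cc - K_ci @ np.linalg.solve(K_ii, K_ic)

      # Hypothetical 6-DOF toy stiffness matrix (SPD), 2 contact DOFs.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((6, 6))
      K = A @ A.T + 6 * np.eye(6)
      Kc = condense(K, np.array([0, 1]))

      # Real-time step: contact-DOF displacements from the clip forces.
      f_contact = np.array([0.0, 0.05])
      u_contact = np.linalg.solve(Kc, f_contact)
      print(u_contact)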

  18. A Real-Time Executive for Multiple-Computer Clusters.

    DTIC Science & Technology

    1984-12-01

    …in a real-time environment is tantamount to speed and efficiency. By effectively co-locating real-time sensors and related processing modules… of which there are two kinds: multicast group addresses (virtually any number of node groups can be assigned a group address so that they are all able…)

  19. Tracking dynamic team activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tambe, M.

    1996-12-31

    AI researchers are striving to build complex multi-agent worlds, with intended applications ranging from the RoboCup robotic soccer tournaments, to interactive virtual theatre, to large-scale real-world battlefield simulations. Agent tracking, monitoring other agents' actions and inferring their higher-level goals and intentions, is a central requirement in such worlds. While previous work has mostly focused on tracking individual agents, this paper goes beyond that by focusing on agent teams. Team tracking poses the challenge of tracking a team's joint goals and plans. Dynamic, real-time environments add to the challenge, as ambiguities have to be resolved in real time. The central hypothesis underlying the present work is that an explicit team-oriented perspective enables effective team tracking. This hypothesis is instantiated using the model-tracing technology employed in tracking individual agents: to track team activities, team models are put to service. Team models are a concrete application of the joint intentions framework and enable an agent to track team activities regardless of whether it is a collaborative participant or a non-participant in the team. To facilitate real-time ambiguity resolution with team models: (i) aspects of tracking are cast as constraint satisfaction problems to exploit constraint propagation techniques; and (ii) a cost-minimality criterion is applied to constrain the tracking search. Empirical results from two separate tasks in real-world, dynamic environments, one collaborative and one competitive, are provided.
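
    A toy sketch of the two ambiguity-resolution ideas named above: constraint propagation over candidate team plans, followed by a cost-minimality tie-break. Plans, actions, and costs are invented for illustration:

      # Hypothetical team plans; each maps to the member actions it can
      # produce and a cost used for tie-breaking.
      TEAM_PLANS = {
          "flank-left":  {"steps": {"move-west", "move-north"}, "cost": 3},
          "flank-right": {"steps": {"move-east", "move-north"}, "cost": 3},
          "retreat":     {"steps": {"move-south"},              "cost": 1},
      }

      def track(observations):
          candidates = set(TEAM_PLANS)
          for act in observations:             # constraint propagation:
              candidates = {p for p in candidates      # keep only plans
                            if act in TEAM_PLANS[p]["steps"]}  # fitting act
          # Resolve residual ambiguity with the cheapest consistent plan
          # (ties broken arbitrarily in this sketch).
          return min(candidates,
                     key=lambda p: TEAM_PLANS[p]["cost"], default=None)

      print(track(["move-north"]))               # still ambiguous: 2 flanks
      print(track(["move-north", "move-west"]))  # -> "flank-left"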

  20. Using virtual instruments to develop an actuator-based hardware-in-the-loop simulation test-bed for autopilot of unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Sun, Yun-Ping; Ju, Jiun-Yan; Liang, Yen-Chu

    2008-12-01

    Since unmanned aerial vehicles (UAVs) enable many innovative applications in scientific, civilian, and military fields, their development is growing rapidly every year. The on-board autopilot that reliably performs attitude and guidance control is a vital part of out-of-sight flight. However, the control law in an autopilot is designed according to a simplified plant model in which the dynamics of the real hardware are usually not taken into consideration. It is therefore necessary to develop a test-bed that includes real servos for real-time control experiments on prototype autopilots, so-called hardware-in-the-loop (HIL) simulation. In this paper, on the basis of the graphical application software LabVIEW, a real-time HIL simulation system is realized efficiently through the virtual-instrumentation approach. The proportional-integral-derivative (PID) controller in the autopilot's pitch-angle control loop is experimentally tuned by the classical Ziegler-Nichols rule and exhibits good transient and steady-state response in real-time HIL simulation. The results also clearly show the differences between numerical simulation and real-time HIL simulation. The effectiveness of HIL simulation for UAV autopilot design is thus confirmed.
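
    The classical Ziegler-Nichols rule referenced above sets Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8 from the ultimate gain Ku and the oscillation period Tu measured at the stability limit; a minimal sketch with hypothetical loop measurements:

      class PID:
          """Textbook PID controller in standard (Kp, Ti, Td) form."""

          def __init__(self, kp, ti, td):
              self.kp, self.ti, self.td = kp, ti, td
              self.integral = 0.0
              self.prev_err = 0.0

          def step(self, err, dt):
              self.integral += err * dt
              deriv = (err - self.prev_err) / dt
              self.prev_err = err
              return self.kp * (err + self.integral / self.ti
                                + self.td * deriv)

      def ziegler_nichols(ku, tu):
          """Classic Z-N PID tuning from the ultimate gain Ku and the
          sustained-oscillation period Tu."""
          return PID(kp=0.6 * ku, ti=0.5 * tu, td=0.125 * tu)

      # Hypothetical values measured in the HIL pitch-control loop:
      ctrl = ziegler_nichols(ku=4.0, tu=1.2)
      cmd = ctrl.step(err=0.1, dt=0.02)  # command for a 0.1 rad pitch error
      print(cmd)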
