Science.gov

Sample records for 3d immersive virtual

  1. Presence Pedagogy: Teaching and Learning in a 3D Virtual Immersive World

    ERIC Educational Resources Information Center

    Bronack, Stephen; Sanders, Robert; Cheney, Amelia; Riedl, Richard; Tashner, John; Matzen, Nita

    2008-01-01

    As the use of 3D immersive virtual worlds in higher education expands, it is important to examine which pedagogical approaches are most likely to bring about success. AET Zone, a 3D immersive virtual world in use for more than seven years, is one embodiment of pedagogical innovation that capitalizes on what virtual worlds have to offer to social…

  2. An Australian and New Zealand Scoping Study on the Use of 3D Immersive Virtual Worlds in Higher Education

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.; Carlson, Lauren; Gregory, Sue; Tynan, Belinda

    2011-01-01

    This article describes the research design of, and reports selected findings from, a scoping study aimed at examining current and planned applications of 3D immersive virtual worlds at higher education institutions across Australia and New Zealand. The scoping study is the first of its kind in the region, intended to parallel and complement a…

  3. Visuomotor learning in immersive 3D virtual reality in Parkinson's disease and in aging.

    PubMed

    Messier, Julie; Adamovich, Sergei; Jack, David; Hening, Wayne; Sage, Jacob; Poizner, Howard

    2007-05-01

    Successful adaptation to novel sensorimotor contexts critically depends on efficient sensory processing and integration mechanisms, particularly those required to combine visual and proprioceptive inputs. If the basal ganglia are a critical part of specialized circuits that adapt motor behavior to new sensorimotor contexts, then patients suffering from basal ganglia dysfunction, as in Parkinson's disease, should show sensorimotor learning impairments. However, this issue has been under-explored. We tested the ability of eight patients with Parkinson's disease (PD), off medication, ten healthy elderly subjects and ten healthy young adults to reach to a remembered 3D location presented in an immersive virtual environment. A multi-phase learning paradigm was used, with four conditions: baseline, initial learning, reversal learning and aftereffect. In initial learning, the computer altered the position of a simulated arm endpoint used for movement feedback by shifting its apparent location diagonally, thereby requiring both horizontal and vertical compensations. This visual distortion forced subjects to learn new coordinations between what they saw in the virtual environment and the actual position of their limbs, which they had to derive from proprioceptive information (or efference copy). In reversal learning, the sign of the distortion was reversed. Both elderly subjects and PD patients showed learning phase-dependent difficulties. First, elderly controls were slower than young subjects when learning both dimensions of the initial biaxial discordance. However, their performance improved during reversal learning, and as a result elderly and young controls showed similar adaptation rates in that phase. Second, in striking contrast to healthy elderly subjects, PD patients were more profoundly impaired during the reversal phase of learning. PD patients were able to learn the initial biaxial discordance but were on average slower than age-matched controls

  4. Vasculogenesis and angiogenesis in the first trimester human placenta: an innovative 3D study using an immersive Virtual Reality system.

    PubMed

    van Oppenraaij, R H F; Koning, A H J; Lisman, B A; Boer, K; van den Hoff, M J B; van der Spek, P J; Steegers, E A P; Exalto, N

    2009-03-01

    First trimester human villous vascularization is mainly studied by conventional two-dimensional (2D) microscopy. With this 2D technique it is not possible to observe the spatial arrangement of the haemangioblastic cords and vessels, the transition of cords into vessels, or the transition of vasculogenesis to angiogenesis. Confocal laser scanning microscopy (CLSM) allows for three-dimensional (3D) reconstruction of images of early pregnancy villous vascularization. These 3D reconstructions, however, are normally analyzed on a 2D medium, lacking depth perception. We performed a descriptive morphologic study, using an immersive Virtual Reality system to utilize the third dimension fully. This innovative 3D technique visualizes 3D datasets as enlarged 3D holograms and provided detailed insight into the spatial arrangement of first trimester villous vascularization, the beginning of lumen formation within various junctions of haemangioblastic cords between 5 and 7 weeks gestational age, and the gradual transition of vasculogenesis to angiogenesis. This innovative immersive Virtual Reality system enables new perspectives for vascular research and will be implemented in future investigations. PMID:19185915

  5. Versatile, Immersive, Creative and Dynamic Virtual 3-D Healthcare Learning Environments: A Review of the Literature

    PubMed Central

    2008-01-01

    The author provides a critical overview of three-dimensional (3-D) virtual worlds and “serious gaming” that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debatable, and variables influencing its adoption by academics, healthcare professionals, and business executives, such as increased knowledge, self-directed learning, and peer collaboration, are examined alongside various Web 2.0/3.0 applications. There is a need for more empirical research in order to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Rogers’ Diffusion of Innovations Theory and Siemens’ Connectivism Theory for today’s learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare. PMID:18762473

  6. Versatile, immersive, creative and dynamic virtual 3-D healthcare learning environments: a review of the literature.

    PubMed

    Hansen, Margaret M

    2008-01-01

    The author provides a critical overview of three-dimensional (3-D) virtual worlds and "serious gaming" that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debatable, and variables influencing its adoption by academics, healthcare professionals, and business executives, such as increased knowledge, self-directed learning, and peer collaboration, are examined alongside various Web 2.0/3.0 applications. There is a need for more empirical research in order to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Rogers' Diffusion of Innovations Theory and Siemens' Connectivism Theory for today's learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare. PMID:18762473

  7. Three-dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues.

    PubMed

    Calì, Corrado; Baghabra, Jumana; Boges, Daniya J; Holst, Glendon R; Kreshuk, Anna; Hamprecht, Fred A; Srinivasan, Madhusudhanan; Lehväslaiho, Heikki; Magistretti, Pierre J

    2016-01-01

    Advances in the application of electron microscopy (EM) to serial imaging are opening doors to new ways of analyzing cellular structure. New and improved algorithms and workflows for manual and semiautomated segmentation allow us to observe the spatial arrangement of the smallest cellular features with unprecedented detail in full three dimensions. From larger samples, higher complexity models can be generated; however, they pose new challenges to data management and analysis. Here we review some currently available solutions and present our approach in detail. We use the fully immersive virtual reality (VR) environment CAVE (cave automatic virtual environment), a room in which we can project a cellular reconstruction and visualize it in 3D, to step into a world created with Blender, a free, fully customizable 3D modeling package with NeuroMorph plug-ins for visualization and analysis of EM preparations of brain tissue. Our workflow allows for full and fast reconstructions of volumes of brain neuropil using ilastik, a software tool for semiautomated segmentation of EM stacks. With this visualization environment, we can walk into the model containing neuronal and astrocytic processes to study the spatial distribution of glycogen granules, a major energy source that is selectively stored in astrocytes. The use of CAVE was key to the observation of a nonrandom distribution of glycogen, and led us to develop tools to quantitatively analyze glycogen clustering and proximity to other subcellular features. PMID:26179415
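
    The abstract does not give the authors' analysis code; the following is a minimal sketch, assuming segmented glycogen granule and membrane coordinates are available as (N, 3) position arrays, of how clustering and proximity could be quantified with nearest-neighbour queries. The function name, radius, and units are illustrative assumptions.

    # Hypothetical helper, not the authors' tool: nearest-neighbour statistics
    # for glycogen granules and their proximity to a set of boundary points.
    import numpy as np
    from scipy.spatial import cKDTree

    def clustering_stats(granules, boundary_pts, cluster_radius=500.0):
        granule_tree = cKDTree(granules)
        nn_dist, _ = granule_tree.query(granules, k=2)      # k=2: nearest *other* granule
        mean_spacing = nn_dist[:, 1].mean()
        dist_to_boundary, _ = cKDTree(boundary_pts).query(granules, k=1)
        neighbours = granule_tree.query_ball_point(granules, r=cluster_radius)
        mean_neighbours = np.mean([len(n) - 1 for n in neighbours])  # exclude self
        return mean_spacing, dist_to_boundary.mean(), mean_neighbours

    # Example with random points standing in for segmented EM coordinates (nm).
    rng = np.random.default_rng(0)
    print(clustering_stats(rng.uniform(0, 5000, (200, 3)),
                           rng.uniform(0, 5000, (50, 3))))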

  8. L2 Immersion in 3D Virtual Worlds: The Next Thing to Being There?

    ERIC Educational Resources Information Center

    Paillat, Edith

    2014-01-01

    Second Life is one of the many three-dimensional virtual environments accessible through a computer and a fast broadband connection. Thousands of participants connect to this platform to interact virtually with the world, join international communities of practice and, for some, role play groups. Unlike online role play games however, Second Life…

  9. Immersive 3D geovisualisation in higher education

    NASA Astrophysics Data System (ADS)

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2014-05-01

    Through geovisualisation we explore spatial data, analyse it with respect to specific questions, synthesise results, and present and communicate them to a specific audience (MacEachren & Kraak 1997). After centuries of paper maps, the means to represent and visualise our physical environment and its abstract qualities have changed dramatically since the 1990s - and so have the methods for using geovisualisation in teaching. Whereas some people might still consider the traditional classroom the ideal setting for teaching and learning geographic relationships and their mapping, we used a 3D CAVE (computer-animated virtual environment) as the environment for a problem-oriented learning project called "GEOSimulator". Focussing on this project, we empirically investigated whether a technological advance like the CAVE makes 3D visualisation, including 3D geovisualisation, not only an important tool for businesses (Abulrub et al. 2012) and for the public (Wissen et al. 2008), but also for educational purposes, for which it has hardly been used so far. The 3D CAVE is a three-sided visualisation platform that allows for immersive and stereoscopic visualisation of observed and simulated spatial data. We examined the benefits of immersive 3D visualisation for geographic research and education and synthesised three fundamental technology-based visual aspects: First, the conception and comprehension of space and location does not need to be generated, but is instantaneously and intuitively present through stereoscopy. Second, optical immersion into virtual reality strengthens this spatial perception, which is particularly important for complex 3D geometries. And third, a significant benefit is interactivity, which is enhanced through immersion and allows for multi-discursive and dynamic data exploration and knowledge transfer. Based on our problem-oriented learning project, which concentrates on a case study on flood risk management at the Wilde Weisseritz in Germany, a river

  10. Quality of Grasping and the Role of Haptics in a 3-D Immersive Virtual Reality Environment in Individuals With Stroke.

    PubMed

    Levin, Mindy F; Magdalon, Eliane C; Michaelsen, Stella M; Quevedo, Antonio A F

    2015-11-01

    Reaching and grasping parameters with and without haptic feedback were characterized in people with chronic stroke. Twelve (67 ± 10 years) individuals with chronic stroke and arm/hand paresis (Fugl-Meyer Assessment-Arm: ≥ 46/66 pts) participated. Three-dimensional (3-D) temporal and spatial kinematics of reaching and grasping movements to three objects (can: cylindrical grasp; screwdriver: power grasp; pen: precision grasp) in a physical environment (PE) with and without additional haptic feedback and a 3-D virtual environment (VE) with haptic feedback were recorded. Participants reached, grasped and transported physical and virtual objects using similar movement strategies in all conditions. Reaches made in the VE were less smooth and slower compared with those in the PE. Arm and trunk kinematics were similar in both environments and glove conditions. For grasping, stroke subjects preserved aperture scaling to object size but used wider hand apertures with longer delays between the times of maximal reaching velocity and maximal grasping aperture. Wearing the glove decreased reaching velocity. Our results in a small group of subjects suggest that providing haptic information in the VE did not affect the validity of reaching and grasping movements. Small disparities in movement parameters between environments may be due to differences in perception of object distance in the VE. Reach-to-grasp kinematics to smaller objects may be improved by better 3-D rendering. Comparable kinematics between environments and conditions are encouraging for the incorporation of high-quality VEs in rehabilitation programs aimed at improving upper limb recovery. PMID:25594971
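
    As an illustration of the kinematic measures named above (peak reaching velocity, maximum grip aperture, and the delay between them), here is a hedged sketch assuming wrist and finger-marker trajectories sampled at a fixed rate; the marker names and sampling rate are assumptions, not the study's recording setup.

    import numpy as np

    def reach_grasp_timing(wrist_xyz, thumb_xyz, index_xyz, fs=100.0):
        # wrist_xyz, thumb_xyz, index_xyz: (T, 3) position arrays in metres; fs in Hz.
        velocity = np.linalg.norm(np.gradient(wrist_xyz, 1.0 / fs, axis=0), axis=1)
        aperture = np.linalg.norm(thumb_xyz - index_xyz, axis=1)   # thumb-index distance
        t_peak_velocity = np.argmax(velocity) / fs
        t_max_aperture = np.argmax(aperture) / fs
        return {
            "peak_velocity_m_per_s": float(velocity.max()),
            "max_aperture_m": float(aperture.max()),
            "delay_velocity_to_aperture_s": t_max_aperture - t_peak_velocity,
        }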

  11. Immersive 3D Visualization of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Schaaff, A.; Berthier, J.; Da Rocha, J.; Deparis, N.; Derriere, S.; Gaultier, P.; Houpin, R.; Normand, J.; Ocvirk, P.

    2015-09-01

    Immersive 3D visualization, or Virtual Reality in our study, was previously dedicated to specific uses (research, flight simulators, etc.). The investment in infrastructure and its cost were reserved for large laboratories or companies. Lately we have seen the development of immersive 3D headsets intended for wide distribution, for example the Oculus Rift and the Sony Morpheus projects. The usual reaction is to say that these tools are primarily intended for games, since it is easy to imagine a player in a virtual environment and the added value over conventional 2D screens. Yet it is likely that there are many applications in the professional field if these tools become common. Introducing this technology into existing applications or new developments makes sense only if its interest is properly evaluated. The use in astronomy is clear for education: it is easy to imagine mobile and lightweight planetariums or to reproduce poorly accessible environments (e.g., large instruments). In contrast, in the field of professional astronomy the use is probably less obvious, and studies are required to determine the most appropriate applications and to assess the contributions compared with other display modes.

  12. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  13. Use of Three-Dimensional (3-D) Immersive Virtual Worlds in K-12 And Higher Education Settings: A Review of the Research

    ERIC Educational Resources Information Center

    Hew, Khe Foon; Cheung, Wing Sum

    2010-01-01

    In this paper, we review past empirical research studies on the use of three-dimensional immersive virtual worlds in education settings such as K-12 and higher education. Three questions guided our review: (1) How are virtual worlds (eg, "Active Worlds", "Second Life") used by students and teachers? (2) What types of research methods have been…

  14. Enhancing Pre-Service Teachers' Awareness to Pupils' Test-Anxiety with 3D Immersive Simulation

    ERIC Educational Resources Information Center

    Passig, David; Moshe, Ronit

    2008-01-01

    This study investigated whether participating in a 3D immersive virtual reality world simulating the experience of test-anxiety would affect preservice teachers' awareness to the phenomenon. Ninety subjects participated in this study, and were divided into three groups. The experimental group experienced a 3D immersive simulation which made…

  15. Designing Virtual Museum Using Web3D Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghai

    Virtual reality technology (VRT) inherently has the potential to construct an effective learning environment owing to its 3I characteristics: Interaction, Immersion and Imagination. With the development of VRT, it is now applied to education in more profound ways, and the Virtual Museum is one such application. The Virtual Museum is based on Web3D technology, and extensibility is its most important design factor. Considering the advantages and disadvantages of each Web3D technology, VRML, Cult3D and Viewpoint were chosen. A web chatroom based on Flash and ASP technology has also been created in order to make the Virtual Museum an interactive learning environment.

  16. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  17. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally the only way to look inside the human body without opening it up was by means of two dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, they are still being viewed on 2D screens. However, this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics as well as for (bio)medical research.

  18. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.

  19. "Immersed in Learning": Supporting Creative Practice in Virtual Worlds

    ERIC Educational Resources Information Center

    Doyle, Denise

    2010-01-01

    The "Immersed in Learning" project began in 2007 to evaluate the use of 3D virtual worlds as a teaching and learning tool in undergraduate programmes in digital media at the University of Wolverhampton, UK. A question that the research set out to explore was what were the benefits of integrating 3D immersive learning with face-to-face learning for…

  20. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  1. Full Immersive Virtual Environment Cave[TM] in Chemistry Education

    ERIC Educational Resources Information Center

    Limniou, M.; Roberts, D.; Papadopoulos, N.

    2008-01-01

    By comparing two-dimensional (2D) chemical animations designed for the computer desktop with three-dimensional (3D) chemical animations designed for the fully immersive virtual reality environment CAVE[TM], we studied how virtual reality environments could raise students' interest and motivation for learning. By using 3ds max[TM], we can visualize…

  2. 3DIVS: 3-Dimensional Immersive Virtual Sculpting

    SciTech Connect

    Kuester, F; Duchaineau, M A; Hamann, B; Joy, K I; Uva, A E

    2001-10-03

    Virtual Environments (VEs) have the potential to revolutionize traditional product design by enabling the transition from conventional CAD to fully digital product development. The presented prototype system targets closing the "digital gap" introduced by the need for physical models, such as clay models or mockups, in the traditional product design and evaluation cycle. We describe a design environment that provides an intuitive human-machine interface for the creation and manipulation of three-dimensional (3D) models in a semi-immersive design space, focusing on ease of use and increased productivity for both designers and CAD engineers.

  3. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are being transferred into 3D versions according to the specific content to be displayed. Virtual worlds (VWs) have become a promising area of interest because of the possibility of dynamically modifying content and of multi-user cooperation when solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also enhanced by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to such phenomena as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information with regard to the specific type of visualization and different levels of immersion.

  4. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions at different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared with the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared with animated computer avatars.
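
    The abstract does not name the objective metrics used; as a hedged illustration of one common geometric measure for decoded point clouds, the sketch below computes a symmetric RMS point-to-point distance between the original and decoded clouds. The function name and units are assumptions for illustration only.

    import numpy as np
    from scipy.spatial import cKDTree

    def symmetric_rms_distance(original, decoded):
        # original, decoded: (N, 3) and (M, 3) arrays of reconstructed point positions.
        d_fwd, _ = cKDTree(decoded).query(original)    # original -> nearest decoded point
        d_bwd, _ = cKDTree(original).query(decoded)    # decoded -> nearest original point
        return max(np.sqrt(np.mean(d_fwd ** 2)), np.sqrt(np.mean(d_bwd ** 2)))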

  5. 3D Virtual Reality for Teaching Astronomy

    NASA Astrophysics Data System (ADS)

    Speck, Angela; Ruzhitskaya, L.; Laffey, J.; Ding, N.

    2012-01-01

    We are developing 3D virtual learning environments (VLEs) as learning materials for an undergraduate astronomy course, which will utilize advances both in available technologies and in our understanding of the social nature of learning. These learning materials will be used to test whether such VLEs can indeed augment science learning so that it is more engaging, active, visual and effective. Our project focuses on the challenges and requirements of introductory college astronomy classes. Here we present our virtual world of the Jupiter system and how we plan to implement it to allow students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system, encouraging their imagination, curiosity, and motivation. The VLE can allow students to work individually or collaboratively. The 3D world also provides an opportunity for research in astronomy education to investigate the impact of social interaction, gaming features, and the use of manipulatives offered by a learning tool on students’ motivation and learning outcomes. Use of this VLE is also a valuable source for exploring how learners’ spatial awareness can be enhanced by working in a 3D environment. We will present the Jupiter-system environment along with a preliminary study of the efficacy and usability of our Jupiter 3D VLE.

  6. [3D virtual endoscopy of heart].

    PubMed

    Du, Aan; Yang, Xin; Xue, Haihong; Yao, Liping; Sun, Kun

    2012-10-01

    In this paper, we present a virtual endoscopy (VE) system for the diagnosis of heart disease that is efficient, affordable, and easy to popularize for viewing the interior of the heart. Dual-source CT (DSCT) data were used as the primary data in our system. The 3D structure of the virtual heart was reconstructed with GPU-based 3D texture mapping and could be displayed dynamically in real time. During real-time display, we could not only observe the inside of the heart chambers but also examine the 3D data, already clipped according to the physician's needs, from new viewing angles. For observation we provided both an interactive mode and an automatic mode. In the automatic mode, we used Dijkstra's algorithm, with the 3D Euclidean distance as the weighting factor, to find the view path quickly, and used the view path to calculate the four-chamber plane. PMID:23198444
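
    A minimal sketch of the path search described above, assuming the navigable interior has been reduced to a graph of 3D points with adjacency lists: Dijkstra's algorithm with the Euclidean distance between connected points as the edge weight. The data layout and function name are illustrative assumptions, not the paper's implementation.

    import heapq
    import math

    def view_path(points, edges, start, goal):
        # points: {node_id: (x, y, z)}; edges: {node_id: [neighbour ids]}.
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, math.inf):
                continue                                       # stale queue entry
            for v in edges[u]:
                nd = d + math.dist(points[u], points[v])       # 3D Euclidean edge weight
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [goal], goal                              # assumes goal is reachable
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]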

  7. Immersive virtual reality simulations in nursing education.

    PubMed

    Kilmon, Carol A; Brown, Leonard; Ghosh, Sumit; Mikitiuk, Artur

    2010-01-01

    This article explores immersive virtual reality as a potential educational strategy for nursing education and describes an immersive learning experience now being developed for nurses. This pioneering project is a virtual reality application targeting speed and accuracy of nurse response in emergency situations requiring cardiopulmonary resuscitation. Other potential uses and implications for the development of virtual reality learning programs are discussed. PMID:21086871

  8. Virtual VMASC: A 3D Game Environment

    NASA Technical Reports Server (NTRS)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like those to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building to not only meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained in future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, and the technologies used, such as XNA Game Studio, the .NET framework, and Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the result of our evaluation and the lessons learned from our effort.

  9. Virtual reality 3D headset based on DMD light modulators

    SciTech Connect

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  10. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas; they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  11. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  12. Social Interaction Development through Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Beach, Jason; Wendt, Jeremy

    2014-01-01

    The purpose of this pilot study was to determine if participants could improve their social interaction skills by participating in a virtual immersive environment. The participants used a developing virtual reality head-mounted display to engage themselves in a fully-immersive environment. While in the environment, participants had an opportunity…

  13. Immersive video for virtual tourism

    NASA Astrophysics Data System (ADS)

    Hernandez, Luis A.; Taibo, Javier; Seoane, Antonio J.

    2001-11-01

    This paper describes a new panoramic, 360-degree video system and its use in a real application for virtual tourism. The development of this system has required the design of new hardware for multi-camera recording, and software for video processing in order to assemble the panorama frames and to play back the resulting high-resolution video footage on a regular PC. The system makes use of new VR display hardware, such as the WindowVR, in order to make the view dependent on the viewer's spatial orientation and so enhance immersiveness. There are very few examples of similar technologies, and the existing ones are extremely expensive and/or impossible to implement on personal computers with acceptable quality. The idea of the system starts from the concept of the panorama picture, developed in technologies such as QuickTimeVR. This idea is extended to the concept of the panorama frame, which leads to panorama video. However, many problems have to be solved to implement this simple scheme. Data acquisition involves simultaneous footage recording in every direction, and later processing to convert every set of frames into a single high-resolution panorama frame. Since there is no common hardware capable of 4096x512 video playback at a 25 fps rate, the video must be split into smaller strips, which the system must manage in order to fetch the right frames of the right parts as the user's movement demands. As the system must be immersive, the physical interface for watching the 360-degree video is a WindowVR, that is, a flat screen with an orientation tracker that the user holds in his hands, moving it as if it were a virtual window through which the city and its activity are shown.
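
    The abstract describes splitting each 4096x512 panorama frame into strips and fetching only those covering the current view. A hypothetical sketch of that selection step is below; the strip width, field of view, and function name are assumptions, not details given in the paper.

    def visible_strips(yaw_deg, fov_deg=90.0, panorama_width=4096, strip_width=512):
        # Return indices of the strips that intersect the current field of view.
        n_strips = panorama_width // strip_width          # e.g. 8 strips of 512 px
        deg_per_px = 360.0 / panorama_width
        left_px = ((yaw_deg - fov_deg / 2.0) / deg_per_px) % panorama_width
        right_px = left_px + fov_deg / deg_per_px
        first = int(left_px // strip_width)
        last = int(right_px // strip_width)
        return [i % n_strips for i in range(first, last + 1)]

    print(visible_strips(350.0))   # wraps around the 0/360 degree seam -> [6, 7, 0]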

  14. 3D Virtual Reality Check: Learner Engagement and Constructivist Theory

    ERIC Educational Resources Information Center

    Bair, Richard A.

    2013-01-01

    The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement of 3D tools and specify learning gains that educators and the institutions, which are funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…

  15. Faculty Perceptions of Instruction in Collaborative Virtual Immersive Learning Environments in Higher Education

    ERIC Educational Resources Information Center

    Janson, Barbara

    2013-01-01

    Use of 3D (three-dimensional) avatars in a synchronous virtual world for educational purposes has only been adopted for about a decade. Universities are offering synchronous, avatar-based virtual courses for credit - within 3D worlds (Luo & Kemp, 2008). Faculty and students immerse themselves, via avatars, in virtual worlds and communicate…

  16. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for an effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for a scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects of various shapes, colors, and sizes, whose XYZ positions and visual attributes encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this
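
    As an illustration of the encoding idea described above (three dimensions mapped to XYZ and further dimensions mapped to size, color, and shape), here is a minimal Python sketch; the attribute choices, shape list, and normalization are assumptions, not the project's Unity/C# implementation.

    import numpy as np

    SHAPES = ["sphere", "cube", "cone", "cylinder"]      # hypothetical glyph types

    def records_to_glyphs(data):
        # data: (N, 7) array; dims 1-3 -> XYZ, 4 -> size, 5 -> color, 6 -> shape, 7 -> label.
        lo, hi = data.min(axis=0), data.max(axis=0)
        norm = (data - lo) / np.where(hi > lo, hi - lo, 1.0)
        glyphs = []
        for row in norm:
            glyphs.append({
                "position": tuple(row[0:3]),
                "size": 0.1 + 0.9 * row[3],
                "color": (row[4], 0.2, 1.0 - row[4]),              # blue-to-red ramp
                "shape": SHAPES[min(int(row[5] * len(SHAPES)), len(SHAPES) - 1)],
                "label": f"dim7 = {row[6]:.2f}",                    # shown on click
            })
        return glyphs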

  17. 3D Immersive Visualization: An Educational Tool in Geosciences

    NASA Astrophysics Data System (ADS)

    Pérez-Campos, N.; Cárdenas-Soto, M.; Juárez-Casas, M.; Castrejón-Pineda, R.

    2007-05-01

    3D immersive visualization is an innovative tool currently used in various disciplines, such as medicine, architecture, engineering, video games, etc. Recently, the Universidad Nacional Autónoma de México (UNAM) mounted a visualization theater (Ixtli) with leading-edge technology, for academic and research purposes that require immersive 3D tools for a better understanding of the concepts involved. The Division of Engineering in Earth Sciences of the School of Engineering, UNAM, is running a project focused on visualization of geoscience data. Its objective is to incorporate educational material in geoscience courses in order to support and to improve the teaching-learning process, especially in well-known difficult topics for students. As part of the project, professors and students are trained in visualization techniques, then their data are adapted and visualized in Ixtli as part of a class or a seminar, where all the attendants can interact, not only among each other but also with the object under study. As part of our results, we present specific examples used in basic geophysics courses, such as interpreted seismic cubes, seismic-wave propagation models, and structural models from bathymetric, gravimetric and seismological data; as well as examples from ongoing applied projects, such as a modeled SH upward wave, the occurrence of an earthquake cluster in 1999 in the Popocatepetl volcano, and a risk atlas from Delegación Alvaro Obregón in Mexico City. All these examples, plus those to come, constitute a library for students and professors willing to explore another dimension of the teaching-learning process. Furthermore, this experience can be enhanced by rich discussions and interactions by videoconferences with other universities and researchers.

  18. Learning in 3-D Virtual Worlds: Rethinking Media Literacy

    ERIC Educational Resources Information Center

    Qian, Yufeng

    2008-01-01

    3-D virtual worlds, as a new form of learning environments in the 21st century, hold great potential in education. Learning in such environments, however, demands a broader spectrum of literacy skills. This article identifies a new set of media literacy skills required in 3-D virtual learning environments by reviewing exemplary 3-D virtual…

  19. 3D Immersive Patient Simulators and Their Impact on Learning Success: A Thematic Review

    PubMed Central

    Wahba, Roger; Chang, De-Hua; Plum, Patrick; Hölscher, Arnulf H; Stippel, Dirk L

    2015-01-01

    Background Immersive patient simulators (IPSs) combine the simulation of virtual patients with a three-dimensional (3D) environment and, thus, allow an illusionary immersion into a synthetic world, similar to computer games. Playful learning in a 3D environment is motivating and allows repetitive training and internalization of medical workflows (ie, procedural knowledge) without compromising real patients. The impact of this innovative educational concept on learning success requires review of feasibility and validity. Objective It was the aim of this paper to conduct a survey of all immersive patient simulators currently available. In addition, we address the question of whether the use of these simulators has an impact on knowledge gain by summarizing the existing validation studies. Methods A systematic literature search via PubMed was performed using predefined inclusion criteria (ie, virtual worlds, focus on education of medical students, validation testing) to identify all available simulators. Validation testing was defined as the primary end point. Results There are currently 13 immersive patient simulators available. Of these, 9 are Web-based simulators and represent feasibility studies. None of these simulators are used routinely for student education. The workstation-based simulators are commercially driven and show a higher quality in terms of graphical quality and/or data content. Out of the studies, 1 showed a positive correlation between simulated content and real content (ie, content validity). There was a positive correlation between the outcome of simulator training and alternative training methods (ie, concordance validity), and a positive coherence between measured outcome and future professional attitude and performance (ie, predictive validity). Conclusions IPSs can promote learning and consolidation of procedural knowledge. The use of immersive patient simulators is still marginal, and technical and educational approaches are heterogeneous

  20. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.

  1. Virtual reality 3D headset based on DMD light modulators

    NASA Astrophysics Data System (ADS)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMD). Current methods for presenting information for virtual reality are focused on either polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or miniature LCD or LED displays often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, and reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micro-mirrors delivering 720p resolution displays in a small form-factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design concept is described in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina, resulting in a virtual retinal display.

  2. Sensorized Garment Augmented 3D Pervasive Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Gulrez, Tauseef; Tognetti, Alessandro; de Rossi, Danilo

    Virtual reality (VR) technology has matured to a point where humans can navigate in virtual scenes; however, providing them with a comfortable, fully immersive role in VR remains a challenge. Currently available sensing solutions do not provide ease of deployment, particularly in the seated position due to sensor placement restrictions over the body, and optic sensing requires a restricted indoor environment to track body movements. Here we present a 52-sensor-laden garment interfaced with VR, which offers both portability and unencumbered user movement in a VR environment. This chapter addresses the systems engineering aspects of our pervasive computing solution for interactive sensorized 3D VR and presents the initial results and future research directions. Participants navigated in a virtual art gallery using natural body movements that were detected by their wearable sensor shirt and then mapped to electrical control signals responsible for VR scene navigation. The initial results are positive, and offer many opportunities for use in computationally intelligent man-machine multimedia control.

  3. A geoscience perspective on immersive 3D gridded data visualization

    NASA Astrophysics Data System (ADS)

    Billen, Magali I.; Kreylos, Oliver; Hamann, Bernd; Jadamec, Margarete A.; Kellogg, Louise H.; Staadt, Oliver; Sumner, Dawn Y.

    2008-09-01

    We describe visualization software, Visualizer, that was developed specifically for interactive, visual exploration in immersive virtual reality (VR) environments. Visualizer uses carefully optimized algorithms and data structures to support the high frame rates required for immersion and the real-time feedback required for interactivity. As an application developed for VR from the ground up, Visualizer realizes benefits that usually cannot be achieved by software initially developed for the desktop and later ported to VR. However, Visualizer can also be used on desktop systems (unix/linux-based operating systems including Mac OS X) with a similar level of real-time interactivity, bridging the "software gap" between desktop and VR that has been an obstacle for the adoption of VR methods in the Geosciences. While many of the capabilities of Visualizer are already available in other software packages used in a desktop environment, the features that distinguish Visualizer are: (1) Visualizer can be used in any VR environment including the desktop, GeoWall, or CAVE, (2) in non-desktop environments the user interacts with the data set directly using a wand or other input devices instead of working indirectly via dialog boxes or text input, (3) on the desktop, Visualizer provides real-time interaction with very large data sets that cannot easily be viewed or manipulated in other software packages. Three case studies are presented that illustrate the direct scientific benefits realized by analyzing data or simulation results with Visualizer in a VR environment. We also address some of the main obstacles to widespread use of VR environments in scientific research with a user study that shows Visualizer is easy to learn and to use in a VR environment and can be as effective on desktop systems as native desktop applications.

  4. A 3D Immersive Fault Visualizer and Editor

    NASA Astrophysics Data System (ADS)

    Yikilmaz, M. B.; van Aalsburg, J.; Kreylos, O.; Kellogg, L. H.; Rundle, J. B.

    2007-12-01

    Digital fault models are an important resource for the study of earthquake dynamics, fault-earthquake interactions and seismicity. Once digitized these fault models can be used in Finite Element Model (FEM) programs or earthquake simulations such as Virtual California (VC). However, these models are often difficult to create, requiring a substantial amount of time to generate the fault topology and compute the properties of the individual segments. To aid in the construction of such models we have developed an immersive virtual reality (VR) application to visualize and edit fault models. Our program is designed to run in a CAVE (walk-in VR environment), but also works in a wide range of other environments, including desktop systems and GeoWalls. It is being developed at the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://www.keckcaves.org). Immersive VR environments are ideal for visualizing and manipulating three- dimensional data sets. Our program allows users to create new models or modify existing ones; for example by repositioning individual fault-segments, by changing the dip angle, or by modifying (or assigning) the value of a property associated with a particular fault segment (i.e. slip rate). With the addition of high resolution Digital Elevation Models (DEM) the user can accurately add new segments to an existing model or create a fault model entirely from scratch. Interactively created or modified models can be written to XML files at any time; from there the data may easily be converted into various formats required by the analysis software or simulation. We believe that the ease of interaction provided by VR technology is ideally suited to the problem of creating and editing digital fault models. Our software provides the user with an intuitive environment for visualizing and editing fault model data. This translates not only into less time spent creating fault models, but also enables the researcher to

  5. Digital Immersive Virtual Environments and Instructional Computing

    ERIC Educational Resources Information Center

    Blascovich, Jim; Beall, Andrew C.

    2010-01-01

    This article reviews theory and research relevant to the development of digital immersive virtual environment-based instructional computing systems. The review is organized within the context of a multidimensional model of social influence and interaction within virtual environments that models the interaction of four theoretical factors: theory…

  6. Design and Implementation of a 3D Multi-User Virtual World for Language Learning

    ERIC Educational Resources Information Center

    Ibanez, Maria Blanca; Garcia, Jose Jesus; Galan, Sergio; Maroto, David; Morillo, Diego; Kloos, Carlos Delgado

    2011-01-01

    The best way to learn is by having a good teacher and the best language learning takes place when the learner is immersed in an environment where the language is natively spoken. 3D multi-user virtual worlds have been claimed to be useful for learning, and the field of exploiting them for education is becoming more and more active thanks to the…

  7. Learning in 3D Virtual Environments: Collaboration and Knowledge Spirals

    ERIC Educational Resources Information Center

    Burton, Brian G.; Martin, Barbara N.

    2010-01-01

    The purpose of this case study was to determine if learning occurred within a 3D virtual learning environment by determining if elements of collaboration and Nonaka and Takeuchi's (1995) knowledge spiral were present. A key portion of this research was the creation of a Virtual Learning Environment. This 3D VLE utilized the Torque Game Engine…

  8. Modulation of cortical activity in 2D versus 3D virtual reality environments: an EEG study.

    PubMed

    Slobounov, Semyon M; Ray, William; Johnson, Brian; Slobounov, Elena; Newell, Karl M

    2015-03-01

    There is growing empirical evidence that virtual reality (VR) is valuable for education, training, entertainment and medical rehabilitation due to its capacity to represent real-life events and situations. However, the neural mechanisms underlying behavioral confounds in VR environments are still poorly understood. In two experiments, we examined the effect of fully immersive 3D stereoscopic presentations and less immersive 2D VR environments on brain functions and behavioral outcomes. In Experiment 1 we examined behavioral and neural underpinnings of spatial navigation tasks using electroencephalography (EEG). In Experiment 2, we examined EEG correlates of postural stability and balance. Our major findings showed that fully immersive 3D VR induced a higher subjective sense of presence along with an enhanced success rate of spatial navigation compared to 2D. In Experiment 1, frontal midline theta power (FM-theta) was significantly higher during the encoding phase of route presentation in the 3D VR. In Experiment 2, the 3D VR resulted in greater postural instability and modulation of EEG patterns as a function of 3D versus 2D environments. The findings support the inference that the fully immersive 3D enriched environment requires allocation of more brain and sensory resources for cognitive/motor control during both tasks than 2D presentations. This is further evidence that 3D VR tasks using EEG may be a promising approach for performance enhancement and potential applications in clinical/rehabilitation settings. PMID:25448267
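
    For readers unfamiliar with the FM-theta measure mentioned above, here is a hedged sketch of how theta-band (4-8 Hz) power at a frontal-midline electrode could be estimated from a Welch power spectral density; the channel, sampling rate, and band edges are illustrative assumptions, not the study's exact pipeline.

    import numpy as np
    from scipy.signal import welch

    def fm_theta_power(eeg_fz, fs=500.0, band=(4.0, 8.0)):
        # eeg_fz: 1-D array of samples from a frontal-midline electrode (e.g. Fz).
        freqs, psd = welch(eeg_fz, fs=fs, nperseg=int(2 * fs))   # 2-second windows
        mask = (freqs >= band[0]) & (freqs <= band[1])
        df = freqs[1] - freqs[0]
        return psd[mask].sum() * df                              # integrated theta power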

  9. Contextual EFL Learning in a 3D Virtual Environment

    ERIC Educational Resources Information Center

    Lan, Yu-Ju

    2015-01-01

    The purposes of the current study are to develop virtually immersive EFL learning contexts for EFL learners in Taiwan to preview and review English materials beyond the regular English class schedule. A two-iteration action research study lasting one semester was conducted to evaluate the effects of virtual contexts on learners' EFL learning. 132…

  10. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.
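
    The binaural rendering described above is, in software terms, a convolution of each source with left- and right-ear impulse responses. A minimal sketch of that idea (static, synthetic impulse responses, no head tracking, not the Convolvotron's actual processing) looks roughly like this:

        # Minimal sketch of binaural rendering by convolving a mono source with left/right
        # impulse responses (the software analogue of dedicated convolution hardware).
        # The impulse responses are synthetic placeholders, not measured HRTFs, and there
        # is no head tracking.
        import numpy as np
        from scipy.signal import fftconvolve

        def binauralize(mono, hrir_left, hrir_right):
            """Return a (samples, 2) stereo array from a mono signal and two impulse responses."""
            return np.stack([fftconvolve(mono, hrir_left),
                             fftconvolve(mono, hrir_right)], axis=1)

        fs = 44100
        t = np.arange(0, 1.0, 1 / fs)
        source = 0.3 * np.sin(2 * np.pi * 440 * t)   # 1 s, 440 Hz tone
        hrir_l = np.zeros(64); hrir_l[0] = 1.0       # toy IRs: right ear delayed and attenuated,
        hrir_r = np.zeros(64); hrir_r[20] = 0.6      # as if the source sits to the listener's left
        stereo = binauralize(source, hrir_l, hrir_r)
        print(stereo.shape)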

  11. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be for control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful in programming individual processors. However, they are obviously insufficient to program a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution for this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful to check relationships among a large number of processes or processors) and the time chart (which is useful to check precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning or understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology to enable easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), our prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D
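
    The fusion of a block diagram and a time chart into one 3D space can be made concrete with a small data sketch: keep each process at its block-diagram position in the x-y plane and run event time along z, so a process becomes a vertical lifeline and a message becomes a segment between two lifelines. Everything below (the layout, the event list, the coordinate convention) is a hypothetical illustration, not the authors' implementation.

        # Hypothetical sketch: block-diagram layout in the x-y plane, time along the z axis.
        from dataclasses import dataclass

        @dataclass
        class Process:
            name: str
            x: float   # block-diagram position
            y: float

        @dataclass
        class Message:
            src: str
            dst: str
            t_send: float
            t_recv: float

        def lifeline(proc, t_end):
            """3D segment for a process lifeline from t=0 to t=t_end."""
            return (proc.x, proc.y, 0.0), (proc.x, proc.y, t_end)

        def message_segment(msg, procs):
            """3D segment from the sender at send time to the receiver at receive time."""
            a, b = procs[msg.src], procs[msg.dst]
            return (a.x, a.y, msg.t_send), (b.x, b.y, msg.t_recv)

        procs = {p.name: p for p in [Process("sensor", 0.0, 0.0), Process("controller", 2.0, 1.0)]}
        msgs = [Message("sensor", "controller", t_send=0.5, t_recv=0.7)]
        print(lifeline(procs["sensor"], 1.0))
        print(message_segment(msgs[0], procs))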

  12. Using Immersive Virtual Environments for Certification

    NASA Technical Reports Server (NTRS)

    Lutz, R.; Cruz-Neira, C.

    1998-01-01

    Immersive virtual environments (VEs) technology has matured to the point where it can be utilized as a scientific and engineering problem solving tool. In particular, VEs are starting to be used to design and evaluate safety-critical systems that involve human operators, such as flight and driving simulators, complex machinery training, and emergency rescue strategies.

  13. ESL Teacher Training in 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Kozlova, Iryna; Priven, Dmitri

    2015-01-01

    Although language learning in 3D Virtual Worlds (VWs) has become a focus of recent research, little is known about the knowledge and skills teachers need to acquire to provide effective task-based instruction in 3D VWs and the type of teacher training that best prepares instructors for such an endeavor. This study employs a situated learning…

  14. What Are the Learning Affordances of 3-D Virtual Environments?

    ERIC Educational Resources Information Center

    Dalgarno, Barney; Lee, Mark J. W.

    2010-01-01

    This article explores the potential learning benefits of three-dimensional (3-D) virtual learning environments (VLEs). Drawing on published research spanning two decades, it identifies a set of unique characteristics of 3-D VLEs, which includes aspects of their representational fidelity and aspects of the learner-computer interactivity they…

  15. Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology

    ERIC Educational Resources Information Center

    Fominykh, Mikhail; Prasolova-Forland, Ekaterina

    2012-01-01

    Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…

  16. Virtual 3d City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is essentially a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Three main Geomatics approaches are generally used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third method, researchers use terrestrial images through close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study of these, the paper presents its conclusions, together with a short justification and analysis and the present trends in 3D city modeling. It gives an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has advantages and drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  17. Dynamic 3D echocardiography in virtual reality

    PubMed Central

    van den Bosch, Annemien E; Koning, Anton HJ; Meijboom, Folkert J; McGhie, Jackie S; Simoons, Maarten L; van der Spek, Peter J; Bogers, Ad JJC

    2005-01-01

    Background This pilot study was performed to evaluate whether virtual reality is applicable for three-dimensional echocardiography and if three-dimensional echocardiographic 'holograms' have the potential to become a clinically useful tool. Methods Three-dimensional echocardiographic data sets from 2 normal subjects and from 4 patients with a mitral valve pathological condition were included in the study. The three-dimensional data sets were acquired with the Philips Sonos 7500 echo-system and transferred to the BARCO (Barco N.V., Kortrijk, Belgium) I-Space. Ten independent observers assessed the 6 three-dimensional data sets with and without mitral valve pathology. After 10 minutes' instruction in the I-Space, all of the observers could use the virtual pointer that is necessary to create cut planes in the hologram. Results The 10 independent observers correctly assessed the normal and pathological mitral valves in the holograms (analysis time approximately 10 minutes). Conclusion This report shows that dynamic holographic imaging of three-dimensional echocardiographic data is feasible. However, the applicability and usefulness of this technology in clinical practice is still limited. PMID:16375768

  18. Game-Like Language Learning in 3-D Virtual Environments

    ERIC Educational Resources Information Center

    Berns, Anke; Gonzalez-Pardo, Antonio; Camacho, David

    2013-01-01

    This paper presents our recent experiences with the design of game-like applications in 3-D virtual environments as well as its impact on student motivation and learning. Therefore our paper starts with a brief analysis of the motivational aspects of videogames and virtual worlds (VWs). We then go on to explore the possible benefits of both in the…

  19. Improvements in education in pathology: virtual 3D specimens.

    PubMed

    Kalinski, Thomas; Zwönitzer, Ralf; Jonczyk-Weber, Thomas; Hofmann, Harald; Bernarding, Johannes; Roessner, Albert

    2009-01-01

    Virtual three-dimensional (3D) specimens correspond to 3D visualizations of real pathological specimens on a computer display. We describe a simple method for the digitalization of such specimens from high-quality digital images. The images were taken during a whole rotation of a specimen, and merged together into a JPEG2000 multi-document file. The files were made available in the internet (http://patho.med.uni-magdeburg.de/research.shtml) and obtained very positive ratings by medical students. Virtual 3D specimens expand the application of digital techniques in pathology, and will contribute significantly to the successful introduction of knowledge databases and electronic learning platforms. PMID:19457621

  20. Foreign language learning in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Chang, Benjamin; Sheldon, Lee; Si, Mei; Hand, Anton

    2012-03-01

    Virtual reality has long been used for training simulations in fields from medicine to welding to vehicular operation, but simulations involving more complex cognitive skills present new design challenges. Foreign language learning, for example, is increasingly vital in the global economy, but computer-assisted education is still in its early stages. Immersive virtual reality is a promising avenue for language learning as a way of dynamically creating believable scenes for conversational training and role-play simulation. Visual immersion alone, however, only provides a starting point. We suggest that the addition of social interactions and motivated engagement through narrative gameplay can lead to truly effective language learning in virtual environments. In this paper, we describe the development of a novel application for teaching Mandarin using CAVE-like VR, physical props, human actors and intelligent virtual agents, all within a semester-long multiplayer mystery game. Students travel (virtually) to China on a class field trip, which soon becomes complicated with intrigue and mystery surrounding the lost manuscript of an early Chinese literary classic. Virtual reality environments such as the Forbidden City and a Beijing teahouse provide the setting for learning language, cultural traditions, and social customs, as well as the discovery of clues through conversation in Mandarin with characters in the game.

  1. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    ERIC Educational Resources Information Center

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  2. Identifying Virtual 3D Geometric Shapes with a Vibrotactile Glove.

    PubMed

    Martínez, Jonatan; García, Arturo; Oliver, Miguel; Molina, José Pascual; González, Pascual

    2016-01-01

    The emergence of off-screen interaction devices is bringing the field of virtual reality to a broad range of applications where virtual objects can be manipulated without the use of traditional peripherals. However, to facilitate object interaction, other stimuli such as haptic feedback are necessary to improve the user experience. To enable the identification of virtual 3D objects without visual feedback, a haptic display based on a vibrotactile glove and multiple points of contact gives users an enhanced sensation of touching a virtual object with their hands. Experimental results demonstrate the capacity of this technology in practical applications. PMID:25137722

  3. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  4. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  5. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

    In this paper, we present a 3D virtual phantom design software package, developed based on object-oriented programming methodology and dedicated to medical physics research. This software was named Magical Phantom (MPhantom) and is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful software package for 3D phantom configuration and has passed application tests on real scenes. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and X-ray imaging reconstruction algorithm research. PMID:24804488
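
    MPhantom's internals are not given in this abstract, but its first step, building an arbitrary phantom as a voxel volume that can later be written out slice by slice as CT images, can be sketched as follows; the geometry, voxel size, and Hounsfield-like values are assumptions for illustration.

        # Illustrative sketch only: voxelize a simple phantom (a water cylinder with a
        # bone-like sphere inside) into a 3D array of Hounsfield-unit-like values.
        # Exporting each axial slice as a DICOM CT image would be a separate step
        # (e.g. with pydicom), omitted here.
        import numpy as np

        def make_phantom(shape=(64, 128, 128), voxel_mm=2.0):
            z, y, x = np.indices(shape).astype(float) * voxel_mm
            cz, cy, cx = [s * voxel_mm / 2.0 for s in shape]
            vol = np.full(shape, -1000.0)                                           # air
            vol[(x - cx) ** 2 + (y - cy) ** 2 < 80.0 ** 2] = 0.0                    # water cylinder
            vol[(x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 < 30.0 ** 2] = 700.0  # bone-like insert
            return vol

        phantom = make_phantom()
        print(phantom.shape, float(phantom.min()), float(phantom.max()))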

  6. Web-based Three-dimensional Virtual Body Structures: W3D-VBS

    PubMed Central

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user’s progress through evaluation tools helps customize lesson plans. A self-guided “virtual tour” of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  7. Web-based three-dimensional Virtual Body Structures: W3D-VBS.

    PubMed

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user's progress through evaluation tools helps customize lesson plans. A self-guided "virtual tour" of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it. PMID:12223495

  8. Novel interactive virtual showcase based on 3D multitouch technology

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike a traditional multitouch system, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing the multi-touch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  9. Coloring 3D line fields using Boy's real projective plane immersion.

    PubMed

    Demiralp, Cağatay; Hughes, John F; Laidlaw, David H

    2009-01-01

    We introduce a new method for coloring 3D line fields and show results from its application in visualizing orientation in DTI brain data sets. The method uses Boy's surface, an immersion of RP2 in 3D. This coloring method is smooth and one-to-one except on a set of measure zero, the double curve of Boy's surface. PMID:19834221
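
    Reproducing the Boy's-surface map itself is beyond a short sketch, but the property it provides, namely that a line direction and its negation receive the same color, can be illustrated with the much simpler absolute-value scheme commonly used for DTI orientation maps. The sketch below shows only that sign-invariance requirement; it is a stand-in, not the smooth Boy's-surface coloring described in the record.

        # Sign-invariant direction-to-color sketch: v and -v map to the same RGB value.
        # This is the simple |x|,|y|,|z| scheme often used in DTI visualization, shown only
        # to illustrate the antipodal-symmetry requirement; it is NOT the Boy's-surface
        # coloring of the record above.
        import numpy as np

        def line_direction_to_rgb(directions):
            """directions: (N, 3) array of line directions -> (N, 3) RGB values in [0, 1]."""
            d = np.asarray(directions, dtype=float)
            d = d / np.linalg.norm(d, axis=1, keepdims=True)
            return np.abs(d)

        dirs = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.3, 0.4, 0.87]])
        print(line_direction_to_rgb(dirs))   # the first two rows are identical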

  10. Learning Relative Motion Concepts in Immersive and Non-Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria

    2013-01-01

    The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop…

  11. Consultation virtual collaborative environment for 3D medicine.

    PubMed

    Krsek, Premysl; Spanel, Michal; Svub, Miroslav; Stancl, Vít; Siler, Ondrej; Sára, Vítezslav

    2008-01-01

    This article focuses on the problems of a consultation virtual collaborative environment designed to support 3D medical applications. The system allows loading CT/MR data from a PACS system, segmentation, and 3D modeling of tissues. It allows remote 3D consultations on the data between technicians and surgeons. The system is designed as a three-layer client-server architecture. Communication between clients and the server is done via the HTTP/HTTPS protocol. Results and tests have confirmed that today's standard network latency and dataflow do not affect the usability of our system. PMID:19162770

  12. Calibrating camera and projector arrays for immersive 3D display

    NASA Astrophysics Data System (ADS)

    Baker, Harlyn; Li, Zeyu; Papadas, Constantin

    2009-02-01

    Advances in building high-performance camera arrays [1, 12] have opened the opportunity - and challenge - of using these devices for autostereoscopic display of live 3D content. Appropriate autostereo display requires calibration of these camera elements and those of the display facility for accurate placement (and perhaps resampling) of the acquired video stream. We present progress in exploiting a new approach to this calibration that capitalizes on high quality homographies between pairs of imagers to develop a global optimal solution delivering epipoles and fundamental matrices simultaneously for the entire system [2]. Adjustment of the determined camera models to deliver minimal vertical misalignment in an epipolar sense is used to permit ganged rectification of the separate streams for transitive positioning in the visual field. Individual homographies [6] are obtained for a projector array that presents the video on a holographically-diffused retroreflective surface for participant autostereo viewing. The camera model adjustment means vertical epipolar disparities of the captured signal are minimized, and the projector calibration means the display will retain these alignments despite projector pose variations. The projector calibration also permits arbitrary alignment shifts to accommodate focus-of-attention vergence, should that information be available.
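
    The calibration above is built from high-quality pairwise homographies between imagers. A minimal, self-contained sketch of estimating one such homography from matched points with OpenCV's RANSAC-based solver follows; the correspondences are synthetic, whereas in the array setting they would come from feature matching between imager pairs.

        # Minimal sketch: estimate a 3x3 homography between two views from point matches
        # using RANSAC. The correspondences are fabricated from a known ground-truth
        # homography purely so the example is self-contained and checkable.
        import numpy as np
        import cv2

        H_true = np.array([[1.02,  0.01,  5.0],
                           [-0.02, 0.98, -3.0],
                           [1e-5,  2e-5,  1.0]])

        pts_a = np.random.uniform(0, 640, size=(50, 1, 2))       # points in view A
        pts_b = cv2.perspectiveTransform(pts_a, H_true)          # corresponding points in view B

        H_est, inliers = cv2.findHomography(pts_a.astype(np.float32),
                                            pts_b.astype(np.float32),
                                            cv2.RANSAC, ransacReprojThreshold=2.0)
        print("estimated homography:\n", H_est)
        print("inliers:", int(inliers.sum()), "of", len(pts_a))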

  13. The SEE Experience: Edutainment in 3D Virtual Worlds.

    ERIC Educational Resources Information Center

    Di Blas, Nicoletta; Paolini, Paolo; Hazan, Susan

    Shared virtual worlds are innovative applications where several users, represented by Avatars, simultaneously access via Internet a 3D space. Users cooperate through interaction with the environment and with each other, manipulating objects and chatting as they go. Apart from in the well documented online action games industry, now often played…

  14. Measuring Knowledge Acquisition in 3D Virtual Learning Environments.

    PubMed

    Nunes, Eunice P dos Santos; Roque, Licínio G; Nunes, Fatima de Lourdes dos Santos

    2016-01-01

    Virtual environments can contribute to the effective learning of various subjects for people of all ages. Consequently, they assist in reducing the cost of maintaining physical structures of teaching, such as laboratories and classrooms. However, the measurement of how learners acquire knowledge in such environments is still incipient in the literature. This article presents a method to evaluate the knowledge acquisition in 3D virtual learning environments (3D VLEs) by using the learner's interactions in the VLE. Three experiments were conducted that demonstrate the viability of using this method and its computational implementation. The results suggest that it is possible to automatically assess learning in predetermined contexts and that some types of user interactions in 3D VLEs are correlated with the user's learning differential. PMID:26915117

  15. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore the 3D compression space from a different angle, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
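
    The first scheme above (color reduction, then zlib over color plus depth) can be sketched in a few lines; the frame size, the bit-depth reduction, and the byte packing are illustrative choices, not the TEEVE implementation.

        # Rough sketch of the record's first scheme: reduce color precision, then compress
        # color + depth together with zlib. Frame size, quantization step, and packing
        # layout are illustrative assumptions.
        import numpy as np
        import zlib

        def compress_3d_frame(color_rgb, depth, color_bits=4):
            """color_rgb: (H, W, 3) uint8, depth: (H, W) uint16 -> compressed bytes."""
            reduced = (color_rgb >> (8 - color_bits)).astype(np.uint8)   # color reduction
            payload = reduced.tobytes() + depth.astype(np.uint16).tobytes()
            return zlib.compress(payload, level=6)

        h, w = 240, 320
        color = np.zeros((h, w, 3), dtype=np.uint8)
        color[..., 0] = np.linspace(0, 255, w, dtype=np.uint8)            # smooth test pattern
        depth = np.linspace(500, 3000, w, dtype=np.uint16) * np.ones((h, 1), dtype=np.uint16)
        blob = compress_3d_frame(color, depth)
        print("raw bytes:", color.nbytes + depth.nbytes, "-> compressed bytes:", len(blob))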

  16. Virtual view adaptation for 3D multiview video streaming

    NASA Astrophysics Data System (ADS)

    Petrovic, Goran; Do, Luat; Zinger, Sveta; de With, Peter H. N.

    2010-02-01

    Virtual views in 3D-TV and multi-view video systems are reconstructed images of the scene generated synthetically from the original views. In this paper, we analyze the performance of streaming virtual views over IP networks with a limited and time-varying available bandwidth. We show that the average video quality perceived by the user can be improved with an adaptive streaming strategy that aims to maximize it. Our adaptive 3D multi-view streaming can provide a quality improvement of 2 dB on average over non-adaptive streaming. We also demonstrate that an optimized virtual view adaptation algorithm needs to be view-dependent and achieves an improvement of up to 0.7 dB. We analyze our adaptation strategies under dynamic available bandwidth in the network.
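
    A view-dependent adaptation of the kind described above ultimately has to pick, per virtual view, a representation that maximizes quality under the available bandwidth. The greedy allocation below is a hedged sketch of that selection step; the rate/quality tables and the bandwidth budget are invented for illustration and are not the paper's algorithm.

        # Hedged sketch of bandwidth-constrained quality adaptation across views: start each
        # view at its lowest representation, then greedily spend remaining bandwidth on the
        # upgrade with the best quality gain per extra kbps. All numbers are invented.
        def allocate(views, budget_kbps):
            choice = {v: 0 for v in views}                       # chosen level index per view
            spent = sum(levels[0][0] for levels in views.values())
            while True:
                best = None
                for v, levels in views.items():
                    i = choice[v]
                    if i + 1 < len(levels):
                        d_rate = levels[i + 1][0] - levels[i][0]
                        d_qual = levels[i + 1][1] - levels[i][1]
                        if spent + d_rate <= budget_kbps:
                            gain = d_qual / d_rate
                            if best is None or gain > best[0]:
                                best = (gain, v, d_rate)
                if best is None:
                    return choice, spent
                _, v, d_rate = best
                choice[v] += 1
                spent += d_rate

        # (rate_kbps, quality_dB) options per virtual view (purely illustrative values).
        views = {"view_left":  [(300, 32.0), (600, 35.0), (1200, 37.0)],
                 "view_right": [(300, 31.0), (600, 34.5), (1200, 36.0)]}
        print(allocate(views, budget_kbps=1500))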

  17. Re-Dimensional Thinking in Earth Science: From 3-D Virtual Reality Panoramas to 2-D Contour Maps

    ERIC Educational Resources Information Center

    Park, John; Carter, Glenda; Butler, Susan; Slykhuis, David; Reid-Griffin, Angelia

    2008-01-01

    This study examines the relationship of gender and spatial perception on student interactivity with contour maps and non-immersive virtual reality. Eighteen eighth-grade students elected to participate in a six-week activity-based course called "3-D GeoMapping." The course included nine days of activities related to topographic mapping. At the end…

  18. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen. PMID:23938645

  19. 3-D Sound for Virtual Reality and Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  20. Building intuitive 3D interfaces for virtual reality systems

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. This work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. In order to establish this, a new user interface was created that applied various understood principles of interface design. A user study was then performed in which it was compared with an earlier interface for a series of medical visualization tasks.

  1. Immersive Training Systems: Virtual Reality and Education and Training.

    ERIC Educational Resources Information Center

    Psotka, Joseph

    1995-01-01

    Describes virtual reality (VR) technology and VR research on education and training. Focuses on immersion as the key added value of VR, analyzes cognitive variables connected to immersion, how it is generated in synthetic environments and its benefits. Discusses value of tracked, immersive visual displays over nonimmersive simulations. Contains 78…

  2. Machinima Interventions: Innovative Approaches to Immersive Virtual World Curriculum Integration

    ERIC Educational Resources Information Center

    Middleton, Andrew John; Mather, Richard

    2008-01-01

    The educational value of Immersive Virtual Worlds (IVWs) seems to be in their social immersive qualities and as an accessible simulation technology. In contrast to these synchronous applications this paper discusses the use of educational machinima developed in IVW virtual film sets. It also introduces the concept of media intervention, proposing…

  3. Virtually Ostracized: Studying Ostracism in Immersive Virtual Environments

    PubMed Central

    Wesselmann, Eric D.; Law, Alvin Ty; Williams, Kipling D.

    2012-01-01

    Abstract Electronic-based communication (such as Immersive Virtual Environments; IVEs) may offer new ways of satisfying the need for social connection, but they also provide ways this need can be thwarted. Ostracism, being ignored and excluded, is a common social experience that threatens fundamental human needs (i.e., belonging, control, self-esteem, and meaningful existence). Previous ostracism research has made use of a variety of paradigms, including minimal electronic-based interactions (e.g., Cyberball) and communication (e.g., chatrooms and Short Message Services). These paradigms, however, lack the mundane realism that many IVEs now offer. Further, IVE paradigms designed to measure ostracism may allow researchers to test more nuanced hypotheses about the effects of ostracism. We created an IVE in which ostracism could be manipulated experimentally, emulating a previously validated minimal ostracism paradigm. We found that participants who were ostracized in this IVE experienced the same negative effects demonstrated in other ostracism paradigms, providing, to our knowledge, the first evidence of the negative effects of ostracism in virtual environments. Though further research directly exploring these effects in online virtual environments is needed, this research suggests that individuals encountering ostracism in other virtual environments (such as massively multiplayer online role playing games; MMORPGs) may experience negative effects similar to those of being ostracized in real life. This possibility may have serious implications for individuals who are marginalized in their real life and turn to IVEs to satisfy their need for social connection. PMID:22897472

  4. Gravity and spatial orientation in virtual 3D-mazes.

    PubMed

    Vidal, Manuel; Lipshits, Mark; McIntyre, Joseph; Berthoz, Alain

    2003-01-01

    In order to bring new insights into the processing of 3D spatial information, we conducted experiments on the capacity of human subjects to memorize 3D-structured environments, such as buildings with several floors or the potentially complex 3D structure of an orbital space station. We had subjects move passively, in one of two different exploration modes, through a visual virtual environment that consisted of a series of connected tunnels. In upright displacement, self-rotation when going around corners in the tunnels was limited to yaw rotations. For horizontal translations, subjects faced forward in the direction of motion. When moving up or down through vertical segments of the 3D tunnels, however, subjects faced the tunnel wall, remaining upright as if moving up and down in a glass elevator. In the unconstrained displacement mode, subjects would appear to climb or dive face-forward when moving vertically; thus, in this mode subjects could experience visual flow consistent with rotations about any of the 3 canonical axes. In a previous experiment, subjects were asked to determine whether a static, outside view of a test tunnel corresponded or not to the tunnel through which they had just passed. Results showed that performance was better on this task for the upright than for the unconstrained displacement mode, i.e. when subjects remained "upright" with respect to the virtual environment as defined by the subject's posture in the first segment. This effect suggests that gravity may provide a key reference frame used in the shift between egocentric and allocentric representations of the 3D virtual world. To check whether it is the polarizing effect of gravity that leads to the favoring of the upright displacement mode, the experimental paradigm was adapted for orbital flight and performed by cosmonauts onboard the International Space Station. For these flight experiments the previous recognition task was replaced by a computerized reconstruction task, which proved

  5. Heard on The Street: GIS-Guided Immersive 3D Models as an Augmented Reality for Team Collaboration

    NASA Astrophysics Data System (ADS)

    Quinn, B. B.

    2007-12-01

    Grid computing can be configured to run physics simulations for spatially contiguous virtual 3D model spaces. Each cell is run by a single processor core simulating 1/16 square kilometer of surface and can contain up to 15,000 objects. In this work, a model of one urban block was constructed in the commercial 3D online digital world Second Life http://secondlife.com to prove the concept that GIS data can guide the building of an accurate in-world model. Second Life simulators support terrain modeling at two-meter grid intervals. Access to the Second Life grid is worldwide if connections to the US-based servers are possible. This immersive 3D model allows visitors to explore the space at will, with physics simulated for object collisions, gravity, and wind forces about 40 times per second. Visitors view this world as renderings by their 3-D display card of graphic objects and raster textures that are streamed from the simulator grid to the Second Life client, based on that client's instantaneous field of view. Visitors to immersive 3D models experience a virtual world that engages their innate abilities to relate to the real immersive 3D world in which humans have evolved. These abilities enable far more complex and dynamic 3D environments to be quickly and accurately comprehended by more visitors than most non-immersive 3D environments. Objects of interest at ground surface and below can be walked around, possibly entered, viewed at arm's length or flown over at 500 meters above. Videos of renderings have been recorded (as machinima) to share a visit as part of public presentations. Key to this experience is that dozens of visitors can experience the model at the same time, each exploring it at will and seeing (if not colliding with) one another---like twenty geology students on a virtual outcrop, where each student might fly if they chose to. This work modeled the downtown Berkeley, CA, transit station in the Second Life region "Gualala" near [170, 35, 35

  6. From Multi-User Virtual Environment to 3D Virtual Learning Environment

    ERIC Educational Resources Information Center

    Livingstone, Daniel; Kemp, Jeremy; Edgar, Edmund

    2008-01-01

    While digital virtual worlds have been used in education for a number of years, advances in the capabilities and spread of technology have fed a recent boom in interest in massively multi-user 3D virtual worlds for entertainment, and this in turn has led to a surge of interest in their educational applications. In this paper we briefly review the…

  7. The virtual reality 3D city of Ningbo

    NASA Astrophysics Data System (ADS)

    Chen, Weimin; Wu, Dun

    2009-09-01

    In 2005, the Ningbo Design Research Institute of Mapping & Surveying started the development of concepts and an implementation of the Virtual Reality Ningbo System (VRNS). VRNS is being developed under the digital city technological framework and is well supported by computing advances, space technologies, and commercial innovations. It has become the best solution for integrating, managing, presenting, and distributing complex city information. VRNS is not only a 3D-GIS launch project but also a technology innovation. The traditional domain of surveying and mapping has changed greatly in Ningbo. Geo-information systems are developing towards more realistic, three-dimensional, and Service-Oriented-Architecture-based systems. VRNS uses technologies such as 3D modeling, user interface design, view scene modeling, real-time rendering, and interactive roaming in a virtual environment. Two applications of VRNS already in use are city planning and high-rise buildings' security management. The final purpose is to develop VRNS into a powerful public information platform, so that heterogeneous city information resources can share one single platform.

  8. The virtual reality 3D city of Ningbo

    NASA Astrophysics Data System (ADS)

    Chen, Weimin; Wu, Dun

    2010-11-01

    In 2005, the Ningbo Design Research Institute of Mapping & Surveying started the development of concepts and an implementation of the Virtual Reality Ningbo System (VRNS). VRNS is being developed under the digital city technological framework and is well supported by computing advances, space technologies, and commercial innovations. It has become the best solution for integrating, managing, presenting, and distributing complex city information. VRNS is not only a 3D-GIS launch project but also a technology innovation. The traditional domain of surveying and mapping has changed greatly in Ningbo. Geo-information systems are developing towards more realistic, three-dimensional, and Service-Oriented-Architecture-based systems. VRNS uses technologies such as 3D modeling, user interface design, view scene modeling, real-time rendering, and interactive roaming in a virtual environment. Two applications of VRNS already in use are city planning and high-rise buildings' security management. The final purpose is to develop VRNS into a powerful public information platform, so that heterogeneous city information resources can share one single platform.

  9. 3D Reconstruction of virtual colon structures from colonoscopy images.

    PubMed

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  10. Immersive virtual environments in cue exposure.

    PubMed

    Kuntze, M F; Stoermer, R; Mager, R; Roessler, A; Mueller-Spahn, F; Bullinger, A H

    2001-08-01

    Cue reactivity to drug-related stimuli is a frequently observed phenomenon in drug addiction. Cue reactivity refers to a classical conditioned response pattern that occurs when an addicted subject is exposed to drug-related stimuli. This response consists of physiological and cognitive reactions. Craving, a subjective desire to use the drug of choice, is believed to play an important role in the occurrence of relapse in the natural setting. Besides craving, other subjective cue-elicited reactions have been reported, including withdrawal symptoms, drug-agonistic effects, and mood swings. Physiological reactions that have been investigated include skin conductance, heart rate, salivation, and body temperature. Conditioned reactivity to cues is an important factor in addiction to alcohol, nicotine, opiates, and cocaine. Cue exposure treatment (CET) refers to a manualized, repeated exposure to drug-related cues, aimed at the reduction of cue reactivity by extinction. In CET, different stimuli are presented, for example, slides, video tapes, pictures, or paraphernalia in nonrealistic, experimental settings. Most often assessments consist in subjective ratings by craving scales. Our pilot study will show that immersive virtual reality (IVR) is as good or even better in eliciting subjective and physiological craving symptoms as classical devices. PMID:11708729

  11. The Effects of Instructor-Avatar Immediacy in Second Life, an Immersive and Interactive Three-Dimensional Virtual Environment

    ERIC Educational Resources Information Center

    Lawless-Reljic, Sabine Karine

    2010-01-01

    Growing interest of educational institutions in desktop 3D graphic virtual environments for hybrid and distance education prompts questions on the efficacy of such tools. Virtual worlds, such as Second Life[R], enable computer-mediated immersion and interactions encompassing multimodal communication channels including audio, video, and text…

  12. A Voice and Mouse Input Interface for 3D Virtual Environments

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Bryson, Steve T.

    2003-01-01

    There have been many success stories about how 3D input devices can be fully integrated into an immersive virtual environment. Electromagnetic trackers, optical trackers, gloves, and flying mice are just some of these input devices. Though we can use existing 3D input devices that are commonly used for VR applications, several factors prevent us from choosing these input devices for our applications. One main factor is that most of these tracking devices are not suitable for prolonged use due to the human fatigue associated with using them. A second factor is that many of them would occupy additional office space. Another factor is that many of the 3D input devices are expensive due to the unusual hardware that is required. For our VR applications, we want a user interface that works naturally with standard equipment. In this paper, we demonstrate applications of our proposed multimodal interface using a 3D dome display. We also show that effective data analysis can be achieved while the scientists view their data rendered inside the dome display and perform user interactions simply using mouse and voice input. Though the spherical coordinate grid seems ideal for interaction using a 3D dome display, we can also use other, non-spherical grids.
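
    As a purely hypothetical illustration of the mouse half of such a multimodal interface on a dome, the sketch below maps a normalized 2D mouse position to azimuth and elevation on a spherical grid and pairs the result with a recognized voice command; the coordinate conventions and the command handling are assumptions, not the authors' design.

        # Hypothetical sketch: map a normalized mouse position (u, v) in [0, 1]^2 to a
        # direction on a hemispherical dome, then pair it with a recognized voice command.
        # Azimuth/elevation ranges and axis conventions are illustrative assumptions.
        import math

        def mouse_to_direction(u, v):
            azimuth = (u - 0.5) * 2.0 * math.pi        # -pi .. pi around the dome
            elevation = v * (math.pi / 2.0)            # 0 (horizon) .. pi/2 (zenith)
            x = math.cos(elevation) * math.sin(azimuth)
            y = math.sin(elevation)
            z = -math.cos(elevation) * math.cos(azimuth)
            return (x, y, z)

        def handle(voice_command, mouse_uv):
            direction = mouse_to_direction(*mouse_uv)
            print(voice_command, "at direction", direction)

        handle("select", (0.75, 0.3))   # e.g. the word "select" spoken while pointing with the mouse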

  13. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    ERIC Educational Resources Information Center

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from web3d technologies to create courses with interactive 3d materials. There are many open source and commercial products offering 3d technologies over the web…

  14. The ALIVE Project: Astronomy Learning in Immersive Virtual Environments

    NASA Astrophysics Data System (ADS)

    Yu, K. C.; Sahami, K.; Denn, G.

    2008-06-01

    The Astronomy Learning in Immersive Virtual Environments (ALIVE) project seeks to discover learning modes and optimal teaching strategies using immersive virtual environments (VEs). VEs are computer-generated, three-dimensional environments that can be navigated to provide multiple perspectives. Immersive VEs provide the additional benefit of surrounding a viewer with the simulated reality. ALIVE evaluates the incorporation of an interactive, real-time "virtual universe" into formal college astronomy education. In the experiment, pre-course, post-course, and curriculum tests will be used to determine the efficacy of immersive visualizations presented in a digital planetarium versus the same visual simulations in the non-immersive setting of a normal classroom, as well as a control case using traditional classroom multimedia. To normalize for inter-instructor variability, each ALIVE instructor will teach at least one of each class in each of the three test groups.

  15. Second Life, a 3-D Animated Virtual World: An Alternative Platform for (Art) Education

    ERIC Educational Resources Information Center

    Han, Hsiao-Cheng

    2011-01-01

    3-D animated virtual worlds are no longer only for gaming. With the advance of technology, animated virtual worlds not only are found on every computer, but also connect users with the internet. Today, virtual worlds are created not only by companies, but also through the collaboration of users. Online 3-D animated virtual worlds provide a new…

  16. Virtual Reality--Learning by Immersion.

    ERIC Educational Resources Information Center

    Dunning, Jeremy

    1998-01-01

    Discusses the use of virtual reality in educational software. Topics include CAVE (Computer-Assisted Virtual Environments); cost-effective virtual environment tools including QTVR (Quick Time Virtual Reality); interactive exercises; educational criteria for technology-based educational tools; and examples of screen displays. (LRW)

  17. Acoustic simulation in realistic 3D virtual scenes

    NASA Astrophysics Data System (ADS)

    Gozard, Patrick; Le Goff, Alain; Naz, Pierre; Cathala, Thierry; Latger, Jean

    2003-09-01

    The simulation workshop CHORALE, developed in collaboration with the OKTAL SE company for the French MoD, is used by government services and industrial companies for weapon system validation and qualification trials in the infrared domain. The main operational reference for CHORALE is the assessment of the infrared guidance system of the French version of the Storm Shadow missile, called Scalp. The use of the CHORALE workshop is now being extended to the acoustic domain. The main objective is the simulation of the detection of moving vehicles in realistic 3D virtual scenes. This article briefly describes the acoustic model in CHORALE. The 3D scene is described by a set of polygons. Each polygon is characterized by its acoustic resistivity or its complex impedance. Sound sources are associated with moving vehicles and are characterized by their spectra and directivities. A microphone sensor is defined by its position, its frequency band, and its sensitivity. The purpose of the acoustic simulation is to calculate the incoming acoustic pressure on microphone sensors. CHORALE is based on a generic ray tracing kernel. This kernel possesses original capabilities: computation time is nearly independent of scene complexity, especially the number of polygons; databases are enhanced with precise physical data; and special antialiasing mechanisms have been developed that make it possible to manage very accurate details. The ray tracer takes into account the wave's geometrical divergence and the atmospheric transmission. Sound wave refraction is simulated, and rays cast in the 3D scene are curved according to the air temperature gradient. Finally, sound diffraction by edges (hill, wall, ...) is also taken into account.
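
    The two basic propagation losses mentioned above, geometrical divergence and atmospheric transmission, combine for a point source into spherical spreading plus a distance-proportional absorption term. The sketch below illustrates that combination; the absorption coefficient is an arbitrary illustrative constant, not a CHORALE parameter.

        # Sketch of point-source attenuation from the two effects named in the record:
        # spherical (geometrical) divergence and atmospheric absorption. The absorption
        # coefficient is an illustrative constant, not a CHORALE value.
        import math

        def received_level_db(source_level_db, distance_m, alpha_db_per_m=0.005, ref_m=1.0):
            """Level at `distance_m` from a point source whose level is given at `ref_m`."""
            spreading = 20.0 * math.log10(distance_m / ref_m)     # geometrical divergence
            absorption = alpha_db_per_m * (distance_m - ref_m)    # atmospheric transmission
            return source_level_db - spreading - absorption

        for d in (10, 100, 1000):
            print(d, "m ->", round(received_level_db(100.0, d), 1), "dB")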

  18. 3D virtual colonoscopy with real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Wan, Ming; Li, Wei J.; Kreeger, Kevin; Bitter, Ingmar; Kaufman, Arie E.; Liang, Zhengrong; Chen, Dongqing; Wax, Mark R.

    2000-04-01

    In our previous work, we developed a virtual colonoscopy system on a high-end 16-processor SGI Challenge with an expensive hardware graphics accelerator. The goal of this work is to port the system to a low-cost PC in order to increase its availability for mass screening. Recently, Mitsubishi Electric has developed a volume-rendering PC board, called VolumePro, which includes 128 MB of RAM and the vg500 rendering chip. The vg500 chip, based on Cube-4 technology, can render a 256³ volume at 30 frames per second. High image quality of volume rendering inside the colon is guaranteed by the full lighting model and 3D interpolation supported by the vg500 chip. However, the VolumePro board lacks some features required by our interactive colon navigation. First, VolumePro currently does not support perspective projection, which is paramount for interior colon navigation. Second, the patient colon data is usually much larger than 256³ and cannot be rendered in real time. In this paper, we present our solutions to these problems, including simulated perspective projection and axis-aligned boxing techniques, and demonstrate the high performance of our virtual colonoscopy system on low-cost PCs.

  19. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.

  20. Student Responses to Their Immersion in a Virtual Environment.

    ERIC Educational Resources Information Center

    Taylor, Wayne

    Undertaken in conjunction with a larger study that investigated the educational efficacy of students building their own virtual worlds, this study measures the reactions of students in grades 4-12 to the experience of being immersed in virtual reality (VR). The study investigated the sense of "presence" experienced by the students, the extent to…

  1. The Components of Effective Teacher Training in the Use of Three-Dimensional Immersive Virtual Worlds for Learning and Instruction Purposes: A Literature Review

    ERIC Educational Resources Information Center

    Nussli, Natalie; Oh, Kevin

    2014-01-01

    The overarching question that guides this review is to identify the key components of effective teacher training in virtual schooling, with a focus on three-dimensional (3D) immersive virtual worlds (IVWs). The process of identifying the essential components of effective teacher training in the use of 3D IVWs will be described step-by-step. First,…

  2. Participatory Gis: Experimentations for a 3d Social Virtual Globe

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Zamboni, G.

    2013-08-01

    The dawn of GeoWeb 2.0, the geographic extension of Web 2.0, has opened new possibilities in terms of online dissemination and sharing of geospatial contents, thus laying the foundations for a fruitful development of Participatory GIS (PGIS). The purpose of the study is to investigate the extension of PGIS applications, which are quite mature in the traditional bi-dimensional framework, up to the third dimension. In more detail, the system should couple powerful 3D visualization with increased public participation by means of a tool allowing data collection from mobile devices (e.g. smartphones and tablets). The PGIS application, built using the open source NASA World Wind virtual globe, is focussed on the cultural and tourism heritage of Como city, located in Northern Italy. An authentication mechanism was implemented, which allows users to create and manage customized projects through cartographic mash-ups of Web Map Service (WMS) layers. Saved projects populate a catalogue which is available to the entire community. Together with historical maps and the current cartography of the city, the system is also able to manage geo-tagged multimedia data, which come from user field surveys performed through mobile devices and report POIs (Points Of Interest). Each logged-in user can then contribute to POI characterization by adding textual and multimedia information (e.g. images, audio and video) directly on the globe. All in all, the resulting application allows users to create and share contributions as usually happens on social platforms, additionally providing a realistic 3D representation enhancing the expressive power of the data.
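
    The cartographic mash-ups above rely on standard WMS GetMap requests. The sketch below assembles such a request; the server URL and layer name are hypothetical placeholders, not endpoints from the project.

        # Sketch of a standard WMS 1.1.1 GetMap request; the base URL and layer
        # name below are hypothetical placeholders, not the project's services.
        from urllib.parse import urlencode

        def getmap_url(base_url, layer, bbox, width=512, height=512,
                       srs="EPSG:4326", fmt="image/png"):
            params = {
                "SERVICE": "WMS",
                "VERSION": "1.1.1",
                "REQUEST": "GetMap",
                "LAYERS": layer,
                "STYLES": "",
                "SRS": srs,                              # WMS 1.3.0 uses CRS instead
                "BBOX": ",".join(str(c) for c in bbox),  # minx,miny,maxx,maxy
                "WIDTH": width,
                "HEIGHT": height,
                "FORMAT": fmt,
                "TRANSPARENT": "TRUE",
            }
            return base_url + "?" + urlencode(params)

        # Hypothetical historical-map layer over the Como area
        url = getmap_url("https://example.org/geoserver/wms", "como:historical_map",
                         bbox=(9.05, 45.79, 9.11, 45.83))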

  3. Going Virtual… or Not: Development and Testing of a 3D Virtual Astronomy Environment

    NASA Astrophysics Data System (ADS)

    Ruzhitskaya, L.; Speck, A.; Ding, N.; Baldridge, S.; Witzig, S.; Laffey, J.

    2013-04-01

    We present our preliminary results of a pilot study of students' knowledge transfer of an astronomy concept into a new environment. We also share our discoveries about which aspects of a 3D environment students consider motivational and which discouraging for their learning. This study was conducted among 64 non-science-major students enrolled in an astronomy laboratory course. During the course, students learned the concept and applications of Kepler's laws using a 2D interactive environment. Later in the semester, the students were placed in a 3D environment in which they were asked to conduct observations and to answer a set of questions pertaining to Kepler's laws of planetary motion. In this study, we were interested in observing, scrutinizing, and assessing students' behavior: from the choices they made while creating their avatars (virtual representations), to the tools they chose to use, to their navigational patterns, to their levels of discourse in the environment. These helped us to identify what features of the 3D environment our participants found helpful and interesting and what tools created unnecessary clutter and distraction. The students' social behavior patterns in the virtual environment, together with their answers to the questions, helped us to determine how well they understood Kepler's laws, how well they could transfer the concepts to a new situation, and at what point a motivational tool such as a 3D environment becomes a disruption to constructive learning. Our findings confirmed that students construct deeper knowledge of a concept when they are fully immersed in the environment.

  4. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  5. Performance of dental students versus prosthodontics residents on a 3D immersive haptic simulator.

    PubMed

    Eve, Elizabeth J; Koo, Samuel; Alshihri, Abdulmonem A; Cormier, Jeremy; Kozhenikov, Maria; Donoff, R Bruce; Karimbux, Nadeem Y

    2014-04-01

    This study evaluated the performance of dental students versus prosthodontics residents on a simulated caries removal exercise using a newly designed, 3D immersive haptic simulator. The intent of this study was to provide an initial assessment of the simulator's construct validity, which in the context of this experiment was defined as its ability to detect a statistically significant performance difference between novice dental students (n=12) and experienced prosthodontics residents (n=14). Both groups received equivalent calibration training on the simulator and repeated the same caries removal exercise three times. Novice and experienced subjects' average performance differed significantly on the caries removal exercise with respect to the percentage of carious lesion removed and volume of surrounding sound tooth structure removed (p<0.05). Experienced subjects removed a greater portion of the carious lesion, but also a greater volume of the surrounding tooth structure. Efficiency, defined as percentage of carious lesion removed over drilling time, improved significantly over the course of the experiment for both novice and experienced subjects (p<0.001). Within the limitations of this study, experienced subjects removed a greater portion of carious lesion on a 3D immersive haptic simulator. These results are a first step in establishing the validity of this device. PMID:24706694

  6. The cranial nerve skywalk: A 3D tutorial of cranial nerves in a virtual platform.

    PubMed

    Richardson-Hatcher, April; Hazzard, Matthew; Ramirez-Yanez, German

    2014-01-01

    Visualization of the complex courses of the cranial nerves by students in the health-related professions is challenging through either diagrams in books or plastic models in the gross laboratory. Furthermore, dissection of the cranial nerves in the gross laboratory is an extremely meticulous task. Teaching and learning the cranial nerve pathways is difficult using two-dimensional (2D) illustrations alone. Three-dimensional (3D) models aid the teacher in describing intricate and complex anatomical structures and help students visualize them. The study of the cranial nerves can be supplemented with 3D, which permits the students to fully visualize their distribution within the craniofacial complex. This article describes the construction and usage of a virtual anatomy platform in Second Life™, which contains 3D models of the cranial nerves III, V, VII, and IX. The Cranial Nerve Skywalk features select cranial nerves and the associated autonomic pathways in an immersive online environment. This teaching supplement was introduced to groups of pre-healthcare professional students in gross anatomy courses at both institutions and student feedback is included. PMID:24678025

  7. Liquid immersion thermal crosslinking of 3D polymer nanopatterns for direct carbonisation with high structural integrity.

    PubMed

    Kang, Da-Young; Kim, Cheolho; Park, Gyurim; Moon, Jun Hyuk

    2015-01-01

    The direct pyrolytic carbonisation of polymer patterns has attracted interest for its use in obtaining carbon materials. In the case of carbonisation of nanopatterned polymers, the polymer flow and subsequent pattern change may occur in order to relieve their high surface energies. Here, we demonstrated that liquid immersion thermal crosslinking of polymer nanopatterns effectively enhanced the thermal resistance and maintained the structure integrity during the heat treatment. We employed the liquid immersion thermal crosslinking for 3D porous SU8 photoresist nanopatterns and successfully converted them to carbon nanopatterns while maintaining their porous features. The thermal crosslinking reaction and carbonisation of SU8 nanopatterns were characterised. The micro-crystallinity of the SU8-derived carbon nanopatterns was also characterised. The liquid immersion heat treatment can be extended to the carbonisation of various polymer or photoresist nanopatterns and also provide a facile way to control the surface energy of polymer nanopatterns for various purposes, for example, to block copolymer or surfactant self-assemblies. PMID:26677949

  8. Liquid immersion thermal crosslinking of 3D polymer nanopatterns for direct carbonisation with high structural integrity

    NASA Astrophysics Data System (ADS)

    Kang, Da-Young; Kim, Cheolho; Park, Gyurim; Moon, Jun Hyuk

    2015-12-01

    The direct pyrolytic carbonisation of polymer patterns has attracted interest for its use in obtaining carbon materials. In the case of carbonisation of nanopatterned polymers, the polymer flow and subsequent pattern change may occur in order to relieve their high surface energies. Here, we demonstrated that liquid immersion thermal crosslinking of polymer nanopatterns effectively enhanced the thermal resistance and maintained the structure integrity during the heat treatment. We employed the liquid immersion thermal crosslinking for 3D porous SU8 photoresist nanopatterns and successfully converted them to carbon nanopatterns while maintaining their porous features. The thermal crosslinking reaction and carbonisation of SU8 nanopatterns were characterised. The micro-crystallinity of the SU8-derived carbon nanopatterns was also characterised. The liquid immersion heat treatment can be extended to the carbonisation of various polymer or photoresist nanopatterns and also provide a facile way to control the surface energy of polymer nanopatterns for various purposes, for example, to block copolymer or surfactant self-assemblies.

  9. Liquid immersion thermal crosslinking of 3D polymer nanopatterns for direct carbonisation with high structural integrity

    PubMed Central

    Kang, Da-Young; Kim, Cheolho; Park, Gyurim; Moon, Jun Hyuk

    2015-01-01

    The direct pyrolytic carbonisation of polymer patterns has attracted interest for its use in obtaining carbon materials. In the case of carbonisation of nanopatterned polymers, the polymer flow and subsequent pattern change may occur in order to relieve their high surface energies. Here, we demonstrated that liquid immersion thermal crosslinking of polymer nanopatterns effectively enhanced the thermal resistance and maintained the structure integrity during the heat treatment. We employed the liquid immersion thermal crosslinking for 3D porous SU8 photoresist nanopatterns and successfully converted them to carbon nanopatterns while maintaining their porous features. The thermal crosslinking reaction and carbonisation of SU8 nanopatterns were characterised. The micro-crystallinity of the SU8-derived carbon nanopatterns was also characterised. The liquid immersion heat treatment can be extended to the carbonisation of various polymer or photoresist nanopatterns and also provide a facile way to control the surface energy of polymer nanopatterns for various purposes, for example, to block copolymer or surfactant self-assemblies. PMID:26677949

  10. EXPLORING ENVIRONMENTAL DATA IN A HIGHLY IMMERSIVE VIRTUAL REALITY ENVIRONMENT

    EPA Science Inventory

    Geography inherently fills a 3D space and yet we struggle with displaying geography using, primarily, 2D display devices. Virtual environments offer a more realistically-dimensioned display space and this is being realized in the expanding area of research on 3D Geographic Infor...

  11. Learning Relative Motion Concepts in Immersive and Non-immersive Virtual Environments

    NASA Astrophysics Data System (ADS)

    Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria

    2013-12-01

    The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop virtual environment (DVE) conditions. Our results show that after the simulation activities, both IVE and DVE groups exhibited a significant shift toward a scientific understanding in their conceptual models and epistemological beliefs about the nature of relative motion, and also a significant improvement on relative motion problem-solving tests. In addition, we analyzed students' performance on one-dimensional and two-dimensional questions in the relative motion problem-solving test separately and found that after training in the simulation, the IVE group performed significantly better than the DVE group on solving two-dimensional relative motion problems. We suggest that egocentric encoding of the scene in IVE (where the learner constitutes a part of a scene they are immersed in), as compared to allocentric encoding on a computer screen in DVE (where the learner is looking at the scene from "outside"), is more beneficial than DVE for studying more complex (two-dimensional) relative motion problems. Overall, our findings suggest that such aspects of virtual realities as immersivity, first-hand experience, and the possibility of changing different frames of reference can facilitate understanding abstract scientific phenomena and help in displacing intuitive misconceptions with more accurate mental models.

  12. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX and HiRISE instruments allowed the computation of Digital Elevation Models with a resolution from hundreds of meters up to 1 meter per pixel, and corresponding orthoimages with a resolution from a few hundred meters up to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The actual rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.
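
    Flythrough rendering of this kind typically starts by turning a DEM grid into a textured mesh. The sketch below is illustrative only (it is not the VR-Planets pipeline): it triangulates a heightmap and attaches UV coordinates so the corresponding orthoimage can be draped over the terrain.

        # Illustrative sketch, not the VR-Planets pipeline: triangulate a DEM grid
        # and attach UV coordinates for draping the orthoimage.
        import numpy as np

        def dem_to_mesh(dem, ground_sampling):
            """dem: 2-D array of elevations (m); ground_sampling: metres per cell."""
            rows, cols = dem.shape
            jj, ii = np.meshgrid(np.arange(cols), np.arange(rows))
            vertices = np.column_stack([(jj * ground_sampling).ravel(),
                                        (ii * ground_sampling).ravel(),
                                        dem.ravel()])
            uvs = np.column_stack([(jj / (cols - 1)).ravel(),
                                   (ii / (rows - 1)).ravel()])
            faces = []
            for r in range(rows - 1):
                for c in range(cols - 1):
                    v0 = r * cols + c
                    v1, v2, v3 = v0 + 1, v0 + cols, v0 + cols + 1
                    faces.append((v0, v2, v1))          # two triangles per grid cell
                    faces.append((v1, v2, v3))
            return vertices, uvs, np.array(faces)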

  13. iVirtualWorld: A Domain-Oriented End-User Development Environment for Building 3D Virtual Chemistry Experiments

    ERIC Educational Resources Information Center

    Zhong, Ying

    2013-01-01

    Virtual worlds are well-suited for building virtual laboratories for educational purposes to complement hands-on physical laboratories. However, educators may face technical challenges because developing virtual worlds requires skills in programming and 3D design. Current virtual world building tools are developed for users who have programming…

  14. Integration of the virtual 3D model of a control system with the virtual controller

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of different components of a constructed object. It involves the need to integrate different virtual objects in order to simulate the whole investigated technical system. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created. For this purpose, software of the VR (Virtual Reality) class was applied. In the elaborated interactive application, adequate procedures were created for controlling the drive system of translatory motion, the drive system of rotary motion, and the drive system of the manipulator. Additionally, a procedure was created for turning on and off the output crushing head mounted on the last element of the manipulator. In the elaborated interactive application, procedures were also established for receiving input data from external software on the basis of dynamic data exchange (DDE), which allow controlling the actuators of the particular control systems of the considered machine. In the next stage of work, the program for the virtual controller was created in the ladder diagram (LD) language. The control program was developed on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine for trenchless work with the virtual controller is an application written in a high-level language (Visual Basic). In the developed application, procedures were created that are responsible for collecting data from the virtual controller running in simulation mode and transferring them to the interactive application, in which the operation of the virtual model is verified.
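
    The integrating application described above essentially polls the simulated controller and forwards its outputs to the visualization procedures. A minimal sketch of that bridging pattern follows; read_controller_outputs() and set_actuator() are hypothetical stand-ins for the DDE endpoints of the virtual controller and the VR application, not functions from the paper.

        # Sketch of the bridging pattern only; the two callables are hypothetical
        # stand-ins for the DDE links to the virtual controller and the VR model.
        import time

        ACTUATORS = ["translatory_drive", "rotary_drive", "manipulator_drive", "crushing_head"]

        def bridge_loop(read_controller_outputs, set_actuator, period_s=0.05):
            """Poll the simulated controller and forward outputs to the 3D model."""
            while True:
                outputs = read_controller_outputs()        # e.g. {"rotary_drive": 1, ...}
                for name in ACTUATORS:
                    if name in outputs:
                        set_actuator(name, outputs[name])  # drives the matching VR procedure
                time.sleep(period_s)                       # fixed polling period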

  15. CaveCAD: a tool for architectural design in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Schulze, Jürgen P.; Hughes, Cathleen E.; Zhang, Lelin; Edelstein, Eve; Macagno, Eduardo

    2014-02-01

    Existing 3D modeling tools were designed to run on desktop computers with monitor, keyboard and mouse. To make 3D modeling possible with mouse and keyboard, many 3D interactions, such as point placement or translations of geometry, had to be mapped to the 2D parameter space of the mouse, possibly supported by mouse buttons or keyboard keys. We hypothesize that, had the designers of these existing systems been able to assume immersive virtual reality systems as their target platforms, they would have designed 3D interactions much more intuitively. In collaboration with professional architects, we created a simple but complete 3D modeling tool for virtual environments from the ground up and use direct 3D interaction wherever possible and adequate. In this publication, we present our approaches to interactions for typical 3D modeling functions, such as geometry creation, modification of existing geometry, and assignment of surface materials. We also discuss preliminary user experiences with this system.

  16. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    Ongoing work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  17. Curvilinear Immersed Boundary Method for Simulating Fluid Structure Interaction with Complex 3D Rigid Bodies.

    PubMed

    Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis

    2008-08-10

    The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782-1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken's acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the FSI
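
    The combination of strong coupling, under-relaxation and Aitken acceleration mentioned above is usually written as a dynamic relaxation of the partitioned fixed-point iteration. The sketch below shows the standard dynamic Aitken update for the interface displacement; it is a generic formulation, not necessarily the exact variant implemented in the CURVIB solver.

        # Generic dynamic Aitken under-relaxation for a partitioned FSI fixed-point
        # iteration u_{k+1} = S(F(u_k)); fluid_then_structure is a placeholder for
        # one fluid solve followed by one structure solve on the interface.
        import numpy as np

        def aitken_fsi_step(u, fluid_then_structure, n_iter=50, tol=1e-8, omega0=0.4):
            omega, r_prev = omega0, None
            for _ in range(n_iter):
                u_tilde = fluid_then_structure(u)
                r = u_tilde - u                        # fixed-point residual
                if np.linalg.norm(r) < tol:
                    return u_tilde
                if r_prev is not None:
                    dr = r - r_prev
                    omega = -omega * float(r_prev @ dr) / float(dr @ dr)  # Aitken factor
                u = u + omega * r                      # relaxed interface update
                r_prev = r
            return u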

  18. Curvilinear Immersed Boundary Method for Simulating Fluid Structure Interaction with Complex 3D Rigid Bodies

    PubMed Central

    Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis

    2010-01-01

    The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782–1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken’s acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. It is shown that the ratio of the added mass to the mass of the structure as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid determine the stability and convergence of the

  19. Immersive Virtual Worlds in University-Level Human Geography Courses

    ERIC Educational Resources Information Center

    Dittmer, Jason

    2010-01-01

    This paper addresses the potential for increased deployment of immersive virtual worlds in higher geographic education. An account of current practice regarding popular culture in the geography classroom is offered, focusing on the objectification of popular culture rather than its constitutive role vis-a-vis place. Current e-learning practice is…

  20. Situating Pedagogies, Positions and Practices in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Savin-Baden, Maggi; Gourlay, Lesley; Tombs, Cathy; Steils, Nicole; Tombs, Gemma; Mawer, Matt

    2010-01-01

    Background: The literature on immersive virtual worlds and e-learning to date largely indicates that technology has led the pedagogy. Although rationales for implementing e-learning have included flexibility of provision and supporting diversity, none of these recommendations has helped to provide strong pedagogical location. Furthermore, there is…

  1. Intelligent Tutors in Immersive Virtual Environments

    ERIC Educational Resources Information Center

    Yan, Peng; Slator, Brian M.; Vender, Bradley; Jin, Wei; Kariluoma, Matti; Borchert, Otto; Hokanson, Guy; Aggarwal, Vaibhav; Cosmano, Bob; Cox, Kathleen T.; Pilch, André; Marry, Andrew

    2013-01-01

    Research into virtual role-based learning has progressed over the past decade. Modern issues include gauging the difficulty of designing a goal system capable of meeting the requirements of students with different knowledge levels, and the reasonability and possibility of taking advantage of the well-designed formula and techniques served in other…

  2. The Virtual Radiopharmacy Laboratory: A 3-D Simulation for Distance Learning

    ERIC Educational Resources Information Center

    Alexiou, Antonios; Bouras, Christos; Giannaka, Eri; Kapoulas, Vaggelis; Nani, Maria; Tsiatsos, Thrasivoulos

    2004-01-01

    This article presents Virtual Radiopharmacy Laboratory (VR LAB), a virtual laboratory accessible through the Internet. VR LAB is designed and implemented in the framework of the VirRAD European project. This laboratory represents a 3D simulation of a radio-pharmacy laboratory, where learners, represented by 3D avatars, can experiment on…

  3. 3D Inhabited Virtual Worlds: Interactivity and Interaction between Avatars, Autonomous Agents, and Users.

    ERIC Educational Resources Information Center

    Jensen, Jens F.

    This paper addresses some of the central questions currently related to 3-Dimensional Inhabited Virtual Worlds (3D-IVWs), their virtual interactions, and communication, drawing from the theory and methodology of sociology, interaction analysis, interpersonal communication, semiotics, cultural studies, and media studies. First, 3D-IVWs--seen as a…

  4. Issues and Challenges of Teaching and Learning in 3D Virtual Worlds: Real Life Case Studies

    ERIC Educational Resources Information Center

    Pfeil, Ulrike; Ang, Chee Siang; Zaphiris, Panayiotis

    2009-01-01

    We aimed to study the characteristics and usage patterns of 3D virtual worlds in the context of teaching and learning. To achieve this, we organised a full-day workshop to explore, discuss and investigate the educational use of 3D virtual worlds. Thirty participants took part in the workshop. All conversations were recorded and transcribed for…

  5. Interaction Design and Usability of Learning Spaces in 3D Multi-user Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Minocha, Shailey; Reeves, Ahmad John

    Three-dimensional virtual worlds are multimedia, simulated environments, often managed over the Web, which users can 'inhabit' and in which they interact via their own graphical self-representations known as 'avatars'. 3D virtual worlds are being used in many applications: education/training, gaming, social networking, marketing and commerce. Second Life is the most widely used 3D virtual world in education. However, problems associated with usability, navigation and wayfinding in 3D virtual worlds may impact on student learning and engagement. Based on empirical investigations of learning spaces in Second Life, this paper presents design guidelines to improve the usability and ease of navigation in 3D spaces. Methods of data collection include semi-structured interviews with Second Life students, educators and designers. The findings have revealed that design principles from the fields of urban planning, Human-Computer Interaction, Web usability, geography and psychology can influence the design of spaces in 3D multi-user virtual environments.

  6. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    NASA Astrophysics Data System (ADS)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century, at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its systematic destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a unique feature in the regional castral landscape. It is visible from the valley, was named "the Eye of the Witch", and became a key attraction of the region. The site, which extends over approximately one hectare, has been for several years the object of numerous archaeological studies and is today at the heart of a project to valorize the vestiges. It was indeed a key objective, among the numerous planned works, to realize a 3D model of the site in its current state, in other words a virtual model "as captured", exploitable from a cultural and tourist point of view as well as by scientists for archaeological research. The team of the ICube/INSA lab was responsible for producing this model, from data acquisition to delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archive data stemming from a series of former excavations. The objectives of this project were the following: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration in the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site. The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail

  7. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to the study of point symmetry. The use of 3D printing to…

  8. Implementation of 3d Tools and Immersive Experience Interaction for Supporting Learning in a Library-Archive Environment. Visions and Challenges

    NASA Astrophysics Data System (ADS)

    Angeletaki, A.; Carrozzino, M.; Johansen, S.

    2013-07-01

    In this paper we present an experimental environment of 3D books combined with a game application that has been developed by a collaboration project between the Norwegian University of Science and Technology (NTNU) University Library in Trondheim, Norway, and the Percro laboratory of Santa Anna University in Pisa, Italy. MUBIL is an international research project involving museums, libraries and ICT academy partners, aiming to develop a consistent methodology enabling the use of Virtual Environments as a metaphor to present manuscript content through the paradigms of interaction and immersion, evaluating different possible alternatives. This paper presents the results of the application of two prototypes of books augmented with the use of XVR and IL technology. We explore immersive-reality design strategies in archive and library contexts for attracting new users. Our newly established Mubil-lab has invited school classes to test the books augmented with 3D models and other multimedia content in order to investigate whether immersion in such environments can create wider engagement and support learning. The metaphor of 3D books and game design in combination allows the digital books to be handled through a tactile experience and substitutes for physical browsing. In this paper we present some preliminary results about the enrichment of the user experience in such an environment.

  9. Immersive virtual reality for visualization of abdominal CT

    NASA Astrophysics Data System (ADS)

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A.; Bodenheimer, Robert E.

    2013-03-01

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical applications. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  10. Evaluation of Home Delivery of Lectures Utilizing 3D Virtual Space Infrastructure

    ERIC Educational Resources Information Center

    Nishide, Ryo; Shima, Ryoichi; Araie, Hiromu; Ueshima, Shinichi

    2007-01-01

    Evaluation experiments have been essential in exploring home delivery of lectures for which users can experience campus lifestyle and distant learning through 3D virtual space. This paper discusses the necessity of virtual space for distant learners by examining the effects of virtual space. The authors have pursued the possibility of…

  11. Art-Science-Technology collaboration through immersive, interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2014-12-01

    At the W. M. Keck Center for Active Visualization in Earth Sciences (KeckCAVES), a group of geoscientists and computer scientists collaborate to develop and use interactive, immersive 3D visualization technology to view, manipulate, and interpret data for scientific research. The visual impact of immersion in a CAVE environment can be extremely compelling, and from the outset KeckCAVES scientists have collaborated with artists to bring this technology to creative works, including theater and dance performance, installations, and gamification. The first full-fledged collaboration designed and produced a performance called "Collapse: Suddenly falling down", choreographed by Della Davidson, which investigated the human and cultural response to natural and man-made disasters. Scientific data (lidar scans of disaster sites, such as landslides and mine collapses) were fully integrated into the performance by the Sideshow Physical Theatre. This presentation will discuss both the technological and creative characteristics of, and lessons learned from, the collaboration. Many parallels between the artistic and scientific process emerged. We observed that both artists and scientists set out to investigate a topic, solve a problem, or answer a question. Refining that question or problem is an essential part of both the creative and scientific workflow. Both artists and scientists seek understanding (in this case understanding of natural disasters). Differences also emerged; the group noted that the scientists sought clarity (including but not limited to quantitative measurements) as a means to understanding, while the artists embraced ambiguity, also as a means to understanding. Subsequent art-science-technology collaborations have responded to evolving technology for visualization and include gamification as a means to explore data, and use of augmented reality for informal learning in museum settings.

  12. 3-D Virtual and Physical Reconstruction of Bendego Iron

    NASA Astrophysics Data System (ADS)

    Belmonte, S. L. R.; Zucolotto, M. E.; Fontes, R. C.; dos Santos, J. R. L.

    2012-09-01

    The use of 3D laser scanning on meteorites preserves the original shape of the meteorites before cutting, and saving the data in STL (stereolithography) format makes it easy to print three-dimensional physical models and to generate a digital replica.

  13. Virtually numbed: immersive video gaming alters real-life experience.

    PubMed

    Weger, Ulrich W; Loughnan, Stephen

    2014-04-01

    As actors in a highly mechanized environment, we are citizens of a world populated not only by fellow humans, but also by virtual characters (avatars). Does immersive video gaming, during which the player takes on the mantle of an avatar, prompt people to adopt the coldness and rigidity associated with robotic behavior and desensitize them to real-life experience? In one study, we correlated participants' reported video-gaming behavior with their emotional rigidity (as indicated by the number of paperclips that they removed from ice-cold water). In a second experiment, we manipulated immersive and nonimmersive gaming behavior and then likewise measured the extent of the participants' emotional rigidity. Both studies yielded reliable impacts, and thus suggest that immersion into a robotic viewpoint desensitizes people to real-life experiences in oneself and others. PMID:24163171

  14. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures-testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
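
    A minimal sketch of the object model described above follows (illustrative only, not the authors' implementation): every primitive object encapsulates properties, methods and event handlers, and a container groups several objects.

        # Illustrative object model only: properties, methods and events per object,
        # plus a container for grouping, mirroring the structure described above.
        class VLObject:
            def __init__(self, name, **properties):
                self.name = name
                self.properties = dict(properties)
                self._handlers = {}                    # event name -> list of callbacks

            def on(self, event, handler):
                self._handlers.setdefault(event, []).append(handler)

            def fire(self, event, **payload):
                for handler in self._handlers.get(event, []):
                    handler(self, **payload)

        class Container(VLObject):
            def __init__(self, name):
                super().__init__(name)
                self.children = []

            def add(self, obj):
                self.children.append(obj)

        # e.g. a virtual wind tunnel grouping a test article that reacts to an event
        tunnel = Container("virtual_wind_tunnel")
        article = VLObject("test_article", material="aluminium")
        tunnel.add(article)
        article.on("flow_started", lambda obj, speed: print(obj.name, "sees flow at", speed))
        article.fire("flow_started", speed=30.0)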

  15. From Cognitive Capability to Social Reform? Shifting Perceptions of Learning in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Savin-Baden, Maggi

    2008-01-01

    Learning in immersive virtual worlds (simulations and virtual worlds such as Second Life) could become a central learning approach in many curricula, but the socio-political impact of virtual world learning on higher education remains under-researched. Much of the recent research into learning in immersive virtual worlds centres around games and…

  16. Envisioning the future of home care: applications of immersive virtual reality.

    PubMed

    Brennan, Patricia Flatley; Arnott Smith, Catherine; Ponto, Kevin; Radwin, Robert; Kreutz, Kendra

    2013-01-01

    Accelerating the design of technologies to support health in the home requires (1) better understanding of how the household context shapes consumer health behaviors and (2) the opportunity to afford engineers, designers, and health professionals the chance to systematically study the home environment. We developed the Living Environments Laboratory (LEL) with a fully immersive, six-sided virtual reality CAVE to enable recreation of a broad range of household environments. We have successfully developed a virtual apartment, including a kitchen, living space, and bathroom. Over 2000 people have visited the LEL CAVE. Participants use an electronic wand to activate common household affordances such as opening a refrigerator door or lifting a cup. Challenges currently being explored include creating natural gestures to interface with virtual objects, developing robust, simple procedures to capture actual living environments and render them in a 3D visualization, and devising systematic, stable terminologies to characterize home environments. PMID:23920626

  17. Cognitive factors associated with immersion in virtual environments

    NASA Technical Reports Server (NTRS)

    Psotka, Joseph; Davison, Sharon

    1993-01-01

    Immersion into the dataspace provided by a computer, and the feeling of really being there or 'presence', are commonly acknowledged as the uniquely important features of virtual reality environments. How immersed one feels appears to be determined by a complex set of physical components and affordances of the environment, and as yet poorly understood psychological processes. Pimentel and Teixeira say that the experience of being immersed in a computer-generated world involves the same mental shift of 'suspending your disbelief for a period of time' as 'when you get wrapped up in a good novel or become absorbed in playing a computer game'. That sounds as if it could be right, but it would be good to get some evidence for these important conclusions. It might be even better to try to connect these statements with theoretical positions that try to do justice to complex cognitive processes. The basic precondition for understanding Virtual Reality (VR) is understanding the spatial representation systems that localize our bodies or egocenters in space. The effort to understand these cognitive processes is being driven with new energy by the pragmatic demands of successful virtual reality environments, but the literature is largely sparse and anecdotal.

  18. Special Section: New Ways to Detect Colon Cancer 3-D virtual screening now being used

    MedlinePlus

    ... body) from the National Library of Medicine's Visible Human project (www.nlm.nih.gov). By 1996, Kaufman and his colleagues had patented a pioneering computer software system and techniques for 3-D virtual ...

  19. Spilling the beans on java 3D: a tool for the virtual anatomist.

    PubMed

    Guttmann, G D

    1999-04-15

    The computing world has just provided the anatomist with another tool: Java 3D, within the Java 2 platform. On December 9, 1998, Sun Microsystems released Java 2. Java 3D classes are now included in the jar (Java Archive) archives of the extensions directory of Java 2. Java 3D is also a part of the Java Media Suite of APIs (Application Programming Interfaces). But what is Java? How does Java 3D work? How do you view Java 3D objects? A brief introduction to the concepts of Java and object-oriented programming is provided. Also, there is a short description of the tools of Java 3D and of the Java 3D viewer. Thus, the virtual anatomist has another set of computer tools to use for modeling various aspects of anatomy, such as embryological development. Also, the virtual anatomist will be able to assist the surgeon with virtual surgery using the tools found in Java 3D. Java 3D will be able to fulfill gaps, such as the lack of platform independence, interactivity, and manipulability of 3D images, currently existing in many anatomical computer-aided learning programs. PMID:10321435

  20. Development of a 3D immersive videogame to improve arm-postural coordination in patients with TBI

    PubMed Central

    2011-01-01

    Background Traumatic brain injury (TBI) disrupts the central and executive mechanisms of arm(s) and postural (trunk and legs) coordination. To address these issues, we developed a 3D immersive videogame-- Octopus. The game was developed using the basic principles of videogame design and previous experience of using videogames for rehabilitation of patients with acquired brain injuries. Unlike many other custom-designed virtual environments, Octopus included an actual gaming component with a system of multiple rewards, making the game challenging, competitive, motivating and fun. Effect of a short-term practice with the Octopus game on arm-postural coordination in patients with TBI was tested. Methods The game was developed using WorldViz Vizard software, integrated with the Qualysis system for motion analysis. Avatars of the participant's hands precisely reproducing the real-time kinematic patterns were synchronized with the simulated environment, presented in the first person 3D view on an 82-inch DLP screen. 13 individuals with mild-to-moderate manifestations of TBI participated in the study. While standing in front of the screen, the participants interacted with a computer-generated environment by popping bubbles blown by the Octopus. The bubbles followed a specific trajectory. Interception of the bubbles with the left or right hand avatar allowed flexible use of the postural segments for balance maintenance and arm transport. All participants practiced ten 90-s gaming trials during a single session, followed by a retention test. Arm-postural coordination was analysed using principal component analysis. Results As a result of the short-term practice, the participants improved in game performance, arm movement time, and precision. Improvements were achieved mostly by adapting efficient arm-postural coordination strategies. Of the 13 participants, 10 showed an immediate increase in arm forward reach and single-leg stance time. Conclusion These results support the
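
    Principal component analysis of arm-postural coordination, as named above, amounts to extracting the dominant covariation modes from the recorded kinematic time series. The sketch below illustrates that step on a generic samples-by-signals matrix; it is not the study's actual processing pipeline.

        # Minimal PCA sketch over kinematic time series (samples x signals); this
        # illustrates the analysis named in the abstract, not its exact pipeline.
        import numpy as np

        def coordination_modes(kinematics):
            """kinematics: array (n_samples, n_signals), e.g. arm, trunk and leg angles."""
            X = kinematics - kinematics.mean(axis=0)       # centre each signal
            cov = np.cov(X, rowvar=False)
            eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
            order = np.argsort(eigvals)[::-1]
            eigvals, eigvecs = eigvals[order], eigvecs[:, order]
            explained = eigvals / eigvals.sum()
            return eigvecs, explained                      # modes (columns) and variance share

        # A growing share of variance in the first mode across trials would indicate
        # a more tightly coupled arm-postural strategy.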

  1. Introducing an Avatar Acceptance Model: Student Intention to Use 3D Immersive Learning Tools in an Online Learning Classroom

    ERIC Educational Resources Information Center

    Kemp, Jeremy William

    2011-01-01

    This quantitative survey study examines the willingness of online students to adopt an immersive virtual environment as a classroom tool and compares this with their feelings about more traditional learning modes including our ANGEL learning management system and the Elluminate live Web conferencing tool. I surveyed 1,108 graduate students in…

  2. A Parameterizable Framework for Replicated Experiments in Virtual 3D Environments

    NASA Astrophysics Data System (ADS)

    Biella, Daniel; Luther, Wolfram

    This paper reports on a parameterizable 3D framework that provides 3D content developers with an initial spatial starting configuration, metaphorical connectors for accessing exhibits or interactive 3D learning objects or experiments, and other optional 3D extensions, such as a multimedia room, a gallery, username identification tools and an avatar selection room. The framework is implemented in X3D and uses a Web-based content management system. It has been successfully used for an interactive virtual museum for key historical experiments and in two additional interactive e-learning implementations: an African arts museum and a virtual science centre. It can be shown that, by reusing the framework, the production costs for the latter two implementations can be significantly reduced and content designers can focus on developing educational content instead of producing cost-intensive out-of-focus 3D objects.

  3. Combination of Virtual Tours, 3d Model and Digital Data in a 3d Archaeological Knowledge and Information System

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Brigand, N.

    2012-08-01

    The site of the Engelbourg ruined castle in Thann, Alsace, France, has for some years been the focus of attention of the city, which owns it, and also of partners such as historians and archaeologists who are in charge of its study. The valorization of the site is one of the main objectives, along with its conservation and documentation. The aim of this project is to use the environment of the virtual-tour viewer as a new base for an Archaeological Knowledge and Information System (AKIS). With available development tools we add functionalities, in particular through diverse scripts that convert the viewer into a real 3D interface. Starting from a first virtual tour that contains about fifteen panoramic images, the site of about 150 by 150 meters can be completely documented, offering the user real interactivity and making visualization very concrete, almost lively. After the choice of pertinent points of view, panoramic images were produced. For the documentation, other sets of images were acquired in various seasons and climate conditions, which allows documenting the site in different environments and states of vegetation. The final virtual tour was derived from them. The initial 3D model of the castle, which is virtual too, was also included in the form of panoramic images to complete the understanding of the site. A variety of types of hotspots were used to connect the whole digital documentation to the site, including videos (such as reports during the acquisition phases, during the restoration works, during the excavations, etc.) and digital georeferenced documents (archaeological reports on the various constituent elements of the castle, interpretation of the excavations and the searches, description of the sets of collected objects, etc.). The completely personalized interface of the system allows the user either to switch from one panoramic image to another, which is the classic case of virtual tours, or to go from a panoramic photographic image

  4. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers, both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  5. Design of Learning Spaces in 3D Virtual Worlds: An Empirical Investigation of "Second Life"

    ERIC Educational Resources Information Center

    Minocha, Shailey; Reeves, Ahmad John

    2010-01-01

    "Second Life" (SL) is a three-dimensional (3D) virtual world, and educational institutions are adopting SL to support their teaching and learning. Although the question of how 3D learning spaces should be designed to support student learning and engagement has been raised among SL educators and designers, there is hardly any guidance or research…

  6. Employing Virtual Humans for Education and Training in X3D/VRML Worlds

    ERIC Educational Resources Information Center

    Ieronutti, Lucio; Chittaro, Luca

    2007-01-01

    Web-based education and training provides a new paradigm for imparting knowledge; students can access the learning material anytime by operating remotely from any location. Web3D open standards, such as X3D and VRML, support Web-based delivery of Educational Virtual Environments (EVEs). EVEs have a great potential for learning and training…

  7. Virtual 3D interactive system with embedded multiwavelength optical sensor array and sequential devices

    NASA Astrophysics Data System (ADS)

    Wang, Guo-Zhen; Huang, Yi-Pai; Hu, Kuo-Jui

    2012-06-01

    We proposed a virtual 3D-touch system operated by bare finger, which can detect 3-axis (x, y, z) information about the finger. This system has a multi-wavelength optical sensor array embedded on the backplane of the TFT panel and sequential devices on the border of the TFT panel. We developed a reflecting mode which can be operated by bare finger for 3D interaction. A 4-inch mobile 3D-LCD with the proposed system has already been successfully demonstrated.
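
    As an illustration of how 3-axis finger information can be recovered from such an embedded array (this is not the paper's algorithm), one simple scheme estimates (x, y) as the intensity-weighted centroid of the reflected-light readings and treats the total reflected intensity as a monotone proxy for hover height z.

        # Illustrative only, not the paper's algorithm: centroid-based (x, y) and an
        # intensity-based proxy for z from a grid of reflected-light sensor readings.
        import numpy as np

        def estimate_finger_xyz(readings, pitch_mm=5.0):
            """readings: 2-D array of reflected intensities, one value per sensor."""
            rows, cols = readings.shape
            total = readings.sum()
            if total <= 0:
                return None                                # no finger detected
            jj, ii = np.meshgrid(np.arange(cols), np.arange(rows))
            x = (readings * jj).sum() / total * pitch_mm   # mm across the panel
            y = (readings * ii).sum() / total * pitch_mm
            z_proxy = total / readings.size                # larger when the finger is closer
            return x, y, z_proxy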

  8. A method of 3-D data information storage with virtual holography

    NASA Astrophysics Data System (ADS)

    Huang, Zhen; Liu, Guodong; Ren, Zhong; Zeng, Lüming

    2008-12-01

    In this paper, a new method of 3-D data-cube storage based on virtual holography is presented. Firstly, the data information is encoded in the form of a 3-D data cube with a certain algorithm, in which the interval along the coordinates between data points is d. Using the plane-scanning method, the 3-D cube can be described as an assembly of slices, which are parallel planes along the coordinates at an interval of d. Each dot on a slice represents a bit: a bright dot means "1", while a dark dot means "0". Secondly, a hologram of the 3-D cube is obtained by computer with virtual optics technology. All the information of a 3-D cube can be described by a 2-D hologram. Finally, the hologram is input to the SLM and recorded in the recording material by intersecting two coherent laser beams. When the 3-D data is exported, a reference light illuminates the hologram, and a CCD is used to get the object image, which is a hologram of the 3-D data. Then the 3-D data is computed with virtual optics technology. Compared with 2-D data page storage, the 3-D data cube storage offers a larger data storage capacity and higher data security.
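
    The encoding step described above (bits arranged as bright/dark dots in a cube that is read as a stack of parallel slices) can be sketched as follows; the virtual-optics hologram computation itself is not shown, and the byte-packing choice is only illustrative.

        # Illustrative encoding step only (hologram generation not shown): pack a
        # byte string into a cube of bits and read it back as parallel slices.
        import numpy as np

        def bytes_to_cube(data, edge):
            bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
            cube = np.zeros(edge ** 3, dtype=np.uint8)
            cube[:bits.size] = bits[:edge ** 3]            # zero-pad or truncate
            return cube.reshape(edge, edge, edge)          # 1 = bright dot, 0 = dark dot

        def cube_slices(cube, axis=0):
            """Return the parallel planes at interval d along one coordinate."""
            return [np.take(cube, k, axis=axis) for k in range(cube.shape[axis])]

        cube = bytes_to_cube(b"3D data cube storage", edge=8)   # 8 x 8 x 8 = 512 bits
        slices = cube_slices(cube)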

  9. A Collaborative Virtual Environment for Situated Language Learning Using VEC3D

    ERIC Educational Resources Information Center

    Shih, Ya-Chun; Yang, Mau-Tsuen

    2008-01-01

    A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…

  10. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when should we show data in 3D is an on-going debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary/unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows `seeing' invisible phenomena, or designing and printing things that are used in e.g., surgeries, educational settings etc. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environments. Such environments may be engaging and memorable for the viewers because they offer highly immersive experiences. However, it is not yet well-established if we should opt to show the data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  11. The Virtual-casing Principle For 3D Toroidal Systems

    SciTech Connect

    Lazerson, Samuel A.

    2014-02-24

    The capability to calculate the magnetic field due to the plasma currents in a toroidally confined magnetic fusion equilibrium is of manifest relevance to equilibrium reconstruction and stellarator divertor design. Two methodologies arise for calculating such quantities. The first being a volume integral over the plasma current density for a given equilibrium. Such an integral is computationally expensive. The second is a surface integral over a surface current on the equilibrium boundary. This method is computationally desirable as the calculation does not grow as the radial resolution of the volume integral. This surface integral method has come to be known as the "virtual-casing principle". In this paper, a full derivation of this method is presented along with a discussion regarding its optimal application.
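
    For context, the sketch below shows the brute-force Biot-Savart volume integral that the virtual-casing principle is designed to replace: the field at an observation point is accumulated by summing contributions from every discretized current element. This is a generic Python/NumPy illustration, not the implementation from the paper, and all function and variable names are assumptions.

        # Hedged sketch of the Biot-Savart volume sum:
        # B(x) += mu0 / 4*pi * J(x') x (x - x') / |x - x'|^3 * dV'
        import numpy as np

        MU0 = 4e-7 * np.pi

        def b_field_volume(x_obs, points, currents, volumes):
            """Brute-force Biot-Savart sum over a discretized current density.

            points   -- (N, 3) positions of the volume elements
            currents -- (N, 3) current density J at each element
            volumes  -- (N,)   volume of each element
            """
            r = x_obs - points                                   # source -> observation
            r_norm3 = np.linalg.norm(r, axis=1, keepdims=True) ** 3
            dB = MU0 / (4.0 * np.pi) * np.cross(currents, r) * (volumes[:, None] / r_norm3)
            return dB.sum(axis=0)

    For a toroidal equilibrium this sum runs over every cell of the 3-D grid, which is exactly the cost that the surface-integral (virtual-casing) formulation avoids.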

  12. 3D structure of nucleon with virtuality distributions

    NASA Astrophysics Data System (ADS)

    Radyushkin, Anatoly

    2014-09-01

    We describe a new approach to transverse momentum dependence in hard processes. Our starting point is coordinate representation for matrix elements of operators (in the simplest case, bilocal O (0 , z)) describing a hadron with momentum p. Treated as functions of (pz) and z2, they are parametrized through parton virtuality distribution (PVD) Φ (x , σ) , with x being Fourier-conjugate to (pz) and σ Laplace-conjugate to z2. For intervals with z+ = 0 , we introduce the transverse momentum distribution (TMD) f (x ,k⊥) , and write it in terms of PVD Φ (x , σ) . The results of covariant calculations, written in terms of Φ (x , σ) are converted into expressions involving f (x ,k⊥) . We propose models for soft PVDs/TMDs,and describe how one can generate high-k⊥ tails of TMDs from primordial soft distributions. We describe a new approach to transverse momentum dependence in hard processes. Our starting point is coordinate representation for matrix elements of operators (in the simplest case, bilocal O (0 , z)) describing a hadron with momentum p. Treated as functions of (pz) and z2, they are parametrized through parton virtuality distribution (PVD) Φ (x , σ) , with x being Fourier-conjugate to (pz) and σ Laplace-conjugate to z2. For intervals with z+ = 0 , we introduce the transverse momentum distribution (TMD) f (x ,k⊥) , and write it in terms of PVD Φ (x , σ) . The results of covariant calculations, written in terms of Φ (x , σ) are converted into expressions involving f (x ,k⊥) . We propose models for soft PVDs/TMDs,and describe how one can generate high-k⊥ tails of TMDs from primordial soft distributions. Supported by Jefferson Science Associates, LLC under U.S. DOE Contract #DE-AC05-06OR23177 and by U.S. DOE Grant #DE-FG02-97ER41028.

  13. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    NASA Astrophysics Data System (ADS)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface. These include a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and the stereoscopic view of the seismic data. By these methods, the spatial perception of the structures, and thus of the processes in the subsurface, should be increased. Stereoscopic techniques are implemented, e.g., in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • Fast rendering of large amounts of data so that a continuous view of the data when changing the viewing angle and the data section is possible, • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation, • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom, • the possibility of collaboration, i.e. teamwork and idea exchange with the simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow; rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  14. How incorporation of scents could enhance immersive virtual experiences.

    PubMed

    Ischer, Matthieu; Baron, Naëm; Mermoud, Christophe; Cayeux, Isabelle; Porcherot, Christelle; Sander, David; Delplanque, Sylvain

    2014-01-01

    Under normal everyday conditions, senses all work together to create experiences that fill a typical person's life. Unfortunately for behavioral and cognitive researchers who investigate such experiences, standard laboratory tests are usually conducted in a nondescript room in front of a computer screen. They are very far from replicating the complexity of real world experiences. Recently, immersive virtual reality (IVR) environments became promising methods to immerse people into an almost real environment that involves more senses. IVR environments provide many similarities to the complexity of the real world and at the same time allow experimenters to constrain experimental parameters to obtain empirical data. This can eventually lead to better treatment options and/or new mechanistic hypotheses. The idea that increasing sensory modalities improve the realism of IVR environments has been empirically supported, but the senses used did not usually include olfaction. In this technology report, we will present an odor delivery system applied to a state-of-the-art IVR technology. The platform provides a three-dimensional, immersive, and fully interactive visualization environment called "Brain and Behavioral Laboratory-Immersive System" (BBL-IS). The solution we propose can reliably deliver various complex scents during different virtual scenarios, at a precise time and space and without contamination of the environment. The main features of this platform are: (i) the limited cross-contamination between odorant streams with a fast odor delivery (< 500 ms), (ii) the ease of use and control, and (iii) the possibility to synchronize the delivery of the odorant with pictures, videos or sounds. How this unique technology could be used to investigate typical research questions in olfaction (e.g., emotional elicitation, memory encoding or attentional capture by scents) will also be addressed. PMID:25101017

  15. How incorporation of scents could enhance immersive virtual experiences

    PubMed Central

    Ischer, Matthieu; Baron, Naëm; Mermoud, Christophe; Cayeux, Isabelle; Porcherot, Christelle; Sander, David; Delplanque, Sylvain

    2014-01-01

    Under normal everyday conditions, senses all work together to create experiences that fill a typical person's life. Unfortunately for behavioral and cognitive researchers who investigate such experiences, standard laboratory tests are usually conducted in a nondescript room in front of a computer screen. They are very far from replicating the complexity of real world experiences. Recently, immersive virtual reality (IVR) environments became promising methods to immerse people into an almost real environment that involves more senses. IVR environments provide many similarities to the complexity of the real world and at the same time allow experimenters to constrain experimental parameters to obtain empirical data. This can eventually lead to better treatment options and/or new mechanistic hypotheses. The idea that increasing sensory modalities improve the realism of IVR environments has been empirically supported, but the senses used did not usually include olfaction. In this technology report, we will present an odor delivery system applied to a state-of-the-art IVR technology. The platform provides a three-dimensional, immersive, and fully interactive visualization environment called “Brain and Behavioral Laboratory—Immersive System” (BBL-IS). The solution we propose can reliably deliver various complex scents during different virtual scenarios, at a precise time and space and without contamination of the environment. The main features of this platform are: (i) the limited cross-contamination between odorant streams with a fast odor delivery (< 500 ms), (ii) the ease of use and control, and (iii) the possibility to synchronize the delivery of the odorant with pictures, videos or sounds. How this unique technology could be used to investigate typical research questions in olfaction (e.g., emotional elicitation, memory encoding or attentional capture by scents) will also be addressed. PMID:25101017

  16. Understanding Human Perception of Building Categories in Virtual 3d Cities - a User Study

    NASA Astrophysics Data System (ADS)

    Tutzauer, P.; Becker, S.; Niese, T.; Deussen, O.; Fritsch, D.

    2016-06-01

    Virtual 3D cities are becoming increasingly important as a means of visually communicating diverse urban-related information. To get a deeper understanding of a human's cognitive experience of virtual 3D cities, this paper presents a user study on the human ability to perceive building categories (e.g. residential home, office building, building with shops etc.) from geometric 3D building representations. The study reveals various dependencies between geometric properties of the 3D representations and the perceptibility of the building categories. Knowledge about which geometries are relevant, helpful or obstructive for perceiving a specific building category is derived. The importance and usability of such knowledge is demonstrated based on a perception-guided 3D building abstraction process.

  17. Simulation and visualization of mechanical systems in immersive virtual environments

    SciTech Connect

    Canfield, T. R.

    1998-04-17

    A prototype for doing real-time simulation of mechanical systems in immersive virtual environments has been developed to run in the CAVE and on the ImmersaDesk at Argonne National Laboratory. This system has three principal software components: a visualization component for rendering the model and providing a user interface, communications software, and mechanics simulation software. The system can display the three-dimensional objects in the CAVE and project various scalar fields onto the exterior surface of the objects during real-time execution.

  18. Three-Dimensional User Interfaces for Immersive Virtual Reality

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1997-01-01

    The focus of this grant was to experiment with novel user interfaces for immersive Virtual Reality (VR) systems, and thus to advance the state of the art of user interface technology for this domain. Our primary test application was a scientific visualization application for viewing Computational Fluid Dynamics (CFD) datasets. This technology has been transferred to NASA via periodic status reports and papers relating to this grant that have been published in conference proceedings. This final report summarizes the research completed over the past year, and extends last year's final report of the first three years of the grant.

  19. Assessment of radiation awareness training in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    Whisker, Vaughn E., III

    The prospect of new nuclear power plant orders in the near future and the graying of the current workforce create a need to train new personnel faster and better. Immersive virtual reality (VR) may offer a solution to the training challenge. VR technology presented in a CAVE Automatic Virtual Environment (CAVE) provides a high-fidelity, one-to-one scale environment where areas of the power plant can be recreated and virtual radiation environments can be simulated, making it possible to safely expose workers to virtual radiation in the context of the actual work environment. The use of virtual reality for training is supported by many educational theories; constructivism and discovery learning, in particular. Educational theory describes the importance of matching the training to the task. Plant access training and radiation worker training, common forms of training in the nuclear industry, rely on computer-based training methods in most cases, which effectively transfer declarative knowledge, but are poor at transferring skills. If an activity were to be added, the training would provide personnel with the opportunity to develop skills and apply their knowledge so they could be more effective when working in the radiation environment. An experiment was developed to test immersive virtual reality's suitability for training radiation awareness. Using a mixed methodology of quantitative and qualitative measures, the subjects' performances before and after training were assessed. First, subjects completed a pre-test to measure their knowledge prior to completing any training. Next they completed unsupervised computer-based training, which consisted of a PowerPoint presentation and a PDF document. After completing a brief orientation activity in the virtual environment, one group of participants received supplemental radiation awareness training in a simulated radiation environment presented in the CAVE, while a second group, the control group, moved directly to the

  20. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position-detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is to build a display system in which, when users see the real world through the mobile viewer, the system presents virtual 3D images floating in the air, and observers can touch and interact with these floating images, for example shaping them like modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest approach in that it uses a single camera rather than a stereo camera, and report the results of the viewer system.
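
    One standard way to recover a single camera's pose from known 3-D marker positions and their detected 2-D image locations is a perspective-n-point solve. The hedged OpenCV sketch below illustrates this idea; it is not necessarily the geometric analysis used by the authors, and the marker coordinates and camera intrinsics are purely illustrative.

        # Hedged sketch: recover viewer/camera pose from known IR LED positions
        # and their detected image centroids using OpenCV's solvePnP.
        import numpy as np
        import cv2

        object_points = np.array([[0.0, 0.0, 0.0],    # known LED positions in the
                                  [0.2, 0.0, 0.0],    # workspace frame (metres)
                                  [0.2, 0.2, 0.0],
                                  [0.0, 0.2, 0.0]], dtype=np.float64)
        image_points = np.array([[320.0, 240.0],      # detected LED centroids (pixels)
                                 [420.0, 238.0],
                                 [425.0, 330.0],
                                 [318.0, 334.0]], dtype=np.float64)
        camera_matrix = np.array([[800.0, 0.0, 320.0],
                                  [0.0, 800.0, 240.0],
                                  [0.0, 0.0, 1.0]])
        dist_coeffs = np.zeros(5)                      # assume an undistorted camera

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                      camera_matrix, dist_coeffs)
        if ok:
            R, _ = cv2.Rodrigues(rvec)                 # rotation of the viewer camera
            print("viewer position in workspace frame:", (-R.T @ tvec).ravel())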

  1. Can immersive virtual reality reduce phantom limb pain?

    PubMed

    Murray, Craig D; Patchick, Emma L; Caillette, Fabrice; Howard, Toby; Pettifer, Stephen

    2006-01-01

    This paper describes the design and implementation of a case-study-based investigation using immersive virtual reality as a treatment for phantom limb pain. The authors' work builds upon prior research which has found that use of a mirror box (where the amputee sees a mirror image of their remaining anatomical limb in the phenomenal space of their amputated limb) can reduce phantom limb pain and restore voluntary movement to paralyzed phantom limbs for some amputees. The present project involves the transposition of movements made by the amputees' anatomical limb into movements of a virtual limb which is presented in the phenomenal space of their phantom limb. The three case studies presented here provide qualitative data which lend tentative support to the use of this system for phantom pain relief. The authors suggest the need for further research using controlled trials. PMID:16404088

  2. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that generates, edits and integrates 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array rather than the physical camera is modelled, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion focuses on depth extraction from captured integral 3D images. The method of calculating depth from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
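
    The following hedged Python sketch illustrates the basic ingredients mentioned above: sum-of-squared-differences (SSD) matching along a scanline to estimate disparity, followed by the usual depth-from-disparity relation Z = f·B/d. Window size, search range and function names are illustrative; the paper's colour SSD and multiple-baseline refinements are not reproduced.

        # Hedged sketch of SSD disparity matching and depth-from-disparity.
        import numpy as np

        def ssd_disparity(left_row, right_row, window=5, max_disp=16):
            """Per-pixel disparity for one rectified scanline via SSD block matching."""
            half = window // 2
            disp = np.zeros(left_row.size, dtype=np.int32)
            for x in range(half, left_row.size - half):
                patch = left_row[x - half:x + half + 1].astype(np.float64)
                best, best_d = np.inf, 0
                for d in range(0, min(max_disp, x - half) + 1):
                    cand = right_row[x - d - half:x - d + half + 1]
                    ssd = np.sum((patch - cand) ** 2)
                    if ssd < best:
                        best, best_d = ssd, d
                disp[x] = best_d
            return disp

        def depth_from_disparity(disp, focal_px, baseline_m):
            """Z = f * B / d, with zero-disparity pixels mapped to infinity."""
            with np.errstate(divide="ignore"):
                return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)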

  3. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling and close-range-photogrammetry-based modeling. The literature shows that, to date, there is no complete solution for creating a complete 3D city model from images, and these image-based methods also have limitations. This paper presents a new approach to image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area, image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area; scaling and alignment of the 3D model were carried out, and after texturing and rendering a final photo-realistic textured 3D model was created. This 3D model was then converted into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many country
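
    As a small illustration of the frame-selection step in the data acquisition stage, the hypothetical OpenCV sketch below samples every n-th frame from a recorded video so that a reduced, evenly spaced image set is available for photogrammetric processing; the file names and stride are examples only.

        # Hypothetical sketch: extract every n-th frame from a video recording.
        import cv2

        def extract_frames(video_path, stride=15, out_pattern="frame_{:04d}.png"):
            cap = cv2.VideoCapture(video_path)
            kept, index = 0, 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if index % stride == 0:
                    cv2.imwrite(out_pattern.format(kept), frame)
                    kept += 1
                index += 1
            cap.release()
            return kept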

  4. High refractive index immersion liquid for superresolution 3D imaging using sapphire-based aplanatic numerical aperture increasing lens optics.

    PubMed

    Laskar, Junaid M; Shravan Kumar, P; Herminghaus, Stephan; Daniels, Karen E; Schröter, Matthias

    2016-04-20

    Optically transparent immersion liquids with refractive index (n∼1.77) to match the sapphire-based aplanatic numerical aperture increasing lens (aNAIL) are necessary for achieving deep 3D imaging with high spatial resolution. We report that antimony tribromide (SbBr3) salt dissolved in liquid diiodomethane (CH2I2) provides a new high refractive index immersion liquid for optics applications. The refractive index is tunable from n=1.74 (pure) to n=1.873 (saturated), by adjusting either salt concentration or temperature; this allows it to match (or even exceed) the refractive index of sapphire. Importantly, the solution gives excellent light transmittance in the ultraviolet to near-infrared range, an improvement over commercially available immersion liquids. This refractive-index-matched immersion liquid formulation has enabled us to develop a sapphire-based aNAIL objective that has both high numerical aperture (NA=1.17) and long working distance (WD=12  mm). This opens up new possibilities for deep 3D imaging with high spatial resolution. PMID:27140083

  5. Nomad devices for interactions in immersive virtual environments

    NASA Astrophysics Data System (ADS)

    George, Paul; Kemeny, Andras; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa; Posselt, Javier; Icart, Emmanuel

    2013-03-01

    Renault is currently setting up a new CAVE™, a five-wall rear-projected virtual reality room with a combined 3D resolution of 100 Mpixels, distributed over sixteen 4k projectors and two 2k projectors, as well as an additional 3D HD collaborative powerwall. Renault's CAVE™ aims to answer the needs of the various vehicle conception steps [1]. Covering vehicle design, the subsequent engineering steps, ergonomic evaluation and perceived quality control, Renault has built up a list of use cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as the iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by current uses of nomad devices (multi-touch gestures, the iPhone UI look and feel, and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected on our test platform, a four-sided homemade low-cost virtual reality room powered by ultra-short-range and standard HD home projectors.

  6. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: each partner's individual task, and communication with each other. Both are important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies in both 3D shape evaluation and communication support in virtual space. The proposed system provides a dedicated viewpoint for each task: the viewpoint from behind the user's own avatar for smooth communication, and the avatar's-eye viewpoint for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for both 3D shape evaluation and communication. The system basically consists of a PC, an HMD and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users can restrict nonverbal communication. We have therefore tried to compensate for the loss of the partner avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. A sensory evaluation by paired comparison of 3D shapes in collaborative situations in virtual space and in real space, together with a questionnaire, was performed. The results demonstrate the effectiveness of InterActor's nodding in the collaborative situation.

  7. Applying a 3D Situational Virtual Learning Environment to the Real World Business--An Extended Research in Marketing

    ERIC Educational Resources Information Center

    Wang, Shwu-huey

    2012-01-01

    In order to understand (1) what kind of students can be facilitated through the help of a three-dimensional virtual learning environment (3D VLE), and (2) the relationship between a conventional test (i.e., a paper-and-pencil test) and the 3D VLE used in this study, the study designs a 3D virtual supermarket (3DVS) to help students transform their role…

  8. Image-Based Rendering of LOD1 3D City Models for traffic-augmented Immersive Street-view Navigation

    NASA Astrophysics Data System (ADS)

    Brédif, M.

    2013-10-01

    It may be argued that urban areas may now be modeled with sufficient details for realistic fly-through over the cities at a reasonable price point. Modeling cities at the street level for immersive street-view navigation is however still a very expensive (or even impossible) operation if one tries to match the level of detail acquired by street-view mobile mapping imagery. This paper proposes to leverage the richness of these street-view images with the common availability of nation-wide LOD1 3D city models, using an image-based rendering technique : projective multi-texturing. Such a coarse 3D city model may be used as a lightweight scene proxy of approximate coarse geometry. The images neighboring the interpolated viewpoint are projected onto this scene proxy using their estimated poses and calibrations and blended together according to their relative distance. This enables an immersive navigation within the image dataset that is perfectly equal to - and thus as rich as - original images when viewed from their viewpoint location, and which degrades gracefully in between viewpoint locations. Beyond proving the applicability of this preprocessing-free computer graphics technique to mobile mapping images and LOD1 3D city models, our contributions are three-fold. Firstly, image distortion is corrected online in the GPU, preventing an extra image resampling step. Secondly, externally-computed binary masks may be used to discard pixels corresponding to moving objects. Thirdly, we propose a shadowmap-inspired technique that prevents, at marginal cost, the projective texturing of surfaces beyond the first, as seen from the projected image viewpoint location. Finally, an augmented visualization application is introduced to showcase the proposed immersive navigation: images are unpopulated from vehicles using externally-computed binary masks and repopulated using a 3D visualization of a 2D traffic simulation.
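
    A minimal Python sketch of the view-dependent blending idea follows, assuming weights that fall off with the distance between the interpolated viewpoint and each image's capture position; the inverse-distance falloff is one simple choice and not necessarily the exact weighting used by the author.

        # Hedged sketch: per-image blending weights for projective multi-texturing,
        # largest weight for the capture viewpoint closest to the current viewpoint.
        import numpy as np

        def blend_weights(view_pos, image_positions, eps=1e-6):
            """Return normalized per-image blending weights (they sum to 1)."""
            d = np.linalg.norm(image_positions - view_pos, axis=1)
            w = 1.0 / (d + eps)
            return w / w.sum()

        cams = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
        print(blend_weights(np.array([1.0, 0.0, 0.0]), cams))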

  9. Approach to Constructing 3d Virtual Scene of Irrigation Area Using Multi-Source Data

    NASA Astrophysics Data System (ADS)

    Cheng, S.; Dou, M.; Wang, J.; Zhang, S.; Chen, X.

    2015-10-01

    For an irrigation area that is often complicated by various 3D artificial ground features and the natural environment, the disadvantages of traditional 2D GIS in spatial data representation, management, query, analysis and visualization are becoming more and more evident. Building a more realistic 3D virtual scene is thus especially urgent for irrigation area managers and decision makers, so that they can carry out various irrigation operations vividly and intuitively. Based on previous researchers' achievements, a simple, practical and cost-effective approach is proposed in this study, adopting 3D geographic information system (3D GIS) and remote sensing (RS) technology. Based on multi-source data such as Google Earth (GE) high-resolution remote sensing imagery, ASTER G-DEM, hydrological facility maps and so on, a 3D terrain model and ground feature models were created interactively. Both sets of models were then rendered with texture data and integrated under the ArcGIS platform. A vivid, realistic 3D virtual scene of the irrigation area, with good visual effect and primary GIS functions for data query and analysis, was constructed. Yet there is still a long way to go to establish a true 3D GIS for the irrigation area: the issues encountered in this study are discussed in depth and future research directions are pointed out at the end of the paper.

  11. A numerical method for solving the 3D unsteady incompressible Navier Stokes equations in curvilinear domains with complex immersed boundaries

    NASA Astrophysics Data System (ADS)

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow

  12. Teaching Digital Natives: 3-D Virtual Science Lab in the Middle School Science Classroom

    ERIC Educational Resources Information Center

    Franklin, Teresa J.

    2008-01-01

    This paper presents the development of a 3-D virtual environment in Second Life for the delivery of standards-based science content for middle school students in the rural Appalachian region of Southeast Ohio. A mixed method approach in which quantitative results of improved student learning and qualitative observations of implementation within…

  13. Exploring 3-D Virtual Reality Technology for Spatial Ability and Chemistry Achievement

    ERIC Educational Resources Information Center

    Merchant, Z.; Goetz, E. T.; Keeney-Kennicutt, W.; Cifuentes, L.; Kwok, O.; Davis, T. J.

    2013-01-01

    We investigated the potential of Second Life® (SL), a three-dimensional (3-D) virtual world, to enhance undergraduate students' learning of a vital chemistry concept. A quasi-experimental pre-posttest control group design was used to conduct the study. A total of 387 participants completed three assignment activities either in SL or using…

  14. Socialisation for Learning at a Distance in a 3-D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Edirisingha, Palitha; Nie, Ming; Pluciennik, Mark; Young, Ruth

    2009-01-01

    This paper reports findings of a pilot study that examined the pedagogical potential of "Second Life" (SL), a popular three-dimensional multi-user virtual environment (3-D MUVE) developed by the Linden Lab. The study is part of a 1-year research and development project titled "Modelling of Secondlife Environments" (http://www.le.ac.uk/moose)…

  15. Supporting Distributed Team Working in 3D Virtual Worlds: A Case Study in Second Life

    ERIC Educational Resources Information Center

    Minocha, Shailey; Morse, David R.

    2010-01-01

    Purpose: The purpose of this paper is to report on a study into how a three-dimensional (3D) virtual world (Second Life) can facilitate socialisation and team working among students working on a team project at a distance. This models the situation in many commercial sectors where work is increasingly being conducted across time zones and between…

  16. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
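
    The gesture-matching core described above relies on dynamic time warping (DTW); the generic Python sketch below compares an unknown trajectory against stored templates and picks the closest one. It is a textbook DTW, not the authors' implementation, and the function names are assumptions.

        # Hedged sketch of DTW-based gesture classification.
        import numpy as np

        def dtw_distance(a, b):
            """DTW cost between two (T, D) trajectories using Euclidean frame costs."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def classify(gesture, templates):
            """templates: dict name -> (T, D) array; returns the best-matching name."""
            return min(templates, key=lambda name: dtw_distance(gesture, templates[name]))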

  17. The Cognitive Apprenticeship Theory for the Teaching of Mathematics in an Online 3D Virtual Environment

    ERIC Educational Resources Information Center

    Bouta, Hara; Paraskeva, Fotini

    2013-01-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective.…

  18. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629

  19. Three Primary School Students' Cognition about 3D Rotation in a Virtual Reality Learning Environment

    ERIC Educational Resources Information Center

    Yeh, Andy

    2010-01-01

    This paper reports on three primary school students' explorations of 3D rotation in a virtual reality learning environment (VRLE) named VRMath. When asked to investigate if you would face the same direction when you turn right 45 degrees first then roll up 45 degrees, or when you roll up 45 degrees first then turn right 45 degrees, the students…
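
    The underlying mathematics can be shown in a few lines: composing a 45-degree yaw and a 45-degree pitch in the two possible orders yields different headings, because 3-D rotations do not commute. The axis conventions below (forward = +x, up = +z, left = +y) and the world-fixed order of application are assumptions made only for this illustration.

        # Hedged sketch: 3-D rotations do not commute.
        import numpy as np

        def turn_right(deg):   # yaw: forward rotates toward -y (the right-hand side)
            t = np.radians(deg)
            return np.array([[ np.cos(t), np.sin(t), 0.0],
                             [-np.sin(t), np.cos(t), 0.0],
                             [0.0, 0.0, 1.0]])

        def roll_up(deg):      # pitch: forward rotates toward +z (nose lifts)
            t = np.radians(deg)
            return np.array([[np.cos(t), 0.0, -np.sin(t)],
                             [0.0, 1.0, 0.0],
                             [np.sin(t), 0.0,  np.cos(t)]])

        forward = np.array([1.0, 0.0, 0.0])
        a = roll_up(45) @ (turn_right(45) @ forward)   # yaw applied first, then pitch
        b = turn_right(45) @ (roll_up(45) @ forward)   # pitch applied first, then yaw
        print(np.round(a, 3))   # approximately [ 0.5  -0.707  0.5  ]
        print(np.round(b, 3))   # approximately [ 0.5  -0.5    0.707]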

  20. GEARS a 3D Virtual Learning Environment and Virtual Social and Educational World Used in Online Secondary Schools

    ERIC Educational Resources Information Center

    Barkand, Jonathan; Kush, Joseph

    2009-01-01

    Virtual Learning Environments (VLEs) are becoming increasingly popular in online education environments and have multiple pedagogical advantages over more traditional approaches to education. VLEs include 3D worlds where students can engage in simulated learning activities such as Second Life. According to Claudia L'Amoreaux at Linden Lab, "at…

  1. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry

    NASA Astrophysics Data System (ADS)

    Villarrubia, J. S.; Tondare, V. N.; Vladár, A. E.

    2016-03-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples—mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within close to 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
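
    A common way to build such a rough skin is to prescribe a power spectral density, assign each spatial frequency an amplitude proportional to its square root and a random phase, and take an inverse FFT. The 1-D Python sketch below illustrates the idea with an example power-law spectrum, which is not necessarily the one used in the paper.

        # Hedged sketch: synthesize a rough profile with a prescribed PSD.
        import numpy as np

        def rough_profile(n=1024, dx=1.0, rms=2.0, seed=0):
            rng = np.random.default_rng(seed)
            freqs = np.fft.rfftfreq(n, d=dx)
            psd = np.zeros_like(freqs)
            psd[1:] = 1.0 / freqs[1:] ** 2                      # example power-law PSD
            phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
            spectrum = np.sqrt(psd) * np.exp(1j * phases)       # amplitude + random phase
            z = np.fft.irfft(spectrum, n=n)
            return z * (rms / z.std())                          # scale to target RMS roughness

        profile = rough_profile()
        print(profile.shape, profile.std())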

  2. Heart rate variability (HRV) during virtual reality immersion

    PubMed Central

    Malińska, Marzena; Zużewicz, Krystyna; Bugajska, Joanna; Grabowski, Andrzej

    2015-01-01

    The goal of the study was to assess the effects of an hour-long training session in handling a virtual environment (sVR) and of watching a stereoscopic 3D movie on the mechanisms of autonomic heart rate (HR) regulation in subjects not predisposed to motion sickness. In order to exclude predisposition to motion sickness, all the participants (n=19) underwent a Coriolis test. During the exposure to 3D and sVR, the ECG signal was continuously recorded using the Holter method. For twelve consecutive 5-min epochs of the ECG signal, heart rate variability (HRV) was analysed in the time and frequency domains. Thirty minutes after the beginning of the training in handling the virtual workstation, a significant increase in LF spectral power was noted. The values of the sympathovagal LF/HF index during sVR indicated a significant increase in sympathetic predominance in four time intervals, namely between the 5th and the 10th minute, between the 15th and the 20th minute, between the 35th and the 40th minute and between the 55th and the 60th minute of exposure. PMID:26327262

  3. Heart rate variability (HRV) during virtual reality immersion.

    PubMed

    Malińska, Marzena; Zużewicz, Krystyna; Bugajska, Joanna; Grabowski, Andrzej

    2015-01-01

    The goal of the study was to assess the effects of an hour-long training session in handling a virtual environment (sVR) and of watching a stereoscopic 3D movie on the mechanisms of autonomic heart rate (HR) regulation in subjects not predisposed to motion sickness. In order to exclude predisposition to motion sickness, all the participants (n=19) underwent a Coriolis test. During the exposure to 3D and sVR, the ECG signal was continuously recorded using the Holter method. For twelve consecutive 5-min epochs of the ECG signal, heart rate variability (HRV) was analysed in the time and frequency domains. Thirty minutes after the beginning of the training in handling the virtual workstation, a significant increase in LF spectral power was noted. The values of the sympathovagal LF/HF index during sVR indicated a significant increase in sympathetic predominance in four time intervals, namely between the 5th and the 10th minute, between the 15th and the 20th minute, between the 35th and the 40th minute and between the 55th and the 60th minute of exposure. PMID:26327262
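
    For readers unfamiliar with the LF/HF analysis reported above, the hedged Python sketch below computes the ratio from a list of RR intervals: the tachogram is resampled evenly, a Welch spectrum is estimated, and power is integrated in the conventional LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands. The band limits and 4 Hz resampling rate follow common HRV practice and are not taken from this study.

        # Hedged sketch: LF/HF ratio from RR intervals for one ECG epoch.
        import numpy as np
        from scipy.signal import welch

        def lf_hf_ratio(rr_ms, fs=4.0):
            """rr_ms: successive RR intervals in milliseconds for one epoch."""
            rr_s = np.asarray(rr_ms) / 1000.0
            beat_times = np.cumsum(rr_s)
            t_even = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
            tachogram = np.interp(t_even, beat_times, rr_s)     # evenly sampled RR series
            f, pxx = welch(tachogram - tachogram.mean(), fs=fs,
                           nperseg=min(256, t_even.size))
            lf_band = (f >= 0.04) & (f < 0.15)
            hf_band = (f >= 0.15) & (f < 0.40)
            lf = np.trapz(pxx[lf_band], f[lf_band])
            hf = np.trapz(pxx[hf_band], f[hf_band])
            return lf / hf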

  4. Novel Web-based Education Platforms for Information Communication utilizing Gamification, Virtual and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2015-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data sets on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting the teaching and learning of concepts in the atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow users to access and visualize large-scale data on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts. It provides a visually striking platform with realistic terrain and weather information and water simulation. The web-based simulation system provides an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes, including virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users.

  5. Using virtual 3D audio in multispeech channel and multimedia environments

    NASA Astrophysics Data System (ADS)

    Orosz, Michael D.; Karplus, Walter J.; Balakrishnan, Jerry D.

    2000-08-01

    The advantages and disadvantages of using virtual 3-D audio in mission-critical, multimedia display interfaces were evaluated. The 3D audio platform seems to be an especially promising candidate for aircraft cockpits, flight control rooms, and other command and control environments in which operators must make mission-critical decisions while handling demanding and routine tasks. Virtual audio signal processing creates the illusion for a listener wearing conventional earphones that each of a multiplicity of simultaneous speech or audio channels originates from a different, program-specified location in virtual space. To explore the possible uses of this new, readily available technology, a test bed simulating some of the conditions experienced by the chief flight test coordinator at NASA's Dryden Flight Research Center was designed and implemented. Thirty test subjects simultaneously performed routine tasks requiring constant hand-eye coordination, while monitoring four speech channels, each generating continuous speech signals, for the occurrence of pre-specified keywords. Performance measures included accuracy in identifying the keywords, accuracy in identifying the speaker of the keyword, and response time. We found substantial improvements on all of these measures when comparing virtual audio with conventional, monaural transmissions. We also explored the effects on operator performance of different spatial configurations of the audio sources in 3-D space, of simulated movement (dither) in the source locations, and of providing graphical redundancy. Some of these manipulations were less effective and may even decrease performance efficiency, even though they improve some aspects of the virtual space simulation.
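
    A crude flavour of virtual audio spatialization can be conveyed with a simple interaural time and level difference model, as in the hedged Python sketch below; production systems (including, presumably, the one evaluated here) use measured head-related transfer functions instead, so this is only an illustration and all parameter values are assumptions.

        # Hedged sketch: pan a mono speech channel to a virtual azimuth using a
        # simplified interaural time difference (ITD) and level difference (ILD).
        import numpy as np

        def spatialize(mono, azimuth_deg, fs=44100, head_radius=0.0875, c=343.0):
            """Return an (N, 2) stereo buffer with the source panned toward azimuth."""
            az = np.radians(azimuth_deg)                 # +90 deg = fully to the right
            itd = (head_radius / c) * (az + np.sin(az))  # Woodworth-style ITD estimate
            delay = int(round(abs(itd) * fs))            # whole-sample delay
            gain_near = 1.0
            gain_far = 10 ** (-6.0 * abs(np.sin(az)) / 20.0)   # up to roughly 6 dB ILD
            delayed = np.concatenate([np.zeros(delay), mono])[: mono.size]
            if azimuth_deg >= 0:                         # source on the right
                left, right = gain_far * delayed, gain_near * mono
            else:
                left, right = gain_near * mono, gain_far * delayed
            return np.stack([left, right], axis=1)

        stereo = spatialize(np.random.randn(44100), azimuth_deg=45)
        print(stereo.shape)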

  6. A parallel overset-curvilinear-immersed boundary framework for simulating complex 3D incompressible flows

    PubMed Central

    Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis

    2013-01-01

    We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331

  7. Accuracy of 3D Virtual Planning of Corrective Osteotomies of the Distal Radius.

    PubMed

    Stockmans, Filip; Dezillie, Marleen; Vanhaecke, Jeroen

    2013-11-01

    Corrective osteotomies of the distal radius for symptomatic malunion are time-tested procedures that rely on accurate corrections. Patients with combined intra- and extra-articular malunions present a challenging deformity. Virtual planning and patient-specific instruments (PSIs) to transfer the planning into the operating room have been used both to simplify the surgery and to make it more accurate. This report focuses on the clinically achieved accuracy in four patients treated between 2008 and 2012 with virtual planning and PSIs for a combined intra- and extraarticular malunion of the distal radius. The accuracy of the correction is quantified by comparing the virtual three-dimensional (3D) planning model with the postoperative 3D bone model. For the extraarticular malunion the 3D volar tilt, 3D radial inclination and 3D ulnar variance are measured. The volar tilt is undercorrected in all cases with an average of -6 ± 6°. The average difference between the postoperative and planned 3D radial inclination was -1 ± 5°. The average difference between the postoperative and planned 3D ulnar variances is 0 ± 1 mm. For the evaluation of the intraarticular malunion, both the arc method of measurement and distance map measurement are used. The average postoperative maximum gap is 2.1 ± 0.9 mm. The average maximum postoperative step-off is 1.3 ± 0.4 mm. The average distance between the postoperative and planned articular surfaces is 1.1 ± 0.6 mm as determined in the distance map measurement. There is a tendency to achieve higher accuracy as experience builds up, both on the surgeon's side and on the design engineering side. We believe this technology holds the potential to achieve consistent accuracy of very complex corrections. PMID:24436834

  8. Accuracy of 3D Virtual Planning of Corrective Osteotomies of the Distal Radius

    PubMed Central

    Stockmans, Filip; Dezillie, Marleen; Vanhaecke, Jeroen

    2013-01-01

    Corrective osteotomies of the distal radius for symptomatic malunion are time-tested procedures that rely on accurate corrections. Patients with combined intra- and extra-articular malunions present a challenging deformity. Virtual planning and patient-specific instruments (PSIs) to transfer the planning into the operating room have been used both to simplify the surgery and to make it more accurate. This report focuses on the clinically achieved accuracy in four patients treated between 2008 and 2012 with virtual planning and PSIs for a combined intra- and extraarticular malunion of the distal radius. The accuracy of the correction is quantified by comparing the virtual three-dimensional (3D) planning model with the postoperative 3D bone model. For the extraarticular malunion the 3D volar tilt, 3D radial inclination and 3D ulnar variance are measured. The volar tilt is undercorrected in all cases with an average of –6 ± 6°. The average difference between the postoperative and planned 3D radial inclination was –1 ± 5°. The average difference between the postoperative and planned 3D ulnar variances is 0 ± 1 mm. For the evaluation of the intraarticular malunion, both the arc method of measurement and distance map measurement are used. The average postoperative maximum gap is 2.1 ± 0.9 mm. The average maximum postoperative step-off is 1.3 ± 0.4 mm. The average distance between the postoperative and planned articular surfaces is 1.1 ± 0.6 mm as determined in the distance map measurement. There is a tendency to achieve higher accuracy as experience builds up, both on the surgeon's side and on the design engineering side. We believe this technology holds the potential to achieve consistent accuracy of very complex corrections. PMID:24436834

  9. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    PubMed

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the way in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data, and display that data to the end user using consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the Oculus Rift DK2 - as well as two different user interaction devices - a space mouse and traditional keyboard controls. PMID:27046584

  10. Effect of viewing mode on pathfinding in immersive Virtual Reality.

    PubMed

    White, Paul J; Byagowi, Ahmad; Moussavi, Zahra

    2015-08-01

    The use of Head Mounted Displays (HMDs) to view Virtual Reality Environments (VREs) has received much attention recently. This paper reports on the difference between human navigation in a VRE viewed through an HMD and navigation in the same VRE viewed on a laptop PC display. A novel Virtual Reality (VR) Navigation input device (VRNChair), designed by our team, was paired with an Oculus Rift DK2 HMD. People used the VRNChair to navigate a VRE, and we analyzed their navigational trajectories with and without the HMD to investigate possible differences in performance due to the display device. It was found that people's navigational trajectories were more accurate while wearing the HMD compared to viewing an LCD monitor; however, the duration to complete a navigation task remained the same. This implies that increased immersion in VR results in an improvement in pathfinding. In addition, motion sickness caused by using an HMD can be reduced if one uses an input device such as our VRNChair. The VRNChair paired with an HMD provides vestibular stimulation as one moves in the VRE, because movements in the VRE are synchronized with movements in the real environment. PMID:26737323

  11. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  12. A three dimensional immersed smoothed finite element method (3D IS-FEM) for fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong

    2013-02-01

    A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral elements is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of nonlinear solids placed within an incompressible viscous fluid governed by the Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flows and smoothed finite element methods to calculate the transient dynamic responses of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective and sufficiently general technique via simple linear interpolation is presented based on Lagrangian fictitious fluid meshes coinciding with the moving and deforming solid meshes. In comparisons with reference works, including experiments, the proposed 3D IS-FEM is shown to be stable with second-order spatial convergence, and its results are fairly insensitive to the mesh size ratio over a wide range.

  13. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat, tabletop surface and enables multiple viewers to observe raised 3D images from any angle over a full 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device shapes a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray represents one that passes through a corresponding point on a virtual object's surface and is directed toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their perspectives because the images include binocular disparity. The entire principle is installed beneath the table, so the tabletop area remains clear. No ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  14. Representing 3D virtual objects: interaction between visuo-spatial ability and type of exploration.

    PubMed

    Meijer, Frank; van den Broek, Egon L

    2010-03-17

    We investigated individual differences in interactively exploring 3D virtual objects. 36 participants explored 24 simple and 24 difficult objects (composed of respectively three and five Biederman geons) actively, passively, or not at all. Both their 3D mental representations of the objects and their visuo-spatial ability (VSA) were assessed. Results show that, regardless of the object's complexity, people with a low VSA benefit from active exploration of objects, whereas people with a middle or high VSA do not. These findings extend and refine earlier research on interactively learning visuo-spatial information and underline the importance of taking individual differences into account. PMID:20116394

  15. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    PubMed Central

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-01

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901
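
    The DTW step mentioned above compares a newly captured gesture trajectory against stored templates. As a rough illustration of that matching step only (not the authors' implementation; the sequences and distance measure below are made up), a minimal DTW distance in Python might look like this:

```python
# Minimal dynamic time warping (DTW) distance between two 1-D gesture
# trajectories -- a sketch of the kind of template matching the abstract
# describes, not the authors' implementation.
import numpy as np

def dtw_distance(a, b):
    """Return the DTW alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Example: compare a recorded gesture against a template of different length.
template = np.sin(np.linspace(0, np.pi, 20))
gesture = np.sin(np.linspace(0, np.pi, 27)) + 0.05 * np.random.randn(27)
print(dtw_distance(template, gesture))
```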

  16. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    PubMed

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-01

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901

  17. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    NASA Astrophysics Data System (ADS)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered unencumbered user interfaces and 3D interaction technologies. Such shortcomings present severe limitations to the application of virtual reality (VR) technology to time-critical applications as well as employment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high degree of flexibility with respect to the system requirements (display and I/O devices), as well as the ability to seamlessly and intuitively switch between different interaction modalities and devices, is sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to Virtual Environments focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.

  18. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    ERIC Educational Resources Information Center

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  19. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    SciTech Connect

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M.; Kettunen, L.

    1995-08-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed.

  20. Virtual embryology: a 3D library reconstructed from human embryo sections and animation of development process.

    PubMed

    Komori, M; Miura, T; Shiota, K; Minato, K; Takahashi, T

    1995-01-01

    The volumetric shape of a human embryo and its development are hard to comprehend when viewed only as 2D schemes in a textbook or as microscopic sectional images. In this paper, a CAI and research support system for human embryology using multimedia presentation techniques is described. In this system, 3D data is acquired from a series of sliced specimens. The 3D structure can be viewed interactively by rotating, extracting, and truncating the whole body or an organ. Moreover, the development process of embryos can be animated using a morphing technique applied to specimens at several stages. The system is intended to be used interactively, like a virtual reality system. Hence, the system is called Virtual Embryology. PMID:8591413

  1. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506
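
    The scalability analysis above relies on Amdahl's law, which bounds the speedup of a partly parallel workload by its serial fraction. A minimal sketch of that estimate follows; the parallel fraction used below is illustrative, not a value reported in the abstract:

```python
# Amdahl's-law speedup estimate, of the kind used in the scalability analysis
# the abstract mentions. The parallel fraction p is an assumed example value.
def amdahl_speedup(p, n):
    """Speedup on n cores when a fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (1, 12, 48, 192):
    print(cores, round(amdahl_speedup(0.98, cores), 1))
```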

  2. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506

  3. An improved virtual aberration model to simulate mask 3D and resist effects

    NASA Astrophysics Data System (ADS)

    Kanaya, Reiji; Fujii, Koichi; Imai, Motokatsu; Matsuyama, Tomoyuki; Tsuzuki, Takao; Lin, Qun Ying

    2015-03-01

    As shrinkage of design features progresses, the difference in best focus positions among different patterns is becoming a fatal issue, especially when many patterns co-exist in a layer. The problem arises from three major factors: aberrations of projection optics, mask 3D topography effects, and resist thickness effects. Aberrations in projection optics have already been thoroughly investigated, but mask 3D topography effects and resist thickness effects are still under study. It is well known that mask 3D topography effects can be simulated by various Electro-magnetic Field (EMF) analysis methods. However, it is almost impossible to use them for full chip modeling because all of these methods are extremely computationally intensive. Consequently, they usually apply only to a limited range of mask patterns which are about tens of square micro meters in area. Resist thickness effects on best focus positions are rarely treated as a topic of lithography investigations. Resist 3D effects are treated mostly for resist profile prediction, which also requires an intensive EMF analysis when one needs to predict it accurately. In this paper, we present a simplified Virtual Aberration (VA) model to simulate both mask 3D induced effects and resist thickness effects. A conventional simulator, when applied with this simplified method, can factor in both mask 3D topography effects and resist thickness effects. Thus it can be used to model inter-pattern Best Focus Difference (BFD) issues with the least amount of rigorous EMF analysis.

  4. An Interactive Virtual 3D Tool for Scientific Exploration of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Traxler, Christoph; Hesina, Gerd; Gupta, Sanjeev; Paar, Gerhard

    2014-05-01

    In this paper we present an interactive 3D visualization tool for scientific analysis and planning of planetary missions. At the moment scientists have to look at individual camera images separately. There is no tool to combine them in three dimensions and look at them seamlessly as a geologist would do (by walking backwards and forwards resulting in different scales). For this reason a virtual 3D reconstruction of the terrain that can be interactively explored is necessary. Such a reconstruction has to consider multiple scales ranging from orbital image data to close-up surface image data from rover cameras. The 3D viewer allows seamless zooming between these various scales, giving scientists the possibility to relate small surface features (e.g. rock outcrops) to larger geological contexts. For a reliable geologic assessment a realistic surface rendering is important. Therefore the material properties of the rock surfaces will be considered for real-time rendering. This is achieved by an appropriate Bidirectional Reflectance Distribution Function (BRDF) estimated from the image data. The BRDF is implemented to run on the Graphical Processing Unit (GPU) to enable realistic real-time rendering, which allows a naturalistic perception for scientific analysis. Another important aspect for realism is the consideration of natural lighting conditions, which means skylight to illuminate the reconstructed scene. In our case we provide skylights from Mars and Earth, which allows switching between these two modes of illumination. This gives geologists the opportunity to perceive rock outcrops from Mars as they would appear on Earth, facilitating scientific assessment. Besides viewing the virtual reconstruction on multiple scales, scientists can also perform various measurements, e.g. the geo-coordinates of a selected point or the distance between two surface points. Rover or other models can be placed into the scene and snapped onto certain locations of the terrain. These are
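
    The real-time rendering step above evaluates an estimated BRDF at each surface point. As a loose illustration only, the sketch below shades a point with a simple Lambertian-plus-specular stand-in; the actual BRDF in the paper is estimated from image data and runs on the GPU, and every parameter here is invented:

```python
# Toy per-point shading with a simple diffuse + specular reflectance model.
# A Lambertian / Blinn-Phong stand-in that only illustrates how a BRDF term
# enters the shading; not the measured BRDF the paper estimates.
import numpy as np

def shade(normal, light_dir, view_dir, albedo=0.4, spec=0.05, shininess=16):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)           # half vector
    diffuse = albedo * max(np.dot(n, l), 0.0)     # Lambertian term
    specular = spec * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular

print(shade(np.array([0, 0, 1.0]), np.array([0.3, 0.2, 1.0]), np.array([0, 0, 1.0])))
```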

  5. Chaotic orbits tracked by a 3D asymmetric immersed solid at high Reynolds numbers using a novel Gerris-Immersed Solid (DNS) Solver

    NASA Astrophysics Data System (ADS)

    Shui, Pei; Popinet, Stéphane; Valluri, Prashant; Govindarajan, Rama

    2014-11-01

    The motion of a neutrally buoyant ellipsoidal solid with an initial momentum has been theoretically predicted to be chaotic in inviscid flow by Aref (1993). On the other hand, the particle could stop moving when the damping viscous force is strong enough. This work provides numerical evidence for 3D chaotic motion of a neutrally buoyant general ellipsoidal solid and suggests criteria for triggering this motion. The study also shows that the translational/rotational energy ratio plays the key role in the motion pattern, while the particle geometry and density aspect ratios also have some influence on the chaotic behaviour. We have developed a novel variant of the immersed solid solver under the framework of the Gerris flow package of Popinet et al. (2003). Our solid solver, the Gerris Immersed Solid Solver (GISS), is capable of handling six-degree-of-freedom motion of particles with arbitrary geometry and number in three dimensions and can precisely predict the hydrodynamic interactions and their effects on particle trajectories. The reliability and accuracy have been checked against a series of classical studies, testing both translational and rotational motions with a wide range of flow properties.

  6. Human fear conditioning conducted in full immersion 3-dimensional virtual reality.

    PubMed

    Huff, Nicole C; Zeilinski, David J; Fecteau, Matthew E; Brady, Rachael; LaBar, Kevin S

    2010-01-01

    conditioning and extinction parameters to yield empirical data that can suggest better treatment options and/or analyze mechanistic hypotheses. In order to test the hypothesis that fear conditioning may be richly encoded and context specific when conducted in a fully immersive environment, we developed distinct virtual reality 3-D contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the CS in order to further evoke orienting responses and a feeling of "presence" in subjects. Skin conductance response served as the dependent measure of fear acquisition, memory retention and extinction. PMID:20736913

  7. The cognitive apprenticeship theory for the teaching of mathematics in an online 3D virtual environment

    NASA Astrophysics Data System (ADS)

    Bouta, Hara; Paraskeva, Fotini

    2013-03-01

    Research spanning two decades shows that there is a continuing development of 3D virtual worlds and investment in such environments for educational purposes. Research stresses the need for these environments to be well-designed and for suitable pedagogies to be implemented in the teaching practice in order for these worlds to be fully effective. To this end, we propose a pedagogical framework based on the cognitive apprenticeship for deriving principles and guidelines to inform the design, development and use of a 3D virtual environment. This study examines how the use of a 3D virtual world facilitates the teaching of mathematics in primary education by combining design principles and guidelines based on the Cognitive Apprenticeship Theory and the teaching methods that this theory introduces. We focus specifically on 5th and 6th grade students' engagement (behavioral, affective and cognitive) while learning fractional concepts over a period of two class sessions. Quantitative and qualitative analyses indicate considerable improvement in the engagement of the students who participated in the experiment. This paper presents the findings regarding students' cognitive engagement in the process of comprehending basic fractional concepts - notoriously hard for students to master. The findings are encouraging and suggestions are made for further research.

  8. 2D virtual texture on 3D real object with coded structured light

    NASA Astrophysics Data System (ADS)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and by capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between cameras and projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object surface. We propose a global and automatic method to virtually texture a 3D real object.
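
    The correspondence step above relies on decoding the projected light patterns at each camera pixel. The sketch below decodes a stack of plain binary-coded patterns into projector column indices; this is just one simple coding scheme among several the paper could use, and the synthetic images and threshold are illustrative rather than taken from the paper:

```python
# Decoding a stack of binary structured-light patterns into per-pixel
# projector column indices. Plain binary codes are used here for brevity.
import numpy as np

def decode_binary_patterns(captured, threshold=0.5):
    """captured: (n_patterns, H, W) images of binary column codes, MSB first."""
    bits = (captured > threshold).astype(np.uint32)
    codes = np.zeros(captured.shape[1:], dtype=np.uint32)
    for k in range(captured.shape[0]):
        codes = (codes << 1) | bits[k]
    return codes  # projector column index per camera pixel

# Tiny synthetic example: 3 patterns encode 8 projector columns across 8 pixels.
cols = np.arange(8)
patterns = np.stack([((cols >> b) & 1).astype(float) for b in (2, 1, 0)])[:, None, :]
print(decode_binary_patterns(patterns)[0])  # -> [0 1 2 3 4 5 6 7]
```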

  9. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  10. Effects of 3D Virtual Reality of Plate Tectonics on Fifth Grade Students' Achievement and Attitude toward Science

    ERIC Educational Resources Information Center

    Kim, Paul

    2006-01-01

    This study examines the effects of a teaching method using 3D virtual reality simulations on achievement and attitude toward science. An experiment was conducted with fifth-grade students (N = 41) to examine the effects of 3D simulations, designed to support inquiry-based science curriculum. An ANOVA analysis revealed that the 3D group scored…

  11. 3D Virtual Worlds as Art Media and Exhibition Arenas: Students' Responses and Challenges in Contemporary Art Education

    ERIC Educational Resources Information Center

    Lu, Lilly

    2013-01-01

    3D virtual worlds (3D VWs) are considered one of the emerging learning spaces of the 21st century; however, few empirical studies have investigated educational applications and student learning aspects in art education. This study focused on students' responses to and challenges with 3D VWs in both aspects. The findings show that most…

  12. The Rufous Hummingbird in hovering flight -- full-body 3D immersed boundary simulation

    NASA Astrophysics Data System (ADS)

    Ferreira de Sousa, Paulo; Luo, Haoxiang; Bocanegra Evans, Humberto

    2009-11-01

    Hummingbirds are an interesting case study for the development of micro-air vehicles since they combine the high flight stability of insects with the low metabolic power per unit of body mass of bats, during hovering flight. In this study, simulations of a full-body hummingbird in hovering flight were performed at a Reynolds number around 3600. The simulations employ a versatile sharp-interface immersed boundary method recently enhanced at our lab that can treat thin membranes and solid bodies alike. Implemented on a Cartesian mesh, the numerical method allows us to capture the vortex dynamics of the wake accurately and efficiently. The whole-body simulation will allow us to clearly identify the three general patterns of flow velocity around the body of the hummingbird referred to in Altshuler et al. (Exp Fluids 46 (5), 2009). One focus of the current study is to understand the interaction between the wakes of the two wings at the end of the upstroke, and how the tail actively deflects the flow to contribute to pitch stability. Another focus of the study will be to identify the pair of unconnected loops underneath each wing.

  13. Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment

    PubMed Central

    Pouke, Matti; Häkkilä, Jonna

    2013-01-01

    Homecare systems for elderly people are becoming increasingly important for both economic reasons and patients' preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and the user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show firstly that systems taking advantage of 3D virtual world visualization techniques have potential, especially due to their privacy-preserving and simplified information presentation style, and secondly that simple representations and glanceability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand. PMID:24351747

  14. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition, at the Archaeological Museum in Milan, by making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this purpose: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were implemented by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that were carried out in the lab.

  15. Mackay campus of environmental education and digital cultural construction: the application of 3D virtual reality

    NASA Astrophysics Data System (ADS)

    Chien, Shao-Chi; Chung, Yu-Wei; Lin, Yi-Hsuan; Huang, Jun-Yi; Chang, Jhih-Ting; He, Cai-Ying; Cheng, Yi-Wen

    2012-04-01

    This study uses 3D virtual reality technology to create the "Mackay campus of the environmental education and digital cultural 3D navigation system" for local historical sites in the Tamsui (Hoba) area, in hopes of providing tourism information and navigation through historical sites using a 3D navigation system. We used AutoCAD, SketchUp, and SpaceEyes 3D software to construct the virtual reality scenes and create the school's historical sites, such as the House of Reverends, the House of Maidens, the Residence of Mackay, and the Education Hall. With this technology we completed the Mackay campus environmental education and digital cultural platform. The platform we established can indeed achieve the desired function of providing tourism information and historical site navigation. The interactive multimedia style and the presentation of the information will allow users to obtain a direct information response. In addition to showing the external appearances of buildings, the navigation platform can also allow users to enter the buildings to view lifelike scenes and textual information related to the historical sites. The historical sites are designed according to their actual size, which gives users a more realistic feel. In terms of the navigation route, the navigation system does not force users along a fixed route, but instead allows users to freely control the route they would like to take to view the historical sites on the platform.

  16. Virtual 3D bladder reconstruction for augmented medical records from white light cystoscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective and data storage limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy and new opportunities for longitudinal studies of cancer recurrence.

  17. Early pregnancy placental bed and fetal vascular volume measurements using 3-D virtual reality.

    PubMed

    Reus, Averil D; Klop-van der Aa, Josine; Rifouna, Maria S; Koning, Anton H J; Exalto, Niek; van der Spek, Peter J; Steegers, Eric A P

    2014-08-01

    In this study, a new 3-D Virtual Reality (3D VR) technique for examining placental and uterine vasculature was investigated. The validity of placental bed vascular volume (PBVV) and fetal vascular volume (FVV) measurements was assessed and associations of PBVV and FVV with embryonic volume, crown-rump length, fetal birth weight and maternal parity were investigated. One hundred thirty-two patients were included in this study, and measurements were performed in 100 patients. Using V-Scope software, 100 3-D Power Doppler data sets of 100 pregnancies at 12 wk of gestation were analyzed with 3D VR in the I-Space Virtual Reality system. Volume measurements were performed with semi-automatic, pre-defined parameters. The inter-observer and intra-observer agreement was excellent with all intra-class correlation coefficients >0.93. PBVVs of multiparous women were significantly larger than the PBVVs of primiparous women (p = 0.008). In this study, no other associations were found. In conclusion, V-Scope offers a reproducible method for measuring PBVV and FVV at 12 wk of gestation, although we are unsure whether the volume measured represents the true volume of the vasculature. Maternal parity influences PBVV. PMID:24798392

  18. Fast extraction of minimal paths in 3D images and applications to virtual endoscopy.

    PubMed

    Deschamps, T; Cohen, L D

    2001-12-01

    The aim of this article is to build trajectories for virtual endoscopy inside 3D medical images in as automatic a way as possible. Usually the construction of this trajectory is left to the clinician, who must define some points on the path manually using three orthogonal views. But for a complex structure such as the colon, those views give little information on the shape of the object of interest. The path construction in 3D images becomes a very tedious task and precise a priori knowledge of the structure is needed to determine a suitable trajectory. We propose a more automatic path tracking method to overcome those drawbacks: we are able to build a path, given only one or two end points and the 3D image as inputs. This work is based on previous work by Cohen and Kimmel [Int. J. Comp. Vis. 24 (1) (1997) 57] for extracting paths in 2D images using the Fast Marching algorithm. Our original contribution is twofold. On the one hand, we present a general technical contribution which extends minimal paths to 3D images and offers new improvements of the approach that are relevant in 2D as well as in 3D to extract linear structures in images. It includes techniques to make the path extraction scheme faster and easier, by reducing the user interaction. We also develop a new method to extract a centered path in tubular structures. Synthetic and real medical images are used to illustrate each contribution. On the other hand, we show that our method can be efficiently applied to the problem of finding a centered path in tubular anatomical structures with minimum interactivity, and that this path can be used for virtual endoscopy. Results are shown in various anatomical regions (colon, brain vessels, arteries) with different 3D imaging protocols (CT, MR). PMID:11731307
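
    As a rough analogue of the minimal-path idea, the sketch below runs Dijkstra's algorithm on a discrete 2D cost grid. The paper's Fast Marching method instead solves the continuous Eikonal equation and extends to 3D, so this is only a stand-in to illustrate the concept; the grid, costs and endpoints are invented:

```python
# Discrete shortest-path cost on a 2D grid with Dijkstra's algorithm -- a
# simple stand-in for Fast Marching minimal-path extraction.
import heapq
import numpy as np

def minimal_path_cost(cost, start, goal):
    """Accumulated cost of the cheapest 4-connected path from start to goal."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            return d
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and d + cost[ni, nj] < dist[ni, nj]:
                dist[ni, nj] = d + cost[ni, nj]
                heapq.heappush(heap, (dist[ni, nj], (ni, nj)))
    return np.inf

grid = np.ones((50, 50))
grid[10:40, 10:40] = 5.0  # high-cost region the path should route around
print(minimal_path_cost(grid, (0, 0), (49, 49)))
```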

  19. Enabling Field Experiences in Introductory Geoscience Classes through the Use of Immersive Virtual Reality

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.; Smith, E.; Sellers, V.; Wyant, P.; Boyer, D. M.; Mobley, C.; Brame, S.

    2015-12-01

    Although field experiences are an important aspect of geoscience education, the opportunity to provide physical world experiences to large groups of introductory students is often limited by access, logistical, and financial constraints. Our project (NSF IUSE 1504619) is investigating the use of immersive virtual reality (VR) technologies as a surrogate for real field experiences in introductory geosciences classes. We are developing a toolbox that leverages innovations in the field of VR, including the Oculus Rift and Google Cardboard, to enable every student in an introductory geology classroom the opportunity to have a first-person virtual field experience in the Grand Canyon. We have opted to structure our VR experience as an interactive game where students must explore the Canyon to accomplish a series of tasks designed to emphasize key aspects of geoscience learning. So far we have produced two demo products for the virtual field trip. The first is a standalone "Rock Box" app developed for the iPhone, which allows students to select different rock samples, examine them in 3D, and obtain basic information about the properties of each sample. The app can act as a supplement to the traditional rock box used in physical geology labs. The second product is a fully functioning VR environment for the Grand Canyon developed using satellite-based topographic and imagery data to retain real geologic features within the experience. Players can freely navigate to explore anywhere they desire within the Canyon, but are guided to points of interest where they are able to complete exercises that will be aligned with specific learning goals. To this point we have integrated elements of the "Rock Box" app within the VR environment, allowing players to examine 3D details of rock samples they encounter within the Grand Canyon. We plan to provide demos of both products and obtain user feedback during our presentation.

  20. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made out of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements such as paintings, computer-generated objects and scanned objects are added. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made out of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized in order to allow high quality rendering.

  1. Virtual Spring-Based 3D Multi-Agent Group Coordination

    NASA Astrophysics Data System (ADS)

    Daneshvar, Roozbeh; Shih, Liwen

    As future personal vehicles start enjoying the ability to fly, tackling safe transportation coordination can be a tremendous task, far beyond the current challenge of radar-screen monitoring in already saturated air traffic control. Our focus is on the distributed safe-distance coordination among a group of autonomous flying vehicle agents, where each follows its own current straight-line direction in a 3D space with variable speeds. A virtual spring-based model is proposed for the group coordination. Within a specified neighborhood radius, each vehicle forms a virtual connection with each neighbor vehicle by a virtual spring. As the vehicle changes its position, speed and altitude, the resultant forces in its virtual springs drive it toward the mechanical equilibrium point, where the net force is zero. The agents then add the simple total virtual spring constraints to their movements to determine their next positions individually. Together, the multi-agent vehicles reach a group behavior where each of them keeps a minimal safe distance from the others. A new safe behavior thus arises at the group level. With the proposed virtual spring coordination model, the vehicles need no direct communication with each other, require only minimum local processing resources, and the control is completely distributed. New behaviors can now be formulated and studied based on the proposed model, e.g., how a fast-moving vehicle can find its way through the crowd by avoiding the other vehicles effortlessly.
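
    To make the spring model concrete, the sketch below computes the net virtual-spring force acting on each vehicle from neighbours within a fixed radius; the neighborhood radius, rest length, stiffness and positions are illustrative values, not parameters from the paper:

```python
# Net virtual-spring force on each vehicle from neighbours inside a fixed
# radius, following the coordination idea described in the abstract.
import numpy as np

def spring_forces(positions, radius=10.0, rest_length=5.0, k=1.0):
    """Return an (N, 3) array of net forces for N vehicles in 3D."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            delta = positions[j] - positions[i]
            dist = np.linalg.norm(delta)
            if 0 < dist <= radius:  # a virtual spring connects the pair
                forces[i] += k * (dist - rest_length) * delta / dist
    return forces

fleet = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 1.0]])
print(spring_forces(fleet))
```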

  2. Investigating the interaction between positions and signals of height-channel loudspeakers in reproducing immersive 3d sound

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Antonios

    Since transmission capacities have significantly increased over the past few years, researchers are now able to transmit a larger amount of data, namely multichannel audio content, in consumer applications. What has not been investigated in a systematic way yet is how to deliver the multichannel content. Specifically, researchers' attention is focused on the quest for a standardized immersive reproduction format that incorporates height loudspeakers coupled with the new high-resolution and three-dimensional (3D) media content for a comprehensive 3D experience. To better understand and utilize immersive audio reproduction, this research focused on (1) the interaction between the positioning of height loudspeakers and the signals fed to the loudspeakers, (2) the perceptual characteristics associated with the height ambiences, and (3) the influence of inverse filtering on perceived sound quality for realistic 3D sound reproduction. The experiment utilized two layers of loudspeakers: a horizontal layer following the ITU-R BS.775 five-channel loudspeaker configuration and a height layer with a total of twelve loudspeakers at azimuths of +/-30°, +/-50°, +/-70°, +/-90°, +/-110° and +/-130° and an elevation of 30°. Eight configurations were formed, each of which selected four height loudspeakers from the twelve. In the subjective evaluation, listeners compared, ranked and described the eight randomly presented configurations of 4-channel height ambiences. The stimuli for the experiment were four nine-channel (5 channels for the horizontal and 4 for the height loudspeakers) pieces of multichannel music. Moreover, an approach of Finite Impulse Response (FIR) inverse filtering was attempted, in order to remove the particular room's acoustic influence. Another set of trained professionals was informally asked to use descriptors to characterize the newly presented multichannel music with height ambiences rendered with inverse filtering. The
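
    The inverse-filtering step above can be illustrated by designing an FIR filter that approximately inverts a measured room impulse response. The sketch below uses a regularized frequency-domain inversion on a synthetic impulse response; it is only a plausible stand-in, since the abstract does not specify the design method used in the study:

```python
# Regularized frequency-domain inversion of a toy room impulse response to
# obtain an FIR inverse filter, in the spirit of the inverse-filtering step
# mentioned in the abstract. The impulse response here is synthetic.
import numpy as np

def inverse_fir(impulse_response, n_taps=512, beta=1e-3):
    """Design an FIR filter that approximately inverts impulse_response."""
    H = np.fft.rfft(impulse_response, n_taps)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + beta)  # Tikhonov-regularized inverse
    return np.fft.irfft(H_inv, n_taps)

rir = np.zeros(256)
rir[0], rir[40], rir[100] = 1.0, 0.5, 0.25        # direct sound plus two echoes
g = inverse_fir(rir)
equalized = np.convolve(rir, g)[:256]
print(np.argmax(np.abs(equalized)))               # dominant tap after equalization
```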

  3. Blood Pool Segmentation Results in Superior Virtual Cardiac Models than Myocardial Segmentation for 3D Printing.

    PubMed

    Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier

    2016-08-01

    The method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing which should be used to create optimal virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65). Three models were successfully printed

  4. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    USGS Publications Warehouse

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.
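
    The depth-from-disparity geometry described above can be made concrete with a small calculation of the screen parallax needed to place a virtual point at a chosen perceived depth. Only the roughly 6.5 cm eye separation comes from the text; the viewing distance and depths below are illustrative:

```python
# Screen parallax for a point at a given perceived depth, from the
# similar-triangles geometry of two eyes looking at a flat display.
IPD = 0.065        # interpupillary distance in metres (from the abstract)
VIEW_DIST = 0.60   # assumed viewer-to-screen distance in metres

def screen_parallax(perceived_depth):
    """Horizontal offset (m) between left/right images for that depth."""
    return IPD * (perceived_depth - VIEW_DIST) / perceived_depth

for depth in (0.3, 0.6, 1.2, 1e6):
    print(f"{depth:>9.1f} m -> parallax {screen_parallax(depth) * 1000:6.1f} mm")
```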

  5. 3D resolution enhancement of deep-tissue imaging based on virtual spatial overlap modulation microscopy.

    PubMed

    Su, I-Cheng; Hsu, Kuo-Jen; Shen, Po-Ting; Lin, Yen-Yin; Chu, Shi-Wei

    2016-07-25

    During the last decades, several resolution enhancement methods for optical microscopy beyond diffraction limit have been developed. Nevertheless, those hardware-based techniques typically require strong illumination, and fail to improve resolution in deep tissue. Here we develop a high-speed computational approach, three-dimensional virtual spatial overlap modulation microscopy (3D-vSPOM), which immediately solves the strong-illumination issue. By amplifying only the spatial frequency component corresponding to the un-scattered point-spread-function at focus, plus 3D nonlinear value selection, 3D-vSPOM shows significant resolution enhancement in deep tissue. Since no iteration is required, 3D-vSPOM is much faster than iterative deconvolution. Compared to non-iterative deconvolution, 3D-vSPOM does not need a priori information of point-spread-function at deep tissue, and provides much better resolution enhancement plus greatly improved noise-immune response. This method is ready to be amalgamated with two-photon microscopy or other laser scanning microscopy to enhance deep-tissue resolution. PMID:27464077

  6. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.

  7. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    PubMed Central

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837

  8. NASA Virtual Glovebox: An Immersive Virtual Desktop Environment for Training Astronauts in Life Science Experiments

    NASA Technical Reports Server (NTRS)

    Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard

    2003-01-01

    The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real-time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.

  9. Dynamic WIFI-Based Indoor Positioning in 3D Virtual World

    NASA Astrophysics Data System (ADS)

    Chan, S.; Sohn, G.; Wang, L.; Lee, W.

    2013-11-01

    A web-based system based on the 3DTown project was proposed using the Google Earth plug-in that brings information from indoor positioning devices and real-time sensors into an integrated 3D indoor and outdoor virtual world to visualize the dynamics of urban life within the 3D context of a city. We addressed a limitation of the 3DTown project, with particular emphasis on the video surveillance cameras used for indoor tracking purposes. The proposed solution was to utilize wireless local area network (WLAN) WiFi as a replacement technology for localizing objects of interest, due to the widespread availability and large coverage area of WiFi in indoor building spaces. Indoor positioning was performed using WiFi without modifying the existing building infrastructure or introducing additional access points (APs). A hybrid probabilistic approach was used for indoor positioning based on a previously recorded WiFi fingerprint database in the Petrie Science and Engineering building at York University. In addition, we have developed a 3D building modeling module that allows for efficient reconstruction of outdoor building models to be integrated with indoor building models; a sensor module for receiving, distributing, and visualizing real-time sensor data; and a web-based visualization module for users to explore the dynamic urban life in a virtual world. In order to solve the problems in the implementation of the proposed system, we introduce approaches for the integration of indoor building models with indoor positioning data, as well as real-time sensor information and visualization on the web-based system. In this paper we report the preliminary results of our prototype system, demonstrating the system's capability for implementing a dynamic 3D indoor and outdoor virtual world that is composed of discrete modules connected through pre-determined communication protocols.
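
    The fingerprinting step can be illustrated by matching an observed vector of received signal strengths against a pre-recorded database and returning the most likely location. The Python sketch below uses a simple independent-Gaussian likelihood; the model, the access-point count, and the example values are assumptions for illustration, not the hybrid approach actually implemented in the project.

        import numpy as np

        def locate(observed_rss, fingerprints, sigma=4.0):
            """Return the fingerprint location whose stored RSS vector (dBm)
            best explains the observed one under an independent Gaussian model."""
            best_loc, best_loglik = None, -np.inf
            for location, stored_rss in fingerprints.items():
                diff = np.asarray(observed_rss) - np.asarray(stored_rss)
                loglik = -0.5 * np.sum((diff / sigma) ** 2)
                if loglik > best_loglik:
                    best_loc, best_loglik = location, loglik
            return best_loc

        # Hypothetical two-AP fingerprint database (location -> mean RSS per AP).
        db = {"room_101": [-45.0, -70.0], "room_102": [-60.0, -55.0]}
        print(locate([-47.0, -68.0], db))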

  10. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons `seismoglyphs' to suggest visual objects built from the three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using the real-time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California, as real-time data are piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time, and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual cues to indicate wave arrivals and near-real-time earthquake detection. Future work will involve adding phase detections, network triggers and near real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and earth
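
    A seismoglyph of the kind described can be pictured as the three-component particle-motion trace drawn around the station location, with a color ramp over time so the viewer can see when and in which direction the amplitude peaks. The Python sketch below illustrates that construction only; array names, scaling and the color mapping are assumptions, not the Antelope-based implementation.

        import numpy as np

        def seismoglyph(north, east, up, station_xyz, scale=1.0):
            """Build glyph vertices from three-component ground motion at one
            station; the per-vertex color value encodes elapsed time so the
            moment of peak amplitude is visible along the color spectrum."""
            motion = np.column_stack([east, north, up]) * scale      # (samples, 3)
            vertices = np.asarray(station_xyz, dtype=float) + motion
            color_by_time = np.linspace(0.0, 1.0, len(vertices))     # map onto a spectrum
            return vertices, color_by_time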

  11. The Road Less Travelled: The Journey of Immersion into the Virtual Field

    ERIC Educational Resources Information Center

    Fitzsimons, Sabrina

    2013-01-01

    This article provides an account of my experience of immersion as a third-level teacher into the three-dimensional multi-user virtual world Second Life for research purposes. An ethnographic methodology was employed. Three stages in this journey are identified: separation, transition and transformation. In presenting this journey of immersion, it…

  12. Immersive virtual reality and environmental noise assessment: An innovative audio–visual approach

    SciTech Connect

    Ruotolo, Francesco; Maffei, Luigi; Di Gabriele, Maria; Iachini, Tina; Masullo, Massimiliano; Ruggiero, Gennaro; Senese, Vincenzo Paolo

    2013-07-15

    Several international studies have shown that traffic noise has a negative impact on people's health and that people's annoyance does not depend only on noise energy levels, but rather on multi-perceptual factors. The combination of virtual reality technology and audio rendering techniques allows us to experiment with a new approach to environmental noise assessment that can help to investigate in advance the potential negative effects of noise associated with a specific project, and that in turn can help designers to make educated decisions. In the present study, the audio–visual impact of a new motorway project on people has been assessed by means of immersive virtual reality technology. In particular, participants were exposed to 3D reconstructions of an actual landscape without the projected motorway (ante operam condition), and of the same landscape with the projected motorway (post operam condition). Furthermore, individuals' reactions to noise were assessed by means of objective cognitive measures (short-term verbal memory and executive functions) and subjective evaluations (noise and visual annoyance). Overall, the results showed that the introduction of a projected motorway in the environment can have immediate detrimental effects on people's well-being, depending on the distance from the noise source. In particular, noise due to the new infrastructure seems to exert a negative influence on short-term verbal memory and to increase both visual and noise annoyance. The theoretical and practical implications of these findings are discussed. -- Highlights: ► Impact of traffic noise on people's well-being depends on multi-perceptual factors. ► A multisensory virtual reality technology is used to simulate a projected motorway. ► Effects on short-term memory and auditory and visual subjective annoyance were found. ► The closer the distance from the motorway the stronger was the effect. ► Multisensory virtual reality methodologies can be used to study

  13. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation technology have become research hotspots. A virtual campus 3D model can not only represent real-world campus objects in a natural, realistic and vivid way, but can also extend the campus in time and space, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land use and other features. Dynamic interactive functions are then realized by programming the object models from 3ds Max with VRML. The research focuses on virtual campus scene modeling technology and VRML scene design, and on optimization strategies for the real-time processing techniques used in the scene design process. The approach preserves texture map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  14. Revealing Context-Specific Conditioned Fear Memories with Full Immersion Virtual Reality

    PubMed Central

    Huff, Nicole C.; Hernandez, Jose Alba; Fecteau, Matthew E.; Zielinski, David J.; Brady, Rachael; LaBar, Kevin S.

    2011-01-01

    The extinction of conditioned fear is known to be context-specific and is often considered more contextually bound than the fear memory itself (Bouton, 2004). Yet, recent findings in rodents have challenged the notion that contextual fear retention is initially generalized. The context-specificity of a cued fear memory to the learning context has not been addressed in the human literature largely due to limitations in methodology. Here we adapt a novel technology to test the context-specificity of cued fear conditioning using full immersion 3-D virtual reality (VR). During acquisition training, healthy participants navigated through virtual environments containing dynamic snake and spider conditioned stimuli (CSs), one of which was paired with electrical wrist stimulation. During a 24-h delayed retention test, one group returned to the same context as acquisition training whereas another group experienced the CSs in a novel context. Unconditioned stimulus expectancy ratings were assayed on-line during fear acquisition as an index of contingency awareness. Skin conductance responses time-locked to CS onset were the dependent measure of cued fear, and skin conductance levels during the interstimulus interval were an index of context fear. Findings indicate that early in acquisition training, participants express contingency awareness as well as differential contextual fear, whereas differential cued fear emerged later in acquisition. During the retention test, differential cued fear retention was enhanced in the group who returned to the same context as acquisition training relative to the context shift group. The results extend recent rodent work to illustrate differences in cued and context fear acquisition and the contextual specificity of recent fear memories. Findings support the use of full immersion VR as a novel tool in cognitive neuroscience to bridge rodent models of contextual phenomena underlying human clinical disorders. PMID:22069384

  15. Options in virtual 3D, optical-impression-based planning of dental implants.

    PubMed

    Reich, Sven; Kern, Thomas; Ritter, Lutz

    2014-01-01

    If a 3D radiograph, which in today's dentistry often consists of a CBCT dataset, is available for computerized implant planning, the 3D planning should also consider functional prosthetic aspects. In a conventional workflow, the CBCT is done with a specially produced radiopaque prosthetic setup that makes the desired prosthetic situation visible during virtual implant planning. If an exclusively digital workflow is chosen, intraoral digital impressions are taken. On these digital models, the desired prosthetic suprastructures are designed. The entire datasets are virtually superimposed by a "registration" process on the corresponding structures (teeth) in the CBCTs. Thus, both the osseous and prosthetic structures are visible in a single 3D application, making it possible to consider surgical and prosthetic aspects together. After the implant positions have been determined on the computer screen, a drilling template is designed digitally. According to this design (CAD), a template is printed or milled in a CAM process. This template is the first physical product in the entire workflow. The article discusses the options and limitations of this workflow. PMID:25098158

  16. Building virtual 3D bone fragment models to control diaphyseal fracture reduction

    NASA Astrophysics Data System (ADS)

    Leloup, Thierry; Schuind, Frederic; Lasudry, Nadine; Van Ham, Philippe

    1999-05-01

    Most fractures of the long bones are displaced and need to be surgically reduced. External fixation is often used, but the crucial point of this technique is the control of reduction, which is effected with a brilliance amplifier. This system, which instantly provides an x-ray image, has many disadvantages: it implies frequent irradiation of the patient and the surgical team, the visual field is limited, the supplied images are distorted, and it only gives 2D information. Consequently, the reduction is occasionally imperfect although intraoperatively it appears acceptable. Using the pins inserted in each fragment as markers and an optical tracker, it is possible to build a virtual 3D model of each principal fragment and to follow its movement during the reduction. This system will supply a 3D image of the fracture in real time and without irradiation. The brilliance amplifier could then be replaced by such a virtual reality system to provide the surgeon with an accurate tool facilitating the reduction of the fracture. The purpose of this work is to show how to build the 3D model for each principal bone fragment.
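
    Following a fragment from its pin markers amounts to estimating the rigid transform that carries the markers' reference positions onto their currently tracked positions. The Python sketch below shows one standard way to do this (a least-squares SVD fit, often called the Kabsch method); it is an illustration of the general technique, not the procedure used in the system described above.

        import numpy as np

        def rigid_fit(reference, tracked):
            """Least-squares rotation R and translation t mapping reference marker
            positions (Nx3) onto tracked positions (Nx3), via SVD."""
            ref = np.asarray(reference, dtype=float)
            trk = np.asarray(tracked, dtype=float)
            ref_c, trk_c = ref.mean(axis=0), trk.mean(axis=0)
            H = (ref - ref_c).T @ (trk - trk_c)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = trk_c - R @ ref_c
            return R, t                                   # apply as R @ point + t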

  17. Avalanche for shape and feature-based virtual screening with 3D alignment.

    PubMed

    Diller, David J; Connell, Nancy D; Welsh, William J

    2015-11-01

    This report introduces a new ligand-based virtual screening tool called Avalanche that incorporates both shape- and feature-based comparison with three-dimensional (3D) alignment between the query molecule and test compounds residing in a chemical database. Avalanche proceeds in two steps. The first step is an extremely rapid shape/feature based comparison which is used to narrow the focus from potentially millions or billions of candidate molecules and conformations to a more manageable number that are then passed to the second step. The second step is a detailed yet still rapid 3D alignment of the remaining candidate conformations to the query conformation. Using the 3D alignment, these remaining candidate conformations are scored, re-ranked and presented to the user as the top hits for further visualization and evaluation. To provide further insight into the method, the results from two prospective virtual screens are presented which show the ability of Avalanche to identify hits from chemical databases that would likely be missed by common substructure-based or fingerprint-based search methods. The Avalanche method is extended to enable patent landscaping, i.e., structural refinements to improve the patentability of hits for deployment in drug discovery campaigns. PMID:26458937

  18. Exploring conformational search protocols for ligand-based virtual screening and 3-D QSAR modeling.

    PubMed

    Cappel, Daniel; Dixon, Steven L; Sherman, Woody; Duan, Jianxin

    2015-02-01

    3-D ligand conformations are required for most ligand-based drug design methods, such as pharmacophore modeling, shape-based screening, and 3-D QSAR model building. Many studies of conformational search methods have focused on the reproduction of crystal structures (i.e. bioactive conformations); however, for ligand-based modeling the key question is how to generate a ligand alignment that produces the best results for a given query molecule. In this work, we study different conformation generation modes of ConfGen and the impact on virtual screening (Shape Screening and e-Pharmacophore) and QSAR predictions (atom-based and field-based). In addition, we develop a new search method, called common scaffold alignment, that automatically detects the maximum common scaffold between each screening molecule and the query to ensure identical coordinates of the common core, thereby minimizing the noise introduced by analogous parts of the molecules. In general, we find that virtual screening results are relatively insensitive to the conformational search protocol; hence, a conformational search method that generates fewer conformations could be considered "better" because it is more computationally efficient for screening. However, for 3-D QSAR modeling we find that more thorough conformational sampling tends to produce better QSAR predictions. In addition, significant improvements in QSAR predictions are obtained with the common scaffold alignment protocol developed in this work, which focuses conformational sampling on parts of the molecules that are not part of the common scaffold. PMID:25408244

  19. A virtual interface for interactions with 3D models of the human body.

    PubMed

    De Paolis, Lucio T; Pulimeno, Marco; Aloisio, Giovanni

    2009-01-01

    The developed system is the first prototype of a virtual interface designed to avoid contact with the computer, so that the surgeon is able to visualize 3D models of the patient's organs more effectively during a surgical procedure, or to use it in pre-operative planning. The doctor will be able to rotate, translate and zoom in on 3D models of the patient's organs simply by moving his finger in free space; in addition, it is possible to choose to visualize all of the organs or only some of them. All of the interactions with the models happen in real time using the virtual interface, which appears as a touch-screen suspended in free space in a position chosen by the user when the application is started up. Finger movements are detected by means of an optical tracking system and are used to simulate touch with the interface and to interact by pressing the buttons present on the virtual screen. PMID:19377116

  20. Elastic registration using 3D ChainMail: application to virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Castro-Pareja, Carlos R.; Daly, Barry; Shekhar, Raj

    2006-03-01

    We present an elastic registration algorithm based on local deformations modeled using cubic B-splines and controlled using 3D ChainMail. Our algorithm eliminates the appearance of folding artifacts and allows local rigidity and compressibility control independent of the image similarity metric being used. 3D ChainMail propagates large internal deformations between neighboring B-Spline control points, thereby preserving the topology of the transformed image without requiring the addition of penalty terms based on rigidity of the transformation field to the equation used to maximize image similarity. A novel application to virtual colonoscopy is presented where the algorithm is used to significantly improve cross-localization between colon locations in prone and supine CT images.
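
    The ChainMail idea can be illustrated in a much-simplified one-dimensional form: when one control point is displaced, its neighbours move only as far as is needed to keep the spacing between adjacent points within preset minimum and maximum limits, so large displacements propagate through the grid without folding. The Python sketch below shows that propagation rule only; the limits and the 1D setting are assumptions for illustration, not the B-spline registration algorithm itself.

        def chainmail_1d(points, index, new_pos, min_gap=0.5, max_gap=2.0):
            """Move points[index] to new_pos and push neighbours only when the
            min/max spacing constraints between adjacent points are violated."""
            pts = list(points)
            pts[index] = new_pos
            for i in range(index + 1, len(pts)):          # propagate to the right
                gap = pts[i] - pts[i - 1]
                if gap > max_gap:
                    pts[i] = pts[i - 1] + max_gap
                elif gap < min_gap:
                    pts[i] = pts[i - 1] + min_gap
                else:
                    break                                  # constraints satisfied: stop
            for i in range(index - 1, -1, -1):            # propagate to the left
                gap = pts[i + 1] - pts[i]
                if gap > max_gap:
                    pts[i] = pts[i + 1] - max_gap
                elif gap < min_gap:
                    pts[i] = pts[i + 1] - min_gap
                else:
                    break
            return pts

        print(chainmail_1d([0.0, 1.0, 2.0, 3.0], 0, 2.5))  # -> [2.5, 3.0, 3.5, 4.0]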

  1. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques

    NASA Astrophysics Data System (ADS)

    Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

    2005-04-01

    The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing, concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented to combine both color and structural information relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, but lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT-derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique we are able to directly extract combined, structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented, demonstrating the effectiveness of the presented techniques.

  2. 3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)

    1996-01-01

    The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is in use in widely disparate fields such as geology, botany, biology and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report will describe the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.

  3. Molecular surface point environments for virtual screening and the elucidation of binding patterns (MOLPRINT 3D).

    PubMed

    Bender, Andreas; Mussa, Hamse Y; Gill, Gurprem S; Glen, Robert C

    2004-12-16

    A novel method (MOLPRINT 3D) for virtual screening and the elucidation of ligand-receptor binding patterns is introduced that is based on environments of molecular surface points. The descriptor uses points relative to the molecular coordinates, thus it is translationally and rotationally invariant. Due to its local nature, conformational variations cause only minor changes in the descriptor. If surface point environments are combined with the Tanimoto coefficient and applied to virtual screening, they achieve retrieval rates comparable to that of two-dimensional (2D) fingerprints. The identification of active structures with minimal 2D similarity ("scaffold hopping") is facilitated. In combination with information-gain-based feature selection and a naive Bayesian classifier, information from multiple molecules can be combined and classification performance can be improved. Selected features are consistent with experimentally determined binding patterns. Examples are given for angiotensin-converting enzyme inhibitors, 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors, and thromboxane A2 antagonists. PMID:15588092
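
    The Tanimoto coefficient used for the screening comparisons is simply the number of features two fingerprints share divided by the number of features present in either. A minimal Python sketch over sets of feature identifiers (the surface-point-environment feature extraction itself is not reproduced here, and the identifiers are hypothetical):

        def tanimoto(features_a, features_b):
            """Tanimoto coefficient of two feature sets: |A intersect B| / |A union B|."""
            a, b = set(features_a), set(features_b)
            union = len(a | b)
            return len(a & b) / union if union else 0.0

        # Hypothetical surface-point-environment identifiers for two molecules.
        print(tanimoto({"env_12", "env_47", "env_90"}, {"env_12", "env_90", "env_33"}))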

  4. M3D (Media 3D): a new programming language for web-based virtual reality in E-Learning and Edutainment

    NASA Astrophysics Data System (ADS)

    Chakaveh, Sepideh; Skaley, Detlef; Laine, Patricia; Haeger, Ralf; Maad, Soha

    2003-01-01

    Today, interactive multimedia educational systems are well established, as they have proved to be useful instruments for enhancing one's learning capabilities. Hitherto, the main difficulty with almost all E-Learning systems lay in the rich media implementation techniques: each system had to be created individually, since reapplying the media, be it in part or in whole, was not directly possible, as everything had to be applied mechanically, i.e. by hand. This made E-Learning systems exceedingly expensive to generate, in terms of both time and money. Media-3D or M3D is a new platform-independent programming language, developed at the Fraunhofer Institute for Media Communication, to enable visualisation and simulation of E-Learning multimedia content. M3D is an XML-based language which is capable of distinguishing the 3D models from the 3D scenes, as well as handling provisions for animations within the programme. Here we give a technical account of the M3D programming language and briefly describe two specific application scenarios where M3D is applied to create virtual reality E-Learning content for the training of technical personnel.

  5. A hybrid Cartesian/immersed boundary method for simulating flows with 3D, geometrically complex, moving bodies

    NASA Astrophysics Data System (ADS)

    Gilmanov, Anvar; Sotiropoulos, Fotis

    2005-08-01

    A numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations in Cartesian domains containing immersed boundaries of arbitrary geometrical complexity moving with prescribed kinematics. The governing equations are discretized on a hybrid staggered/non-staggered grid layout using second-order accurate finite-difference formulas. The discrete equations are integrated in time via a second-order accurate dual-time-stepping, artificial compressibility iteration scheme. Unstructured, triangular meshes are employed to discretize complex immersed boundaries. The nodes of the surface mesh constitute a set of Lagrangian control points used to track the motion of the flexible body. At every instant in time, the influence of the body on the flow is accounted for by applying boundary conditions at Cartesian grid nodes located in the exterior but in the immediate vicinity of the body by reconstructing the solution along the local normal to the body surface. Grid convergence tests are carried out for the flow induced by an oscillating sphere in a cubic cavity, which show that the method is second-order accurate. The method is validated by applying it to calculate flow in a Cartesian domain containing a rigid sphere rotating at constant angular velocity as well as flow induced by a flapping wing. The ability of the method to simulate flows in domains with arbitrarily complex moving bodies is demonstrated by applying it to simulate flow past an undulating fish-like body and flow past an anatomically realistic planktonic copepod performing an escape-like maneuver.
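
    For orientation, the dual-time-stepping, artificial-compressibility idea named above can be written schematically as adding a pseudo-time pressure derivative to the continuity equation and sub-iterating in pseudo-time at every physical time level; the form below is a standard textbook statement (density scaled out, beta the artificial compressibility parameter), not the paper's exact discretisation.

        % Pseudo-time (tau) augmented incompressible Navier-Stokes system:
        \frac{1}{\beta}\,\frac{\partial p}{\partial \tau} + \nabla\cdot\mathbf{u} = 0,
        \qquad
        \frac{\partial \mathbf{u}}{\partial \tau}
          + \frac{\partial \mathbf{u}}{\partial t}
          + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
          = -\nabla p + \nu\,\nabla^{2}\mathbf{u}.
        % At each physical time level the tau-iterations are driven to convergence,
        % so the pseudo-time terms vanish and the divergence-free constraint is recovered.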

  6. 3D virtual human atria: A computational platform for studying clinical atrial fibrillation.

    PubMed

    Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui

    2011-10-01

    Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria--the 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Primarily, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves; and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi

  7. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
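
    The role of the distributable block volumes can be pictured as tiling a 3D array into blocks and dispatching them to a pool of workers. The Python sketch below shows that pattern only; the block size, the trivial per-block kernel, and the use of a process pool are assumptions for illustration, not the platform's actual data structure or scheduler.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def block_slices(shape, size):
            """Yield slice tuples that tile a 3D volume of the given shape."""
            for z in range(0, shape[0], size):
                for y in range(0, shape[1], size):
                    for x in range(0, shape[2], size):
                        yield (slice(z, z + size), slice(y, y + size), slice(x, x + size))

        def process_block(block):
            return block * 2.0          # stand-in for a real 3D image processing kernel

        def process_volume(volume, size=64):
            out = np.empty_like(volume)
            tiles = list(block_slices(volume.shape, size))
            with ProcessPoolExecutor() as pool:
                results = pool.map(process_block, (volume[sl] for sl in tiles))
                for sl, result in zip(tiles, results):
                    out[sl] = result
            return out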

  8. Towards a 3d Based Platform for Cultural Heritage Site Survey and Virtual Exploration

    NASA Astrophysics Data System (ADS)

    Seinturier, J.; Riedinger, C.; Mahiddine, A.; Peloso, D.; Boï, J.-M.; Merad, D.; Drap, P.

    2013-07-01

    This paper presents a 3D platform that supports both cultural heritage site survey and virtual exploration. It provides a single, easy-to-use framework for merging multi-scaled 3D measurements based on photogrammetry, documentation produced by experts, and the knowledge of the domains involved, leaving the experts free to extract and choose the relevant information to produce the final survey. Taking into account the interpretation of the real world is, in fact, the main goal of an archaeological survey. New advances in photogrammetry and the capability to produce dense 3D point clouds do not by themselves solve the problem of surveys. New opportunities for 3D representation are now available, and we must use them and find new ways to link geometry and knowledge. The new platform is able to efficiently manage and process large 3D data (point sets, meshes) thanks to the implementation of space partitioning methods from the state of the art, such as octrees and kd-trees, and can thus interact with dense point clouds (thousands to millions of points) in real time. The semantisation of raw 3D data relies on geometric algorithms such as geodetic path computation, surface extraction from dense point clouds and geometrical primitive optimization. The platform provides an interface that enables experts to describe geometric representations of objects of interest, such as ashlar blocks, stratigraphic units or generic items (contours, lines, … ), directly on the 3D representation of the site and without explicit links to the underlying algorithms. The platform provides two ways of describing a geometric representation. If oriented photographs are available, the expert can draw geometry on a photograph and the system computes its 3D representation by projection onto the underlying mesh or point cloud. If photographs are not available, or if the expert wants to use only the 3D representation, then he can simply draw object shapes on it. When 3D
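
    The space-partitioning structures mentioned above can be illustrated with a bare-bones point octree: each node covers a cubic cell and splits into eight children once it holds more than a handful of points, which is what keeps queries over dense clouds tractable. The Python sketch below shows that idea only; the capacity limit and the lack of bounds checking are simplifying assumptions, not the platform's implementation.

        class Octree:
            """Minimal point octree: a cell subdivides into eight octants when full."""

            def __init__(self, center, half_size, capacity=8):
                self.center, self.half, self.capacity = center, half_size, capacity
                self.points, self.children = [], None

            def insert(self, p):
                if self.children is not None:
                    return self._child_for(p).insert(p)
                self.points.append(p)
                if len(self.points) > self.capacity and self.half > 1e-6:
                    self._split()
                return True

            def _split(self):
                cx, cy, cz = self.center
                q = self.half / 2.0
                self.children = [Octree((cx + dx * q, cy + dy * q, cz + dz * q), q, self.capacity)
                                 for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
                pts, self.points = self.points, []
                for p in pts:
                    self._child_for(p).insert(p)

            def _child_for(self, p):
                dx, dy, dz = (1 if p[i] >= self.center[i] else -1 for i in range(3))
                index = ((dx + 1) // 2) * 4 + ((dy + 1) // 2) * 2 + ((dz + 1) // 2)
                return self.children[index]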

  9. Instructors' Perceptions of Three-Dimensional (3D) Virtual Worlds: Instructional Use, Implementation and Benefits for Adult Learners

    ERIC Educational Resources Information Center

    Stone, Sophia Jeffries

    2009-01-01

    The purpose of this dissertation research study was to explore instructors' perceptions of the educational application of three-dimensional (3D) virtual worlds in a variety of academic discipline areas and to assess the strengths and limitations this virtual environment presents for teaching adult learners. The guiding research question for this…

  10. Using a Quest in a 3D Virtual Environment for Student Interaction and Vocabulary Acquisition in Foreign Language Learning

    ERIC Educational Resources Information Center

    Kastoudi, Denise

    2011-01-01

    The gaming and interactional nature of the virtual environment of Second Life offers opportunities for language learning beyond traditional pedagogy. This case study examined the potential of 3D virtual quest games to enhance vocabulary acquisition through interaction, negotiation of meaning and noticing. Four adult students of English at…

  11. An Examination of the Effects of Collaborative Scientific Visualization via Model-Based Reasoning on Science, Technology, Engineering, and Mathematics (STEM) Learning within an Immersive 3D World

    ERIC Educational Resources Information Center

    Soleimani, Ali

    2013-01-01

    Immersive 3D worlds can be designed to effectively engage students in peer-to-peer collaborative learning activities, supported by scientific visualization, to help with understanding complex concepts associated with learning science, technology, engineering, and mathematics (STEM). Previous research studies have shown STEM learning benefits…

  12. Interactive Learning Environment: Web-based Virtual Hydrological Simulation System using Augmented and Immersive Reality

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2014-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments, and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g. flooding and flood damage), and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes: virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system, and demonstrates the capabilities of the system for various visualization and interaction modes.

  13. Immersive video

    NASA Astrophysics Data System (ADS)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  14. Using Immersive Virtual Reality for Electrical Substation Training

    ERIC Educational Resources Information Center

    Tanaka, Eduardo H.; Paludo, Juliana A.; Cordeiro, Carlúcio S.; Domingues, Leonardo R.; Gadbem, Edgar V.; Euflausino, Adriana

    2015-01-01

    Usually, distribution electricians are called upon to solve technical problems found in electrical substations. In this project, we apply problem-based learning to a training program for electricians, with the help of a virtual reality environment that simulates a real substation. Using this virtual substation, users may safely practice maneuvers…

  15. Virtual Worlds; Real Learning: Design Principles for Engaging Immersive Environments

    NASA Technical Reports Server (NTRS)

    Wu (u. Sjarpm)

    2012-01-01

    The EMDT master's program at Full Sail University embarked on a small project to use a virtual environment to teach graduate students. The property used for this project has evolved over several iterations and has yielded some basic design principles and pedagogy for virtual spaces. As a result, students are emerging from the program with a better grasp of future possibilities.

  16. The Pixelated Professor: Faculty in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Blackmon, Stephanie

    2015-01-01

    Online environments, particularly virtual worlds, can sometimes complicate issues of self expression. For example, the faculty member who loves punk rock has an opportunity, through hairstyle and attire choices in the virtual world, to share that part of herself with students. However, deciding to share that part of the self can depend on a number…

  17. Teaching Literature in Virtual Worlds: Immersive Learning in English Studies

    ERIC Educational Resources Information Center

    Webb, Allen, Ed.

    2011-01-01

    What are the realities and possibilities of utilizing on-line virtual worlds as teaching tools for specific literary works? Through engaging and surprising stories from classrooms where virtual worlds are in use, this book invites readers to understand and participate in this emerging and valuable pedagogy. It examines the experience of high…

  18. Combinatorial Pharmacophore-Based 3D-QSAR Analysis and Virtual Screening of FGFR1 Inhibitors

    PubMed Central

    Zhou, Nannan; Xu, Yuan; Liu, Xian; Wang, Yulan; Peng, Jianlong; Luo, Xiaomin; Zheng, Mingyue; Chen, Kaixian; Jiang, Hualiang

    2015-01-01

    The fibroblast growth factor/fibroblast growth factor receptor (FGF/FGFR) signaling pathway plays crucial roles in cell proliferation, angiogenesis, migration, and survival. Aberration in FGFRs correlates with several malignancies and disorders. FGFRs have proved to be attractive targets for therapeutic intervention in cancer, and it is of high interest to find FGFR inhibitors with novel scaffolds. In this study, a combinatorial three-dimensional quantitative structure-activity relationship (3D-QSAR) model was developed based on previously reported FGFR1 inhibitors with diverse structural skeletons. This model was evaluated for its prediction performance on a diverse test set containing 232 FGFR inhibitors, and it yielded an SD value of 0.75 pIC50 units from measured inhibition affinities and a Pearson's correlation coefficient R2 of 0.53. This result suggests that the combinatorial 3D-QSAR model could be used to search for new FGFR1 hit structures and predict their potential activity. To further evaluate the performance of the model, a decoy set validation was used to measure the efficiency of the model by calculating the EF (enrichment factor). Based on the combinatorial pharmacophore model, a virtual screening against the SPECS database was performed. Nineteen novel active compounds were successfully identified, which provide new chemical starting points for further structural optimization of FGFR1 inhibitors. PMID:26110383
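
    The enrichment factor used in the decoy-set validation measures how much more densely actives occur among the top-ranked fraction of a screened library than in the library as a whole. A minimal Python sketch of that calculation (the ranked labels and the 1% cutoff are illustrative, not the study's data):

        def enrichment_factor(is_active_ranked, fraction=0.01):
            """EF at a fraction of the ranked library: (actives found / compounds
            selected) divided by (total actives / total compounds)."""
            total = len(is_active_ranked)
            n_top = max(1, int(round(total * fraction)))
            hits_top = sum(is_active_ranked[:n_top])
            hits_all = sum(is_active_ranked)
            if hits_all == 0:
                return 0.0
            return (hits_top / n_top) / (hits_all / total)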

  19. Design and fabrication of concave-convex lens for head mounted virtual reality 3D glasses

    NASA Astrophysics Data System (ADS)

    Deng, Zhaoyang; Cheng, Dewen; Hu, Yuan; Huang, Yifan; Wang, Yongtian

    2015-08-01

    As a light-weight and convenient tool for achieving stereoscopic vision, virtual reality glasses are gaining popularity nowadays. For these glasses, molded plastic lenses are often adopted to balance imaging performance against the cost of mass production. However, the as-built performance of the glasses depends on both the optical design and the injection molding process, and maintaining the profile of the lens during the injection molding process presents particular challenges. In this paper, optical design is combined with processing simulation analysis to obtain a design suitable for injection molding. Based on the design and analysis results, different experiments are performed using high-quality equipment to optimize the process parameters of injection molding. Finally, a single concave-convex lens is designed with a field of view of 90° for the virtual reality 3D glasses. The as-built profile error of the lens is controlled within 5 μm, which indicates that the designed shape of the lens is faithfully realized and the designed optical performance can thus be achieved.

  20. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module and an interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion processes of an oil spill from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field have been considered in this simulation. On this platform the oil spilling process can be abstracted as the movement of abundant oil particles. The results show that the oil particles blend well with the water and that the platform meets the requirements for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.

  1. 3D modeling of the Strasbourg's Cathedral basements for interdisciplinary research and virtual visits

    NASA Astrophysics Data System (ADS)

    Landes, T.; Kuhnle, G.; Bruna, R.

    2015-08-01

    On the occasion of the millennium celebration of Strasbourg Cathedral, a transdisciplinary research group composed of archaeologists, surveyors, architects, art historians and a stonemason re-examined the 1966-1972 excavations under St. Lawrence's Chapel of the Cathedral, which contain remains of Roman and medieval masonry. The 3D modeling of the Chapel was realized by combining conventional surveying techniques for the network creation, laser scanning for the model creation, and photogrammetric techniques for the texturing of a few parts. According to the requirements and the end-user of the model, the level of detail and level of accuracy have been adapted and assessed for every floor. The basement has been acquired and modeled with more detail and higher accuracy than the other parts. Thanks to this modeling work, archaeologists can confront their assumptions with those of other disciplines by simulating the construction of other worship edifices on the massive stones composing the basement. The virtual reconstructions provided evidence in support of these assumptions and served for communication via virtual visits.

  2. Evaluation of human behavior in collision avoidance: a study inside immersive virtual reality.

    PubMed

    Ouellette, Michel; Chagnon, Miguel; Faubert, Jocelyn

    2009-04-01

    During our daily displacements, we must take into account the individuals advancing toward us in order to avoid possible collisions with them. We developed an experimental design in a virtual immersion room which allows us to evaluate human capacities for avoiding collisions with other people. In addition, the design allows participants to interact naturally inside this immersive virtual reality setup when a pedestrian is moving toward them, creating a possible risk of collision. Results suggest that performance is associated with visual and motor capacities and could be adjusted by cognitive social perception. PMID:19250010

  3. 3D-ANTLERS: Virtual Reconstruction and Three-Dimensional Measurement

    NASA Astrophysics Data System (ADS)

    Barba, S.; Fiorillo, F.; De Feo, E.

    2013-02-01

    The ARTEC digital mock-up, for example, allows the individual frames, already polygonal and geo-referenced at the time of capture, to be selected; however, automated texturing is not possible, unlike in the low-cost environment, which produces good graphical definition. Once the final 3D models were obtained, we carried out a geometric and graphic comparison of the results. To provide an accuracy requirement and an assessment for the 3D reconstruction, we considered the following benchmarks: cost, captured points, noise (local and global), shadows and holes, operability, degree of definition, quality and accuracy. Following these empirical studies on the virtual reconstructions, a 3D documentation procedure was codified, endorsing the use of terrestrial sensors for the documentation of antlers. The results were then compared with the standards set by the current provisions (see the "Manual de medición" of the Government of Andalusia, Spain); to date, identification is based on data such as length, volume, colour, texture, openness, tips and structure. Such data, currently obtained only with traditional instruments such as a tape measure, would be well represented by a process of virtual reconstruction and cataloguing.

  4. Building virtual reality fMRI paradigms: a framework for presenting immersive virtual environments.

    PubMed

    Mueller, Charles; Luehrs, Michael; Baecke, Sebastian; Adolf, Daniela; Luetzkendorf, Ralf; Luchtmann, Michael; Bernarding, Johannes

    2012-08-15

    The advantage of using a virtual reality (VR) paradigm in fMRI is the possibility of interacting with highly realistic environments. This extends the functions of standard fMRI paradigms, in which the volunteer usually has a passive role, for example watching a simple movie paradigm without any stimulus interaction. From that point of view, the combined usage of VR and real-time fMRI offers great potential for identifying underlying cognitive mechanisms such as spatial navigation, attention, semantic and episodic memory, as well as for neurofeedback paradigms. However, the design and implementation of a VR stimulus paradigm, as well as its integration into an existing MR scanner framework, are very complex processes. To support the modeling and usage of VR stimuli we developed and implemented a VR stimulus application based on C++. This software allows the fast and easy presentation of VR environments for fMRI studies without any additional expert knowledge. Furthermore, it provides a bidirectional communication interface for receiving real-time data analysis values. In addition, an internal plugin interface enables users to extend the functionality of the software with custom-programmed C++ plugins. The VR stimulus framework was tested in several performance tests and a spatial navigation study. According to the post-experimental interview, all subjects described immersive experiences and a high attentional load inside the artificial environment. Results from other VR spatial memory studies confirm the neuronal activation that was detected in parahippocampal areas, cuneus, and occipital regions. PMID:22759716

  5. Inclusion of Immersive Virtual Learning Environments and Visual Control Systems to Support the Learning of Students with Asperger Syndrome

    ERIC Educational Resources Information Center

    Lorenzo, Gonzalo; Pomares, Jorge; Lledo, Asuncion

    2013-01-01

    This paper presents the use of immersive virtual reality systems in the educational intervention with Asperger students. The starting points of this study are features of these students' cognitive style that requires an explicit teaching style supported by visual aids and highly structured environments. The proposed immersive virtual reality…

  6. Numerical simulation of X-wing type biplane flapping wings in 3D using the immersed boundary method.

    PubMed

    Tay, W B; van Oudheusden, B W; Bijl, H

    2014-09-01

    The numerical simulation of an insect-sized 'X-wing' type biplane flapping wing configuration is performed in 3D using an immersed boundary method solver at Reynolds numbers equal to 1000 (1 k) and 5 k, based on the wing's root chord length. This X-wing type flapping configuration draws its inspiration from Delfly, a bio-inspired ornithopter MAV which has two pairs of wings flapping in anti-phase in a biplane configuration. The objective of the present investigation is to assess the aerodynamic performance when the original Delfly flapping wing micro-aerial vehicle (FMAV) is reduced to the size of an insect. Results show that the X-wing configuration gives more than twice the average thrust compared with only flapping the upper pair of wings of the X-wing. However, the X-wing's average thrust is only 40% that of the upper wing flapping at twice the stroke angle. Despite this, the increased stability which results from the smaller lift and moment variation of the X-wing configuration makes it more suited for sharp image capture and recognition. These advantages make the X-wing configuration an attractive alternative design for insect-sized FMAVs compared to the single wing configuration. In the Reynolds number comparison, the vorticity iso-surface plot at a Reynolds number of 5 k revealed smaller, finer vortical structures compared to the simulation at 1 k, due to vortices' breakup. In comparison, the force output difference is much smaller between Re = 1 k and 5 k. Increasing the body inclination angle generates a uniform leading edge vortex instead of a conical one along the wingspan, giving higher lift. Understanding the force variation as the body inclination angle increases will allow FMAV designers to optimize the thrust and lift ratio for higher efficiency under different operational requirements. Lastly, increasing the spanwise flexibility of the wings increases the thrust slightly but decreases the efficiency. The thrust result is similar to one of the

  7. NanTroSEIZE in 3-D: Creating a Virtual Research Experience in Undergraduate Geoscience Courses

    NASA Astrophysics Data System (ADS)

    Reed, D. L.; Bangs, N. L.; Moore, G. F.; Tobin, H.

    2009-12-01

    Marine research programs, both large and small, have increasingly added a web-based component to facilitate outreach to K-12 and the public, in general. These efforts have included, among other activities, information-rich websites, ship-to-shore communication with scientists during expeditions, blogs at sea, clips on YouTube, and information about daily shipboard activities. Our objective was to leverage a portion of the vast collection of data acquired through the NSF-MARGINS program to create a learning tool with a long lifespan for use in undergraduate geoscience courses. We have developed a web-based virtual expedition, NanTroSEIZE in 3-D, based on a seismic survey associated with the NanTroSEIZE program of NSF-MARGINS and IODP to study the properties of the plate boundary fault system in the upper limit of the seismogenic zone off Japan. The virtual voyage can be used in undergraduate classes at any time, since it is not directly tied to the finite duration of a specific seagoing project. The website combines text, graphics, audio and video to place learning in an experiential framework as students participate on the expedition and carry out research. Students learn about the scientific background of the program, especially the critical role of international collaboration, and meet the chief scientists before joining the sea-going expedition. Students are presented with the principles of 3-D seismic imaging, data processing and interpretation while mapping and identifying the active faults that were the likely sources of devastating earthquakes and tsunamis in Japan in 1944 and 1948. They also learn about IODP drilling that began in 2007 and will extend through much of the next decade. The website is being tested in undergraduate classes in fall 2009 and will be distributed through the NSF-MARGINS website (http://www.nsf-margins.org/) and the MARGINS Mini-lesson section of the Science Education Resource Center (SERC) (http

  8. The Utility of Using Immersive Virtual Environments for the Assessment of Science Inquiry Learning

    ERIC Educational Resources Information Center

    Code, Jillianne; Clarke-Midura, Jody; Zap, Nick; Dede, Chris

    2013-01-01

    Determining the effectiveness of any educational technology depends upon teachers' and learners' perception of the functional utility of that tool for teaching, learning, and assessment. The Virtual Performance project at Harvard University is developing and studying the feasibility of using immersive technology to develop performance…

  9. Children's Perception of Gap Affordances: Bicycling Across Traffic-Filled Intersections in an Immersive Virtual Environment

    ERIC Educational Resources Information Center

    Plumert, Jodie M.; Kearney, Joseph K.; Cremer, James F.

    2004-01-01

    This study examined gap choices and crossing behavior in children and adults using an immersive, interactive bicycling simulator. Ten- and 12-year-olds and adults rode a bicycle mounted on a stationary trainer through a virtual environment consisting of a street with 6 intersections. Participants faced continuous cross traffic traveling at 25mph…

  10. Correcting Distance Estimates by Interacting With Immersive Virtual Environments: Effects of Task and Available Sensory Information

    ERIC Educational Resources Information Center

    Waller, David; Richardson, Adam R.

    2008-01-01

    The tendency to underestimate egocentric distances in immersive virtual environments (VEs) is not well understood. However, previous research (A. R. Richardson & D. Waller, 2007) has demonstrated that a brief period of interaction with the VE prior to making distance judgments can effectively eliminate subsequent underestimation. Here the authors…

  11. Measuring Flow Experience in an Immersive Virtual Environment for Collaborative Learning

    ERIC Educational Resources Information Center

    van Schaik, P.; Martin, S.; Vallance, M.

    2012-01-01

    In contexts other than immersive virtual environments, theoretical and empirical work has identified flow experience as a major factor in learning and human-computer interaction. Flow is defined as a "holistic sensation that people feel when they act with total involvement". We applied the concept of flow to modeling the experience of…

  12. ARENA - A Collaborative Immersive Environment for Virtual Fieldwork

    NASA Astrophysics Data System (ADS)

    Kwasnitschka, T.

    2012-12-01

    Whenever a geoscientific study area is not readily accessible, as is the case on the deep seafloor, it is difficult to apply traditional but effective methods of fieldwork, which often require physical presence of the observer. The Artificial Research Environment for Networked Analysis (ARENA), developed at GEOMAR | Helmholtz Centre for Ocean Research Kiel within the Cluster of Excellence "The Future Ocean", provides a backend solution to robotic research on the seafloor by means of an immersive simulation environment for marine research: A hemispherical screen of 6m diameter covering the entire lower hemisphere surrounds a group of up to four researchers at once. A variety of open source (e.g. Microsoft Research World Wide Telescope) and commercial software platforms allow the interaction with e.g. in-situ recorded video, vector maps, terrain, textured geometry, point cloud and volumetric data in four dimensions. Data can be put into a holistic, georeferenced context and viewed on scales stretching from centimeters to global. Several input devices from joysticks to gestures and vocalized commands allow interaction with the simulation, depending on individual preference. Annotations added to the dataset during the simulation session catalyze the following quantitative evaluation. Both the special simulator design, making data perception a group experience, and the ability to connect remote instances or scaled down versions of ARENA over the Internet are significant advantages over established immersive simulation environments.

  13. Reconstruction and exploration of three-dimensional confocal microscopy data in an immersive virtual environment.

    PubMed

    Ai, Zhuming; Chen, Xue; Rasmussen, Mary; Folberg, Robert

    2005-07-01

    An immersive virtual environment for interactive three-dimensional reconstruction and exploration of confocal microscopy data is presented. For some structures automatic alignment of serial sections can lead to geometric distortions. The superior visual feedback of a Virtual Reality system is used to aid in registering and aligning serial sections interactively. An ImmersaDesk Virtual Reality display system is used for display and interaction with the volumetric confocal data. Detailed methods for handling both single-section and multi-section confocal data are described. PMID:15893451

  14. CamMedNP: Building the Cameroonian 3D structural natural products database for virtual screening

    PubMed Central

    2013-01-01

    Background Computer-aided drug design (CADD) often involves virtual screening (VS) of large compound datasets and the availability of such is vital for drug discovery protocols. We present CamMedNP - a new database beginning with more than 2,500 compounds of natural origin, along with some of their derivatives which were obtained through hemisynthesis. These are pure compounds which have been previously isolated and characterized using modern spectroscopic methods and published by several research teams spread across Cameroon. Description In the present study, 224 distinct medicinal plant species belonging to 55 plant families from the Cameroonian flora have been considered. About 80 % of these have been previously published and/or referenced in internationally recognized journals. For each compound, the optimized 3D structure, drug-like properties, plant source, collection site and currently known biological activities are given, as well as literature references. We have evaluated the “drug-likeness” of this database using Lipinski’s “Rule of Five”. A diversity analysis has been carried out in comparison with the ChemBridge diverse database. Conclusion CamMedNP could be highly useful for database screening and natural product lead generation programs. PMID:23590173
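
    As a concrete illustration of the drug-likeness screen mentioned above, the following Python sketch applies Lipinski's Rule of Five to precomputed molecular descriptors. The descriptor names and the convention of allowing at most one violation are assumptions made here for illustration; the CamMedNP workflow itself may differ.

    ```python
    def passes_lipinski_rule_of_five(mol_props, max_violations=1):
        """Return True if a compound satisfies Lipinski's Rule of Five.

        mol_props is a dict of precomputed descriptors:
        'mw' (molecular weight, Da), 'logp' (octanol-water logP),
        'hbd' (H-bond donors), 'hba' (H-bond acceptors).
        Compounds are commonly allowed at most one violation.
        """
        violations = 0
        violations += mol_props['mw'] > 500
        violations += mol_props['logp'] > 5
        violations += mol_props['hbd'] > 5
        violations += mol_props['hba'] > 10
        return violations <= max_violations

    # Illustrative descriptor values for a hypothetical natural product.
    print(passes_lipinski_rule_of_five({'mw': 354.4, 'logp': 2.1, 'hbd': 3, 'hba': 6}))
    ```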

  15. Toward virtual anatomy: a stereoscopic 3-D interactive multimedia computer program for cranial osteology.

    PubMed

    Trelease, R B

    1996-01-01

    Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures. PMID:8793223

  16. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    PubMed

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes, which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera and the shape of the eyelids; in the case of photographs, they also lack depth. Hence, in order to get full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup, in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. PMID:25982719

  17. Workshop Report on Virtual Worlds and Immersive Environments

    NASA Technical Reports Server (NTRS)

    Langhoff, Stephanie R.; Cowan-Sharp, Jessy; Dodson, Karen E.; Damer, Bruce; Ketner, Bob

    2009-01-01

    The workshop revolved around three framing ideas or scenarios about the evolution of virtual environments: 1. Remote exploration: The ability to create high fidelity environments rendered from external data or models such that exploration, design and analysis that is truly interoperable with the physical world can take place within them. 2. We all get to go: The ability to engage anyone in being a part of or contributing to an experience (such as a space mission), no matter their training or location. It is the creation of a new paradigm for education, outreach, and the conduct of science in society that is truly participatory. 3. Become the data: A vision of a future where boundaries between the physical and the virtual have ceased to be meaningful. What would this future look like? Is this plausible? Is it desirable? Why and why not?

  18. Virtual immersion for post-stroke hand rehabilitation therapy.

    PubMed

    Tsoupikova, Daria; Stoykov, Nikolay S; Corrigan, Molly; Thielbar, Kelly; Vick, Randy; Li, Yu; Triandafilou, Kristen; Preuss, Fabian; Kamper, Derek

    2015-02-01

    Stroke is the leading cause of serious, long-term disability in the United States. Impairment of upper extremity function is a common outcome following stroke, often to the detriment of lifestyle and employment opportunities. While the upper extremity is a natural target for therapy, treatment may be hampered by limitations in baseline capability, as lack of success may discourage arm and hand use. We developed a virtual reality (VR) system in order to encourage repetitive task practice. This system combined an assistive glove with a novel VR environment. A set of exercises for this system was developed to encourage specific movements. Six stroke survivors with chronic upper extremity hemiparesis volunteered to participate in a pilot study in which they completed 18 one-hour training sessions with the VR system. Performance with the system was recorded across the 18 training sessions. Clinical evaluations of motor control were conducted at three time points: prior to initiation of training, following the end of training, and 1 month later. Subjects displayed significant improvement on performance of the virtual tasks over the course of the training, although for the clinical outcome measures only lateral pinch showed significant improvement. Future expansion to multi-user virtual environments may extend the benefits of this system for stroke survivors with hemiparesis by furthering engagement in the rehabilitation exercises. PMID:25558845

  19. The Effect of 3D Virtual Learning Environment on Secondary School Third Grade Students' Attitudes toward Mathematics

    ERIC Educational Resources Information Center

    Simsek, Irfan

    2016-01-01

    This research, conducted in Second Life, a three-dimensional online virtual world, aims to reveal the effects on student attitudes toward mathematics courses and to design activities that will enable third-grade secondary school students (primary education seventh grade) to see 3D objects in mathematics courses in a…

  20. A 3-D Virtual Reality Model of the Sun and the Moon for E-Learning at Elementary Schools

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Lin, Ching-Ling; Wang, Sheng-Min

    2010-01-01

    The relative positions of the sun, moon, and earth, their movements, and their relationships are abstract and difficult to understand astronomical concepts in elementary school science. This study proposes a three-dimensional (3-D) virtual reality (VR) model named the "Sun and Moon System." This e-learning resource was designed by combining…

  1. A new high-aperture glycerol immersion objective lens and its application to 3D-fluorescence microscopy.

    PubMed

    Martini, N; Bewersdorf, J; Hell, S W

    2002-05-01

    High-resolution light microscopy of glycerol-mounted biological specimens is performed almost exclusively with oil immersion lenses. The reason is that the index of refraction of the oil and the cover slip of approximately 1.51 is close to that of approximately 1.45 of the glycerol mountant, so that refractive index mismatch-induced spherical aberrations are tolerable to some extent. Here we report the application of novel cover glass-corrected glycerol immersion lenses of high numerical aperture (NA) and the avoidance of these aberrations. The new lenses feature a semi-aperture angle of 68.5 degrees, which is slightly larger than that of the diffraction-limited 1.4 NA oil immersion lenses. The glycerol lenses are corrected for a quartz cover glass of 220 microm thickness and for an 80% glycerol-water immersion solution. Featuring an aberration correction collar, the lens can adapt to glycerol concentrations ranging between 72% and 88%, to slight variations of the temperature, and to the cover glass thickness. As the refractive index mismatch-induced aberrations are particularly important to quantitative confocal fluorescence microscopy, we investigated the axial sectioning ability and the axial chromatic aberrations in such a microscope as well as the image brightness as a function of the penetration depth. Whereas there is a significant decrease in image brightness associated with oil immersion, this decrease is absent with the glycerol immersion system. In addition, we show directly the compression of the optic axis in the case of oil immersion and its absence in the glycerol system. The unique advantages of these new lenses in high-resolution microscopy with two coherently used opposing lenses, such as 4Pi-microscopy, are discussed. PMID:12000554

  2. Fusion of image and laser-scanning data in a large-scale 3D virtual environment

    NASA Astrophysics Data System (ADS)

    Shih, Jhih-Syuan; Lin, Ta-Te

    2013-05-01

    Construction of large-scale 3D virtual environments is important in many fields, such as robotic navigation, urban planning, transportation, and remote sensing. Laser scanning is the most common approach used to construct 3D models. This paper proposes an automatic method to fuse image and laser-scanning data into a large-scale 3D virtual environment. The system comprises a laser-scanning device installed on a robot platform and software for data fusion and visualization. The algorithms for data fusion and scene integration are presented. Experiments were performed on the reconstruction of outdoor scenes to test and demonstrate the functionality of the system. We also discuss the efficacy of the system and the technical problems involved in the proposed method.

  3. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    With the development of 3-D virtual reality, motion tracking is becoming an essential part of the entertainment, medical, sports, education and industrial fields. Virtual human characters in digital animation and game applications have been controlled by interfacing devices such as mice, joysticks and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion-capture system consisting of optical sensors, and link its data to a 3-D game character in real time. The prototype setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  4. Novel 3D modeling methods for virtual fabrication and EDA compatible design of MEMS via parametric libraries

    NASA Astrophysics Data System (ADS)

    Schröpfer, Gerold; Lorenz, Gunar; Rouvillois, Stéphane; Breit, Stephen

    2010-06-01

    This paper provides a brief summary of the state-of-the-art of MEMS-specific modeling techniques and describes the validation of new models for a parametric component library. Two recently developed 3D modeling tools are described in more detail. The first one captures a methodology for designing MEMS devices and simulating them together with integrated electronics within a standard electronic design automation (EDA) environment. The MEMS designer can construct the MEMS model directly in a 3D view. The resulting 3D model differs from a typical feature-based 3D CAD modeling tool in that there is an underlying behavioral model and parametric layout associated with each MEMS component. The model of the complete MEMS device that is shared with the standard EDA environment can be fully parameterized with respect to manufacturing- and design-dependent variables. Another recent innovation is a process modeling tool that allows accurate and highly realistic visualization of the step-by-step creation of 3D micro-fabricated devices. The novelty of the tool lies in its use of voxels (3D pixels) rather than conventional 3D CAD techniques to represent the 3D geometry. Case studies for experimental devices are presented showing how the examination of these virtual prototypes can reveal design errors before mask tape out, support process development before actual fabrication and also enable failure analysis after manufacturing.

  5. InSPAL: A Novel Immersive Virtual Learning Programme.

    PubMed

    Byrne, Julia; Ip, Horace H S; Shuk-Ying Lau, Kate; Chen Li, Richard; Tso, Amy; Choi, Catherine

    2015-01-01

    In this paper we introduce the Interactive Sensory Program for Affective Learning (InSPAL), a pioneering virtual learning programme designed for severely intellectually disabled (SID) students, who have cognitive deficiencies and other sensory-motor handicaps and thus need more help and attention in overcoming their learning difficulties. By combining and integrating interactive media and virtual reality technology with the principles of art therapy and relevant pedagogical techniques, InSPAL aims to strengthen SID students' pre-learning abilities, promote their self-awareness, decrease behavioral interferences with learning as well as social interaction, enhance their communication and thus promote their quality of life. Results of our study show that students who went through our programme were more focused, and their ability to do things independently increased by 15%. Moreover, 50% of the students showed a marked improvement in the ability to raise their hands in response, thus increasing their communication skills. The use of therapeutic interventions enabled better control of the body, mind and emotions, resulting in greater performance and better participation. PMID:26799893

  6. 'My Virtual Dream': Collective Neurofeedback in an Immersive Art Environment.

    PubMed

    Kovacevic, Natasha; Ritter, Petra; Tays, William; Moreno, Sylvain; McIntosh, Anthony Randal

    2015-01-01

    While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants' baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions. PMID:26154513
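
    The neurofeedback described above targets relative spectral power in the alpha and beta bands. The Python sketch below shows one standard way to compute relative band power from a single EEG channel using Welch's method; the band limits, normalisation range and synthetic data are editorial assumptions, not the study's settings.

    ```python
    import numpy as np
    from scipy.signal import welch

    def relative_band_power(eeg, fs, band, total_range=(1.0, 40.0)):
        """Relative spectral power of one EEG channel in a frequency band.

        eeg: 1-D signal, fs: sampling rate (Hz), band: (low, high) in Hz,
        e.g. (8, 12) for alpha or (13, 30) for beta. The band limits and the
        1-40 Hz normalisation range are illustrative choices.
        """
        freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        in_total = (freqs >= total_range[0]) & (freqs <= total_range[1])
        return psd[in_band].sum() / psd[in_total].sum()

    # Example with synthetic data: 30 s of noise sampled at 256 Hz.
    rng = np.random.default_rng(0)
    eeg_channel = rng.standard_normal(30 * 256)
    print(relative_band_power(eeg_channel, fs=256, band=(8, 12)))
    ```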

  7. Magnetic resonance virtual histology for embryos: 3D atlases for automated high-throughput phenotyping.

    PubMed

    Cleary, Jon O; Modat, Marc; Norris, Francesca C; Price, Anthony N; Jayakody, Sujatha A; Martinez-Barbera, Juan Pedro; Greene, Nicholas D E; Hawkes, David J; Ordidge, Roger J; Scambler, Peter J; Ourselin, Sebastien; Lythgoe, Mark F

    2011-01-15

    Ambitious international efforts are underway to produce gene-knockout mice for each of the 25,000 mouse genes, providing a new platform to study mammalian development and disease. Robust, large-scale methods for morphological assessment of prenatal mice will be essential to this work. Embryo phenotyping currently relies on histological techniques but these are not well suited to large volume screening. The qualitative nature of these approaches also limits the potential for detailed group analysis. Advances in non-invasive imaging techniques such as magnetic resonance imaging (MRI) may surmount these barriers. We present a high-throughput approach to generate detailed virtual histology of the whole embryo, combined with the novel use of a whole-embryo atlas for automated phenotypic assessment. Using individual 3D embryo MRI histology, we identified new pituitary phenotypes in Hesx1 mutant mice. Subsequently, we used advanced computational techniques to produce a whole-body embryo atlas from 6 CD-1 embryos, creating an average image with greatly enhanced anatomical detail, particularly in CNS structures. This methodology enabled unsupervised assessment of morphological differences between CD-1 embryos and Chd7 knockout mice (n=5 Chd7(+/+) and n=8 Chd7(+/-), C57BL/6 background). Using a new atlas generated from these three groups, quantitative organ volumes were automatically measured. We demonstrated a difference in mean brain volumes between Chd7(+/+) and Chd7(+/-) mice (42.0 vs. 39.1mm(3), p<0.05). Differences in whole-body, olfactory and normalised pituitary gland volumes were also found between CD-1 and Chd7(+/+) mice (C57BL/6 background). Our work demonstrates the feasibility of combining high-throughput embryo MRI with automated analysis techniques to distinguish novel mouse phenotypes. PMID:20656039
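
    As a minimal illustration of the automated organ-volume measurement described above, the following Python sketch computes an organ volume from an atlas-propagated label image by counting labelled voxels and multiplying by the voxel volume. The label value and voxel size are placeholders, not those used in the study.

    ```python
    import numpy as np

    def organ_volume_mm3(label_volume, organ_label, voxel_size_mm):
        """Organ volume from an atlas-propagated label image.

        label_volume: 3-D integer array of anatomical labels,
        organ_label: label value of the organ of interest,
        voxel_size_mm: (dx, dy, dz) voxel dimensions in millimetres.
        """
        voxel_volume = float(np.prod(voxel_size_mm))
        return int(np.sum(label_volume == organ_label)) * voxel_volume

    # Example: a toy 100^3 label image in which label 3 marks "brain".
    labels = np.zeros((100, 100, 100), dtype=np.uint8)
    labels[20:80, 20:80, 20:80] = 3
    print(organ_volume_mm3(labels, organ_label=3, voxel_size_mm=(0.05, 0.05, 0.05)))
    ```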

  8. Research on the key technologies of 3D spatial data organization and management for virtual building environments

    NASA Astrophysics Data System (ADS)

    Gong, Jun; Zhu, Qing

    2006-10-01

    As a special case of VGE in the fields of AEC (architecture, engineering and construction), the Virtual Building Environment (VBE) has attracted broad attention. Highly complex, large-scale 3D spatial data is the main bottleneck of VBE applications, so 3D spatial data organization and management becomes the core technology for VBE. This paper puts forward a 3D spatial data model for VBE that can be implemented with high performance. The inherent storage method of CAD data introduces redundancy and does not address efficient visualization, which is a practical bottleneck when integrating CAD models, so an efficient method to integrate CAD model data is put forward. Moreover, since 3D spatial indices based on the R-tree are usually limited by low efficiency due to severe overlap of sibling nodes and uneven node sizes, a new node-choosing algorithm for the R-tree is proposed.
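
    For context on the R-tree issue raised above, the Python sketch below shows the classic least-enlargement node-choosing heuristic that standard R-trees use during insertion. This is the baseline the paper's new node-choosing algorithm aims to improve on, not the proposed algorithm itself.

    ```python
    def bbox_volume(box):
        """Volume of a 3D axis-aligned box ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
        (x0, y0, z0), (x1, y1, z1) = box
        return max(0.0, x1 - x0) * max(0.0, y1 - y0) * max(0.0, z1 - z0)

    def enlarge(box, other):
        """Smallest box enclosing both boxes."""
        (a0, b0, c0), (a1, b1, c1) = box
        (d0, e0, f0), (d1, e1, f1) = other
        return ((min(a0, d0), min(b0, e0), min(c0, f0)),
                (max(a1, d1), max(b1, e1), max(c1, f1)))

    def choose_subtree(children, new_box):
        """Guttman-style choice: pick the child whose bounding box needs the least
        volume enlargement to hold the new entry (ties broken by smaller volume)."""
        def cost(child_box):
            grown = bbox_volume(enlarge(child_box, new_box)) - bbox_volume(child_box)
            return (grown, bbox_volume(child_box))
        return min(children, key=cost)

    # Example: pick between two candidate child boxes for a small new object.
    children = [((0, 0, 0), (10, 10, 3)), ((8, 8, 0), (20, 20, 6))]
    print(choose_subtree(children, ((9, 9, 1), (9.5, 9.5, 1.5))))
    ```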

  9. The Immersive Virtual Reality Experience: A Typology of Users Revealed Through Multiple Correspondence Analysis Combined with Cluster Analysis Technique.

    PubMed

    Rosa, Pedro J; Morais, Diogo; Gamito, Pedro; Oliveira, Jorge; Saraiva, Tomaz

    2016-03-01

    Immersive virtual reality is thought to be advantageous by leading to higher levels of presence. However, and despite users getting actively involved in immersive three-dimensional virtual environments that incorporate sound and motion, there are individual factors, such as age, video game knowledge, and the predisposition to immersion, that may be associated with the quality of virtual reality experience. Moreover, one particular concern for users engaged in immersive virtual reality environments (VREs) is the possibility of side effects, such as cybersickness. The literature suggests that at least 60% of virtual reality users report having felt symptoms of cybersickness, which reduces the quality of the virtual reality experience. The aim of this study was thus to profile the right user to be involved in a VRE through head-mounted display. To examine which user characteristics are associated with the most effective virtual reality experience (lower cybersickness), a multiple correspondence analysis combined with cluster analysis technique was performed. Results revealed three distinct profiles, showing that the PC gamer profile is more associated with higher levels of virtual reality effectiveness, that is, higher predisposition to be immersed and reduced cybersickness symptoms in the VRE than console gamer and nongamer. These findings can be a useful orientation in clinical practice and future research as they help identify which users are more predisposed to benefit from immersive VREs. PMID:26985781

  10. Special Section: New Ways to Detect Colon Cancer 3-D virtual screening now being used

    MedlinePlus

    ... tech medical fields of biomedical visualization, computer graphics, virtual reality, and multimedia. The year was 1994. Kaufman's "two- ... organ, like the colon—and view it in virtual reality." Later, he and his team used it with ...

  11. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real time (2.5 s per pose estimate), which can be improved by implementation in C++. Error analysis produced a 3-mm distance error and a 2.5-degree orientation error on average. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
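
    The navigational error reported above compares a calculated camera pose with a measured one. The following Python sketch shows a generic way to compute such translation and orientation errors from rotation matrices and positions; it is an editorial illustration, not code from the described system.

    ```python
    import numpy as np

    def pose_error(R_est, t_est, R_true, t_true):
        """Translation and rotation error between two camera poses.

        R_*: 3x3 rotation matrices, t_*: 3-vector positions (same units,
        e.g. millimetres). The angular error is the angle of the relative
        rotation R_true^T @ R_est.
        """
        dist_err = float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_true)))
        R_rel = np.asarray(R_true).T @ np.asarray(R_est)
        cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
        ang_err_deg = float(np.degrees(np.arccos(cos_angle)))
        return dist_err, ang_err_deg

    # Example: a 3 mm offset combined with a 2.5 degree rotation about the z-axis.
    theta = np.radians(2.5)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    print(pose_error(Rz, [3.0, 0.0, 0.0], np.eye(3), [0.0, 0.0, 0.0]))
    ```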

  12. Cultivating Imagination: Development and Pilot Test of a Therapeutic Use of an Immersive Virtual Reality CAVE

    PubMed Central

    Brennan, Patricia Flatley; Nicolalde, F. Daniel; Ponto, Kevin; Kinneberg, Megan; Freese, Vito; Paz, Dana

    2013-01-01

    As informatics applications grow from being data collection tools to platforms for action, the boundary between what constitutes informatics applications and therapeutic interventions begins to blur. Emerging computer-driven technologies such as virtual reality (VR) and mHealth apps may serve as clinical interventions. As part of a larger project intended to provide complements to cognitive behavioral approaches to health behavior change, an interactive scenario was designed to permit unstructured play inside an immersive 6-sided VR CAVE. In this pilot study we examined the technical and functional performance of the CAVE scenario, human tolerance of immersive CAVE experiences, and explored human imagination and the manner in which activity in the CAVE scenarios varied by an individual’s level of imagination. Nine adult volunteers participated in a pilot-and-feasibility study. Participants tolerated 15-minute exposures to the scenarios and navigated through the virtual world. Relationships between personal characteristics and behaviors are reported and explored. PMID:24551327

  13. The illusion of presence in immersive virtual reality during an fMRI brain scan.

    PubMed

    Hoffman, Hunter G; Richards, Todd; Coda, Barbara; Richards, Anne; Sharar, Sam R

    2003-04-01

    The essence of immersive virtual reality (VR) is the illusion it gives users that they are inside the computer-generated virtual environment. This unusually strong illusion is theorized to contribute to the successful pain reduction observed in burn patients who go into VR during woundcare (www.vrpain.com) and to successful VR exposure therapy for phobias and post-traumatic stress disorder (PTSD). The present study demonstrated for the first time that subjects could experience a strong illusion of presence during an fMRI despite the constraints of the fMRI magnet bore (i.e., immobilized head and loud ambient noise). PMID:12804024

  14. Immersive virtual reality as a rehabilitative technology for phantom limb experience: a protocol.

    PubMed

    Murray, Craig D; Patchick, Emma; Pettifer, Stephen; Caillette, Fabrice; Howard, Toby

    2006-04-01

    This paper describes a study protocol to investigate the use of immersive virtual reality as a treatment for amputees' phantom limb pain. This work builds upon prior research using mirror box therapy to induce vivid sensations of movement originating from the muscles and joints of amputees' phantom limbs. The present project transposes movements of amputees' anatomical limbs into movements of a virtual limb presented in the phenomenal space of their phantom limb. It is anticipated that the protocol described here will help reduce phantom limb pain. PMID:16640472

  15. Studying social interactions through immersive virtual environment technology: virtues, pitfalls, and future challenges

    PubMed Central

    Bombari, Dario; Schmid Mast, Marianne; Canadas, Elena; Bachmann, Manuel

    2015-01-01

    The goal of the present review is to explain how immersive virtual environment technology (IVET) can be used for the study of social interactions and how the use of virtual humans in immersive virtual environments can advance research and application in many different fields. Researchers studying individual differences in social interactions are typically interested in keeping the behavior and the appearance of the interaction partner constant across participants. With IVET researchers have full control over the interaction partners, can standardize them while still keeping the simulation realistic. Virtual simulations are valid: growing evidence shows that indeed studies conducted with IVET can replicate some well-known findings of social psychology. Moreover, IVET allows researchers to subtly manipulate characteristics of the environment (e.g., visual cues to prime participants) or of the social partner (e.g., his/her race) to investigate their influences on participants’ behavior and cognition. Furthermore, manipulations that would be difficult or impossible in real life (e.g., changing participants’ height) can be easily obtained with IVET. Beside the advantages for theoretical research, we explore the most recent training and clinical applications of IVET, its integration with other technologies (e.g., social sensing) and future challenges for researchers (e.g., making the communication between virtual humans and participants smoother). PMID:26157414

  16. Immersive virtual reality platform for medical training: a "killer-application".

    PubMed

    2000-01-01

    The Medical Readiness Trainer (MRT) integrates fully immersive Virtual Reality (VR), highly advanced medical simulation technologies, and medical data to enable unprecedented medical education and training. The flexibility offered by the MRT environment makes it a practical teaching tool today, and in the near future it will serve as an ideal vehicle for facilitating the transition to the next level of medical practice, i.e., telepresence and next generation Internet-based collaborative learning. PMID:10977542

  17. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    NASA Astrophysics Data System (ADS)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of visible and approachable ways of training surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are not widely available due to the high cost of the equipment. Using modern technologies such as virtual reality and hand-movement tracking, we aim to create an innovative method for learning operative techniques in a 3D game format, which can make the educational process interesting and effective. Creating a 3D virtual simulator will solve several conceptual problems at once: practical skills can be improved with unlimited practice time and without risk to the patient; the operating-room environment and anatomical body structures are rendered with high realism; game mechanics ease information perception and accelerate memorization of methods; and the program is widely accessible.

  18. Effects of Exercise in Immersive Virtual Environments on Cortical Neural Oscillations and Mental State.

    PubMed

    Vogt, Tobias; Herpers, Rainer; Askew, Christopher D; Scherfgen, David; Strüder, Heiko K; Schneider, Stefan

    2015-01-01

    Virtual reality environments are increasingly being used to encourage individuals to exercise more regularly, including as part of treatment for those with mental health or neurological disorders. The success of virtual environments likely depends on whether a sense of presence can be established, where participants become fully immersed in the virtual environment. Exposure to virtual environments is associated with physiological responses, including cortical activation changes. Whether the addition of real exercise within a virtual environment alters the sense of presence, or the accompanying physiological changes, is not known. In a randomized and controlled study design, moderate-intensity Exercise (i.e., self-paced cycling) and No-Exercise (i.e., automatic propulsion) trials were performed within three levels of virtual environment exposure. Each trial was 5 minutes in duration and was followed by posttrial assessments of heart rate, perceived sense of presence, EEG, and mental state. Changes in psychological strain and physical state were generally mirrored by neural activation patterns. Furthermore, these changes indicated that exercise augments the demands of virtual environment exposures and this likely contributed to an enhanced sense of presence. PMID:26366305

  19. A PC-based high-quality and interactive virtual endoscopy navigating system using 3D texture based volume rendering.

    PubMed

    Hwang, Jin-Woo; Lee, Jong-Min; Kim, In-Young; Song, In-Ho; Lee, Yong-Hee; Kim, Sun I.

    2003-05-01

    For virtual endoscopy, an alternative to optical endoscopy, visual quality and interactivity are crucial. One solution is to use the 3D texture-map-based volume rendering method, which offers high rendering speed without reducing visual quality. However, it is difficult to apply the method to virtual endoscopy. First, 3D texture mapping requires a high-end graphics workstation. Second, texture memory limits reduce the frame rate. Third, lack of shading reduces visual quality significantly. As 3D texture mapping has recently become available on personal computers, we developed an interactive navigation system using 3D texture mapping on a personal computer. We divided the volume data into small cubes and tested whether the cubes had meaningful data. Only the cubes that passed the test were loaded into the texture memory and rendered. With the amount of data to be rendered minimized, rendering speed increased remarkably. We also improved visual quality by implementing full Phong shading based on the iso-surface shading method without sacrificing interactivity. With the developed navigation system, 256 x 256 x 256 sized brain MRA data was interactively explored with good image quality. PMID:12725966
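
    The empty-space-skipping step described above (dividing the volume into small cubes and loading only meaningful ones into texture memory) can be sketched in a few lines of Python, as below. The brick size and the simple intensity-threshold test are assumptions made for illustration; the original test may differ.

    ```python
    import numpy as np

    def non_empty_bricks(volume, brick_size=32, threshold=0):
        """Split a volume into cubic bricks and keep only 'meaningful' ones.

        A brick is kept if any voxel exceeds `threshold` (a simple stand-in for
        the meaningful-data test). Returns a list of (origin, brick) pairs that
        would be uploaded as 3D textures.
        """
        kept = []
        nz, ny, nx = volume.shape
        for z in range(0, nz, brick_size):
            for y in range(0, ny, brick_size):
                for x in range(0, nx, brick_size):
                    brick = volume[z:z + brick_size, y:y + brick_size, x:x + brick_size]
                    if brick.max() > threshold:
                        kept.append(((z, y, x), brick))
        return kept

    # Example: a mostly empty 128^3 volume with one bright region.
    vol = np.zeros((128, 128, 128), dtype=np.uint8)
    vol[40:60, 40:60, 40:60] = 200
    print(len(non_empty_bricks(vol)), "of", (128 // 32) ** 3, "bricks kept")
    ```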

  20. Crowd behaviour during high-stress evacuations in an immersive virtual environment.

    PubMed

    Moussaïd, Mehdi; Kapadia, Mubbasir; Thrash, Tyler; Sumner, Robert W; Gross, Markus; Helbing, Dirk; Hölscher, Christoph

    2016-09-01

    Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects. PMID:27605166

  1. A Methodology for Elaborating Activities for Higher Education in 3D Virtual Worlds

    ERIC Educational Resources Information Center

    Bravo, Javier; García-Magariño, Iván

    2015-01-01

    Distance education started out limited in comparison to traditional education. Distance teachers and educational organizations have overcome most of these limits, but some still remain as challenges. One of these challenges is to learn concepts collaboratively in an immersive way, similar to education "in situ".…

  2. Virtually compliant: Immersive video gaming increases conformity to false computer judgments.

    PubMed

    Weger, Ulrich W; Loughnan, Stephen; Sharma, Dinkar; Gonidis, Lazaros

    2015-08-01

    Real-life encounters with face-to-face contact are on the decline in a world in which many routine tasks are delegated to virtual characters-a development that bears both opportunities and risks. Interacting with such virtual-reality beings is particularly common during role-playing videogames, in which we incarnate into the virtual reality of an avatar. Video gaming is known to lead to the training and development of real-life skills and behaviors; hence, in the present study we sought to explore whether role-playing video gaming primes individuals' identification with a computer enough to increase computer-related social conformity. Following immersive video gaming, individuals were indeed more likely to give up their own best judgment and to follow the vote of computers, especially when the stimulus context was ambiguous. Implications for human-computer interactions and for our understanding of the formation of identity and self-concept are discussed. PMID:25585527

  3. A Fully Immersive Set-Up for Remote Interaction and Neurorehabilitation Based on Virtual Body Ownership

    PubMed Central

    Perez-Marcos, Daniel; Solazzi, Massimiliano; Steptoe, William; Oyekoya, Oyewole; Frisoli, Antonio; Weyrich, Tim; Steed, Anthony; Tecchia, Franco; Slater, Mel; Sanchez-Vives, Maria V.

    2012-01-01

    Although telerehabilitation systems represent one of the most technologically appealing clinical solutions for the immediate future, they still present limitations that prevent their standardization. Here we propose an integrated approach that includes three key and novel factors: (a) fully immersive virtual environments, including virtual body representation and ownership; (b) multimodal interaction with remote people and virtual objects including haptic interaction; and (c) a physical representation of the patient at the hospital through embodiment agents (e.g., as a physical robot). The importance of secure and rapid communication between the nodes is also stressed and an example implemented solution is described. Finally, we discuss the proposed approach with reference to the existing literature and systems. PMID:22787454

  4. Drumming in immersive virtual reality: the body shapes the way we play.

    PubMed

    Kilteni, Konstantina; Bergstrom, Ilias; Slater, Mel

    2013-04-01

    It has been shown that it is possible to generate perceptual illusions of ownership in immersive virtual reality (IVR) over a virtual body seen from first person perspective, in other words over a body that visually substitutes the person's real body. This can occur even when the virtual body is quite different in appearance from the person's real body. However, investigation of the psychological, behavioral and attitudinal consequences of such body transformations remains an interesting problem with much to be discovered. Thirty six Caucasian people participated in a between-groups experiment where they played a West-African Djembe hand drum while immersed in IVR and with a virtual body that substituted their own. The virtual hand drum was registered with a physical drum. They were alongside a virtual character that played a drum in a supporting, accompanying role. In a baseline condition participants were represented only by plainly shaded white hands, so that they were able merely to play. In the experimental condition they were represented either by a casually dressed dark-skinned virtual body (Casual Dark-Skinned - CD) or by a formal suited light-skinned body (Formal Light-Skinned - FL). Although participants of both groups experienced a strong body ownership illusion towards the virtual body, only those with the CD representation showed significant increases in their movement patterns for drumming compared to the baseline condition and compared with those embodied in the FL body. Moreover, the stronger the illusion of body ownership in the CD condition, the greater this behavioral change. A path analysis showed that the observed behavioral changes were a function of the strength of the illusion of body ownership towards the virtual body and its perceived appropriateness for the drumming task. These results demonstrate that full body ownership illusions can lead to substantial behavioral and possibly cognitive changes depending on the appearance of the virtual

  5. Applications and a three-dimensional desktop environment for an immersive virtual reality system

    NASA Astrophysics Data System (ADS)

    Kageyama, Akira; Masada, Youhei

    2013-08-01

    We developed an application launcher called Multiverse for scientific visualizations in a CAVE-type virtual reality (VR) system. Multiverse can be regarded as a type of three-dimensional (3D) desktop environment. In Multiverse, a user in a CAVE room can browse multiple visualization applications with 3D icons and explore movies that float in the air. Touching one of the movies causes "teleportation" into the application's VR space. After analyzing the simulation data using the application, the user can jump back into Multiverse's VR desktop environment in the CAVE.

  6. An integrated multidisciplinary re-evaluation of the geothermal system at Valles Caldera, New Mexico, using an immersive three-dimensional (3D) visualization environment

    NASA Astrophysics Data System (ADS)

    Fowler, A.; Bennett, S. E.; Wildgoose, M.; Cantwell, C.; Elliott, A. J.

    2012-12-01

    We describe an approach to explore the spatial relationships of a geothermal resource by examining diverse geological, geophysical, and geochemical data sets using the immersive 3-dimensional (3D) visualization capabilities of the UC Davis Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). The KeckCAVES is a facility where stereoscopic images are projected onto four surfaces (three walls and a floor), which the user perceives as a seamless 3D image of the data. The user can manipulate and interact with the data, allowing a more intuitive interpretation of data set relationships than is possible with traditional 2-dimensional techniques. We incorporate multiple data sets of the geothermal system at Valles Caldera, New Mexico: topography, lithology, faults, temperature, alteration mineralogy, and magnetotellurics. With the ability to rapidly and intuitively observe data relationships, we are able to efficiently and rapidly draw conclusions about the subsurface architecture of the Valles Caldera geothermal system. We identify two high-temperature anomalies, one that corresponds with normal faults along the western caldera ring fracture, and one that corresponds with the resurgent dome. A cold-temperature anomaly identified adjacent to the resurgent dome high-temperature anomaly appears to relate to a fault-controlled graben valley that acts as a recharge zone, likely funneling cold meteoric water into the subsurface along normal faults observed on published maps and cross sections. These high-temperature anomalies broadly correspond to subsurface regions where previous magnetotelluric studies have identified low apparent resistivity. Existing hot springs in the Sulfur Springs area correspond to the only location where our modeled 100°C isotherm intersects the ground surface. Correlations between the first occurrence of key alteration minerals (pyrite, chlorite, epidote) in previously drilled boreholes and our temperature model vary, with chlorite showing a

  7. Taking Science Online: Evaluating Presence and Immersion through a Laboratory Experience in a Virtual Learning Environment for Entomology Students

    ERIC Educational Resources Information Center

    Annetta, Leonard; Klesath, Marta; Meyer, John

    2009-01-01

    A 3-D virtual field trip was integrated into an online college entomology course and developed as a trial for the possible incorporation of future virtual environments to supplement online higher education laboratories. This article provides an explanation of the rationale behind creating the virtual experience, the Bug Farm; the method and…

  8. Using a 3D Virtual Supermarket to Measure Food Purchase Behavior: A Validation Study

    PubMed Central

    Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona

    2015-01-01

    Background There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. Objective The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of “presence” (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Methods Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. Results A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real
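
    The analysis above compares virtual and real purchasing with repeated measures mixed models. The Python sketch below shows one plausible form of such a model (a random intercept per participant, fitted with statsmodels) on synthetic data; the column names, food groups and generated values are illustrative, not the study's dataset or exact model specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per participant x environment x food
    # group, holding the percentage of spending in that group.
    rng = np.random.default_rng(1)
    rows = []
    for pid in range(30):
        for env in ("virtual", "real"):
            shares = rng.dirichlet(np.ones(4)) * 100  # 4 food groups summing to 100%
            for group, share in zip(("produce", "bakery", "dairy", "other"), shares):
                rows.append({"participant": pid, "environment": env,
                             "food_group": group, "pct_expenditure": share})
    df = pd.DataFrame(rows)

    # Random-intercept mixed model for one food group: does the expenditure share
    # differ between the virtual and the real supermarket?
    produce = df[df["food_group"] == "produce"]
    model = smf.mixedlm("pct_expenditure ~ environment", produce,
                        groups=produce["participant"]).fit()
    print(model.summary())
    ```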

  9. Immersed boundary Eulerian-Lagrangian 3D simulation of pyroclastic density currents: numerical scheme and experimental validation

    NASA Astrophysics Data System (ADS)

    Doronzo, Domenico Maria; de Tullio, Marco; Pascazio, Giuseppe; Dellino, Pierfrancesco

    2010-05-01

    Pyroclastic density currents are ground hugging, hot, gas-particle flows representing the most hazardous events of explosive volcanism. Their impact on structures is a function of dynamic pressure, which expresses the lateral load that such currents exert over buildings. In this paper we show how analog experiments can be matched with numerical simulations for capturing the essential physics of the multiphase flow. We used an immersed boundary scheme for the mesh generation, which helped in reconstructing the steep velocity and particle concentration gradients near the ground surface. Results show that the calculated values of dynamic pressure agree reasonably with the experimental measurements. These outcomes encourage future application of our method for the assessment of the impact of pyroclastic density currents at the natural scale.
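
    Dynamic pressure, the impact metric discussed above, can be estimated for a dilute gas-particle mixture as shown in the Python sketch below. The linear mixing rule and the example values for gas density, particle density, particle concentration and velocity are editorial assumptions, not quantities taken from the simulations.

    ```python
    def dynamic_pressure(gas_density, particle_density, particle_volume_fraction, velocity):
        """Dynamic pressure of a dilute gas-particle mixture, Pdyn = 0.5 * rho_mix * U^2.

        rho_mix is the bulk density of the mixture from a simple linear mixing
        rule. SI units (kg/m^3 and m/s) give the result in pascals.
        """
        rho_mix = (1.0 - particle_volume_fraction) * gas_density \
                  + particle_volume_fraction * particle_density
        return 0.5 * rho_mix * velocity ** 2

    # Example: hot gas (0.5 kg/m^3), 1% ash by volume (2500 kg/m^3), 20 m/s flow.
    print(dynamic_pressure(0.5, 2500.0, 0.01, 20.0), "Pa")
    ```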

  10. Accident or homicide--virtual crime scene reconstruction using 3D methods.

    PubMed

    Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J

    2013-02-10

    The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver medical internal findings of the body. These 3D data are fused into a whole body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation enable us to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data. PMID:22727689

  11. Automatic 360-deg profilometry of a 3D object using a shearing interferometer and virtual grating

    NASA Astrophysics Data System (ADS)

    Zhang, Yong-Lin; Bu, Guixue

    1996-10-01

    The phase-measuring technique has been widely used in optical precision inspection because of its considerable advantages. We use the phase-measuring technique to design a practical instrument for measuring the 360-degree profile of a 3D object. A novel method that realizes profile detection with higher speed and lower cost is proposed. A phase unwrapping algorithm based on second-order differentiation is developed. A complete 3D shape is reconstructed from a series of line-section profiles corresponding to discrete angular positions of the object. The profile-joining procedure depends only on two fixed parameters and a coordinate transformation.
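
    The reconstruction step described above joins line-section profiles measured at discrete angular positions into a complete 3D shape. The Python sketch below illustrates the idea with a simple cylindrical re-assembly into a point cloud; it is a generic illustration, not the paper's two-parameter joining procedure.

    ```python
    import numpy as np

    def assemble_profiles(radii, heights, angles_deg):
        """Join line-section profiles measured at discrete rotation angles
        into a 3-D point cloud.

        radii: array of shape (n_angles, n_points) with the measured radial
        distance of each surface point, heights: (n_points,) vertical positions,
        angles_deg: (n_angles,) rotation angles of the object.
        """
        pts = []
        for radius_profile, angle in zip(radii, np.radians(angles_deg)):
            x = radius_profile * np.cos(angle)
            y = radius_profile * np.sin(angle)
            pts.append(np.column_stack([x, y, heights]))
        return np.vstack(pts)

    # Example: a cylinder of radius 10 sampled at 36 angles and 50 heights.
    angles = np.arange(0, 360, 10)
    heights = np.linspace(0, 49, 50)
    radii = np.full((angles.size, heights.size), 10.0)
    print(assemble_profiles(radii, heights, angles).shape)  # (1800, 3)
    ```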

  12. Generating virtual textile composite specimens using statistical data from micro-computed tomography: 3D tow representations

    NASA Astrophysics Data System (ADS)

    Rinaldi, Renaud G.; Blacklock, Matthew; Bale, Hrishikesh; Begley, Matthew R.; Cox, Brian N.

    2012-08-01

    Recent work presented a Monte Carlo algorithm based on Markov Chain operators for generating replicas of textile composite specimens that possess the same statistical characteristics as specimens imaged using high resolution x-ray computed tomography. That work represented the textile reinforcement by one-dimensional tow loci in three-dimensional space, suitable for use in the Binary Model of textile composites. Here analogous algorithms are used to generate solid, three-dimensional (3D) tow representations, to provide geometrical models for more detailed failure analyses. The algorithms for generating 3D models are divided into those that refer to the topology of the textile and those that deal with its geometry. The topological rules carry all the information that distinguishes textiles with different interlacing patterns (weaves, braids, etc.) and provide instructions for resolving interpenetrations or ordering errors among tows. They also simplify writing a single computer program that can accept input data for generic textile cases. The geometrical rules adjust the shape and smoothness of the generated virtual specimens to match data from imaged specimens. The virtual specimen generator is illustrated using data for an angle interlock weave, a common 3D textile architecture.

  13. Proteopedia: A Collaborative, Virtual 3D Web-Resource for Protein and Biomolecule Structure and Function

    ERIC Educational Resources Information Center

    Hodis, Eran; Prilusky, Jaime; Sussman, Joel L.

    2010-01-01

    Protein structures are hard to represent on paper. They are large, complex, and three-dimensional (3D)--four-dimensional if conformational changes count! Unlike most of their substrates, which can easily be drawn out in full chemical formula, drawing every atom in a protein would usually be a mess. Simplifications like showing only the surface of…

  14. WeaVR: a self-contained and wearable immersive virtual environment simulation system.

    PubMed

    Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James

    2015-03-01

    We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus. PMID:24737097

  15. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position rather than a set of 3D objects potentially observed by the user. To do so, contrary to previous models that use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes that take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that approximates the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains sometimes exceeding 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based approaches. Finally, we describe different applications of our model when exploring virtual environments. We present different algorithms that can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that relies heavily on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely: depth-of-field blur, camera
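
    The paper's attention model is considerably more elaborate (surface-element representation, simulated reflexes, navigation priors), but the core combination step can be sketched as a weighted blend of a bottom-up saliency map and a top-down relevance map, taking the maximum of the result as the gaze point. The maps and weights below are placeholders, not the authors' implementation.

```python
import numpy as np

def estimate_gaze(bottom_up, top_down, w_bu=0.5, w_td=0.5):
    """Blend a bottom-up saliency map with a top-down relevance map (both HxW,
    values in [0, 1]) and return the pixel with the highest combined attention
    as the estimated screen-space gaze point."""
    combined = w_bu * bottom_up + w_td * top_down
    y, x = np.unravel_index(np.argmax(combined), combined.shape)
    return x, y

h, w = 90, 160
bu = np.random.default_rng(1).random((h, w))    # stand-in saliency (contrast, motion, ...)
td = np.zeros((h, w)); td[40:50, 70:90] = 1.0   # stand-in task relevance (e.g. path ahead)
print(estimate_gaze(bu, td))
```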

  16. Virtually supportive: A feasibility pilot study of an online support group for dementia caregivers in a 3D virtual environment

    PubMed Central

    O’Connor, Mary-Frances; Arizmendi, Brian J.; Kaszniak, Alfred W.

    2014-01-01

    Caregiver support groups effectively reduce the stress of caring for someone with dementia, yet these same caregiving demands can prevent participation in a group. The present feasibility study investigated a virtual online caregiver support group to bring the support group into the home. While online groups have been shown to be helpful, submissions to a message board (vs. live conversation) can feel impersonal. By using avatars, participants interacted via real-time chat in a virtual environment in an 8-week support group. Data indicated lower levels of perceived stress, depression and loneliness across participants. Importantly, satisfaction reports also indicate that caregivers overcame the barriers to participation and had a strong sense of the group's presence. This study provides the framework for an accessible and low-cost online support group for dementia caregivers. The study demonstrates the feasibility of an interactive group in a virtual environment for engaging members in meaningful interaction. PMID:24984911

  17. Identification of potential influenza virus endonuclease inhibitors through virtual screening based on the 3D-QSAR model.

    PubMed

    Kim, J; Lee, C; Chong, Y

    2009-01-01

    Influenza endonucleases have emerged as an attractive target of antiviral therapy for influenza infection. With the purpose of designing a novel antiviral agent with enhanced biological activity against the influenza endonuclease, a three-dimensional quantitative structure-activity relationship (3D-QSAR) model was generated based on 34 influenza endonuclease inhibitors. The comparative molecular similarity index analysis (CoMSIA) with a steric, electrostatic and hydrophobic (SEH) model showed the best correlative and predictive capability (q(2) = 0.763, r(2) = 0.969 and F = 174.785), which provided a pharmacophore composed of an electronegative moiety as well as a bulky hydrophobic group. The CoMSIA model was used as a pharmacophore query in the UNITY search of the ChemDiv compound library to give virtual active compounds. The 3D-QSAR model was then used to predict the activity of the selected compounds, which identified three compounds as the most likely inhibitor candidates. PMID:19343586

  18. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on a Web camera (Webcam) input interface. The interface can replace the mouse for capturing a user's directional intent using a frame-difference method. We divide each Webcam frame into a nine-cell grid and use background registration to detect the moving object. In order to…
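
    A minimal sketch of the nine-cell frame-difference idea, assuming OpenCV is available; the thresholds and the cell-to-direction mapping are illustrative choices, not the authors' values.

```python
import cv2
import numpy as np

def direction_from_motion(prev_gray, curr_gray, thresh=25):
    """Split the absolute frame difference into a 3x3 grid and return the grid
    cell containing the most changed pixels as the user's intended direction."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    h, w = mask.shape
    scores = np.array([[np.count_nonzero(mask[r*h//3:(r+1)*h//3, c*w//3:(c+1)*w//3])
                        for c in range(3)] for r in range(3)])
    r, c = np.unravel_index(np.argmax(scores), scores.shape)
    names = [["up-left", "up", "up-right"],
             ["left", "center", "right"],
             ["down-left", "down", "down-right"]]
    return names[r][c]

# synthetic demo: a bright blob appears in the top-right cell between two frames
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[20:60, 250:300] = 255
print(direction_from_motion(prev, curr))   # -> "up-right"
```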

  19. Virtual Presence and the Mind's Eye in 3-D Online Communities

    NASA Astrophysics Data System (ADS)

    Beacham, R. C.; Denard, H.; Baker, D.

    2011-09-01

    Digital technologies have introduced fundamental changes in the forms, content, and media of communication. Indeed, some have suggested we are in the early stages of a seismic shift comparable to that in antiquity with the transition from a primarily oral culture to one based upon writing. The digital transformation is rapidly displacing the long-standing hegemony of text, and restoring in part social, bodily, oral and spatial elements, but in radically reconfigured forms and formats. Contributing to and drawing upon such changes and possibilities, scholars and those responsible for sites preserving or displaying cultural heritage, have undertaken projects to explore the properties and potential of the online communities enabled by "Virtual Worlds" and related platforms for teaching, collaboration, publication, and new modes of disciplinary research. Others, keenly observing and evaluating such work, are poised to contribute to it. It is crucial that leadership be provided to ensure that serious and sustained investigation be undertaken by scholars who have experience, and achievements, in more traditional forms of research, and who perceive the emerging potential of Virtual World work to advance their investigations. The Virtual Museums Transnational Network will seek to engage such scholars and provide leadership in this emerging and immensely attractive new area of cultural heritage exploration and experience. This presentation reviews examples of the current "state of the art" in heritage based Virtual World initiatives, looking at the new modes of social interaction and experience enabled by such online communities, and some of the achievements and future aspirations of this work.

  20. "The Evolution of e-Learning in the Context of 3D Virtual Worlds"

    ERIC Educational Resources Information Center

    Kotsilieris, Theodore; Dimopoulou, Nikoletta

    2013-01-01

    Information and Communication Technologies (ICT) offer new approaches towards knowledge acquisition and collaboration through distance learning processes. Web-based Learning Management Systems (LMS) have transformed the way that education is conducted nowadays. At the same time, the adoption of Virtual Worlds in the educational process is of great…

  1. Collaboration and Knowledge Sharing Using 3D Virtual World on "Second Life"

    ERIC Educational Resources Information Center

    Rahim, Noor Faridah A.

    2013-01-01

    A collaborative and knowledge sharing virtual activity on "Second Life" using a learner-centred teaching methodology was initiated between Temasek Polytechnic and The Hong Kong Polytechnic University (HK PolyU) in the October 2011 semester. This paper highlights the author's experience in designing and implementing this e-learning…

  2. Determinants of Presence in 3D Virtual Worlds: A Structural Equation Modelling Analysis

    ERIC Educational Resources Information Center

    Chow, Meyrick

    2016-01-01

    There is a growing body of evidence that feeling present in virtual environments contributes to effective learning. Presence is a psychological state of the user; hence, it is generally agreed that individual differences in user characteristics can lead to different experiences of presence. Despite the fact that user characteristics can play a…

  3. The Use of 3D Virtual Learning Environments in Training Foreign Language Pre-Service Teachers

    ERIC Educational Resources Information Center

    Can, Tuncer; Simsek, Irfan

    2015-01-01

    The recent developments in computer and Internet technologies and in three dimensional modelling necessitates the new approaches and methods in the education field and brings new opportunities to the higher education. The Internet and virtual learning environments have changed the learning opportunities by diversifying the learning options not…

  4. Towards a Transcription System of Sign Language for 3D Virtual Agents

    NASA Astrophysics Data System (ADS)

    Do Amaral, Wanessa Machado; de Martino, José Mario

    Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who have been deaf since before acquiring and formally learning a language, written information is often less accessible than content presented in sign. Furthermore, for this community signing is the language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, the recognition and reproduction of signs in these systems is, in general, an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system must carry sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that articulation comes close to reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand the structure and grammar of sign languages.

  5. Active Learning through the Use of Virtual Environments

    ERIC Educational Resources Information Center

    Mayrose, James

    2012-01-01

    Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware, and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and…

  6. Design and application of a virtual reality 3D engine based on rapid indices

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Mai, Jin

    2007-06-01

    This article proposes a data structure for a 3D engine based on rapid indices. Taking a model as the construction unit, this data structure can rapidly build an array of 3D vertex coordinates and arrange those vertices into sequences of triangle strips or triangle fans, which can be rendered quickly by OpenGL. The data structure is easy to extend: it can hold texture coordinates, vertex normals and a model matrix. Models can be added to it, deleted from it, or transformed by the model matrix, so it is flexible. The data structure also improves OpenGL rendering speed when it holds a large amount of data.
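
    An illustrative sketch of such an index-based render unit (the paper's engine is presumably written in C/C++ against OpenGL vertex arrays; the names and fields below are assumptions for illustration only).

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Model:
    """One render unit: shared vertex attributes plus primitive index strips."""
    vertices: np.ndarray                 # (N, 3) float32 positions
    normals: np.ndarray                  # (N, 3) float32 vertex normals
    texcoords: np.ndarray                # (N, 2) float32 texture coordinates
    strips: list = field(default_factory=list)  # lists of indices (triangle strips/fans)
    model_matrix: np.ndarray = field(default_factory=lambda: np.eye(4, dtype=np.float32))

    def transform(self, matrix):
        """Accumulate a 4x4 transform into the model matrix."""
        self.model_matrix = matrix @ self.model_matrix

class Scene:
    """Engine-side container: models can be added, removed, or transformed."""
    def __init__(self):
        self.models = []
    def add(self, model): self.models.append(model)
    def remove(self, model): self.models.remove(model)

# a unit quad expressed as a single triangle strip
quad = Model(
    vertices=np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=np.float32),
    normals=np.tile([0, 0, 1], (4, 1)).astype(np.float32),
    texcoords=np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=np.float32),
    strips=[[0, 1, 2, 3]],
)
scene = Scene(); scene.add(quad)
```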

  7. Virtual touch 3D interactive system for autostereoscopic display with embedded optical sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Wang, Guo-Zhen; Ma, Ming-Ching; Tung, Shang-Yu; Huang, Shu-Yi; Tseng, Hung-Wei; Kuo, Chung-Hong; Li, Chun-Huai

    2011-06-01

    The traditional 3D interactive system, which uses a CCD camera to capture images, is difficult to operate at close range for mobile applications. Therefore, a 3D interactive display with embedded optical sensors was proposed. Based on this optical-sensor system, we propose four different methods that support different functions. The T-mark algorithm can obtain 5-axis information (x, y, z, θ, and φ) of an LED regardless of whether the LED is vertical or inclined to the panel and however it is rotated. The sequential-mark algorithm and the color-filter-based algorithm can support multiple users. Finally, a bare-finger touch system with a sequential illuminator achieves interaction with auto-stereoscopic images using a bare finger. Furthermore, the proposed methods were verified on a 4-inch panel with embedded optical sensors.

  8. Immersive Virtual Environment Technology to Supplement Environmental Perception, Preference and Behavior Research: A Review with Applications.

    PubMed

    Smith, Jordan W

    2015-09-01

    Immersive virtual environment (IVE) technology offers a wide range of potential benefits to research focused on understanding how individuals perceive and respond to built and natural environments. In an effort to broaden awareness and use of IVE technology in perception, preference and behavior research, this review paper describes how IVE technology can be used to complement more traditional methods commonly applied in public health research. The paper also describes a relatively simple workflow for creating and displaying 360° virtual environments of built and natural settings and presents two freely-available and customizable applications that scientists from a variety of disciplines, including public health, can use to advance their research into human preferences, perceptions and behaviors related to built and natural settings. PMID:26378565

  9. Immersive Virtual Environment Technology to Supplement Environmental Perception, Preference and Behavior Research: A Review with Applications

    PubMed Central

    Smith, Jordan W.

    2015-01-01

    Immersive virtual environment (IVE) technology offers a wide range of potential benefits to research focused on understanding how individuals perceive and respond to built and natural environments. In an effort to broaden awareness and use of IVE technology in perception, preference and behavior research, this review paper describes how IVE technology can be used to complement more traditional methods commonly applied in public health research. The paper also describes a relatively simple workflow for creating and displaying 360° virtual environments of built and natural settings and presents two freely-available and customizable applications that scientists from a variety of disciplines, including public health, can use to advance their research into human preferences, perceptions and behaviors related to built and natural settings. PMID:26378565

  10. "Active" and "passive" learning of three-dimensional object structure within an immersive virtual reality environment.

    PubMed

    James, K H; Humphrey, G K; Vilis, T; Corrie, B; Baddour, R; Goodale, M A

    2002-08-01

    We used a fully immersive virtual reality environment to study whether actively interacting with objects would affect subsequent recognition, compared with passively observing the same objects. We found that when participants learned object structure by actively rotating the objects, the objects were recognized faster during a subsequent recognition task than when object structure was learned through passive observation. We also found that participants focused their study time during active exploration on a limited number of object views, while ignoring other views. Overall, our results suggest that allowing active exploration of an object during initial learning can facilitate recognition of that object, perhaps owing to the control that the participant has over the object views upon which they can focus. The virtual reality environment is ideal for studying such processes, allowing realistic interaction with objects while maintaining experimenter control. PMID:12395554

  11. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based system for reconstructing 3D human shape from two silhouettes. First, we synthesize a deformable body model from a 3D human shape database consisting of one hundred whole-body mesh models. All mesh models are homologous, sharing the same topology and the same number of vertices. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM). The ASM allows the body type of the model to be changed with a few parameters. Pose changes are achieved by reconstructing the skeleton structure from joints implanted in the model. By applying pose changes after body-type deformation, our model can represent various body types in any pose. We apply the model to the problem of reconstructing 3D human shape from front and side silhouettes. Our approach simply compares the contours of the model's silhouettes with the input silhouettes; we then use only the torso contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes using a stochastic, derivative-free, non-linear optimization method, CMA-ES.
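
    A compressed sketch of the statistical-shape part: PCA over registered meshes, then a search over the PCA coefficients that minimizes a silhouette discrepancy with CMA-ES. The mesh data, the error function and the use of the third-party `cma` package are placeholders; the paper optimizes contour differences of front and side silhouettes, which is not reproduced here.

```python
import numpy as np
import cma   # third-party CMA-ES package (pip install cma) -- an assumption, not the paper's code

# --- statistical shape model: PCA over registered (homologous) body meshes ---
rng = np.random.default_rng(0)
n_bodies, n_vertices = 100, 500                       # toy stand-ins for the database
meshes = rng.normal(size=(n_bodies, n_vertices * 3))  # each row: flattened (x, y, z) vertices
mean = meshes.mean(axis=0)
U, S, Vt = np.linalg.svd(meshes - mean, full_matrices=False)
basis = Vt[:10]                                       # first 10 principal components

def synthesize(coeffs):
    """Reconstruct a body mesh (N, 3) from a few PCA (ASM) coefficients."""
    return (mean + coeffs @ basis).reshape(-1, 3)

# --- fitting: search the coefficients that minimize a silhouette discrepancy ---
target = synthesize(rng.normal(size=10) * S[:10] / np.sqrt(n_bodies))

def silhouette_error(coeffs):
    """Placeholder for the front/side silhouette-contour difference of the paper."""
    return float(np.mean((synthesize(coeffs) - target) ** 2))

best, _ = cma.fmin2(silhouette_error, np.zeros(10), 0.5,
                    options={'maxfevals': 300, 'verbose': -9})
print(silhouette_error(best))
```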

  12. Scaffold hopping through virtual screening using 2D and 3D similarity descriptors: ranking, voting, and consensus scoring.

    PubMed

    Zhang, Qiang; Muegge, Ingo

    2006-03-01

    The ability to find novel bioactive scaffolds in compound similarity-based virtual screening experiments has been studied comparing Tanimoto-based, ranking-based, voting, and consensus scoring protocols. Ligand sets for seven well-known drug targets (CDK2, COX2, estrogen receptor, neuraminidase, HIV-1 protease, p38 MAP kinase, thrombin) have been assembled such that each ligand represents its own unique chemotype, thus ensuring that each similarity recognition event between ligands constitutes a scaffold hopping event. In a series of virtual screening studies involving 9969 MDDR compounds as negative controls it has been found that atom pair descriptors and 3D pharmacophore fingerprints combined with ranking, voting, and consensus scoring strategies perform well in finding novel bioactive scaffolds. In addition, often superior performance has been observed for similarity-based virtual screening compared to structure-based methods. This finding suggests that information about a target obtained from known bioactive ligands is as valuable as knowledge of the target structures for identifying novel bioactive scaffolds through virtual screening. PMID:16509572
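
    A minimal sketch of Tanimoto-based ranking combined with a simple rank-averaging consensus score over several query ligands; the binary fingerprints below are random stand-ins, not real descriptors from the study.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient between two binary fingerprints."""
    both = np.count_nonzero(a & b)
    either = np.count_nonzero(a | b)
    return both / either if either else 0.0

def consensus_rank(queries, library):
    """Rank each library compound against every query ligand, then use the
    mean rank across queries as a simple consensus score (lower is better)."""
    ranks = []
    for q in queries:
        sims = np.array([tanimoto(q, c) for c in library])
        order = np.argsort(-sims)                 # most similar compound first
        rank_of = np.empty_like(order)
        rank_of[order] = np.arange(1, len(library) + 1)
        ranks.append(rank_of)
    return np.mean(ranks, axis=0)

rng = np.random.default_rng(0)
queries = rng.integers(0, 2, size=(5, 1024), dtype=np.uint8)    # known actives (one per chemotype)
library = rng.integers(0, 2, size=(100, 1024), dtype=np.uint8)  # screening library
scores = consensus_rank(queries, library)
print(np.argsort(scores)[:10])   # top-10 consensus candidates
```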

  13. Combining Immersive Virtual Worlds and Virtual Learning Environments into an Integrated System for Hosting and Supporting Virtual Conferences

    NASA Astrophysics Data System (ADS)

    Polychronis, Nikolaos; Patrikakis, Charalampos; Voulodimos, Athanasios

    In this paper, a proposal for hosting and supporting virtual conferences based on state-of-the-art web technologies and computer-mediated education software is presented. The proposed system consists of a virtual conference venue hosted on the Second Life platform, targeted at hosting synchronous conference sessions, and of a web space created with the e-learning platform Moodle, targeted at serving the needs of asynchronous communication as well as user and content management. The use of Sloodle (the next generation of Moodle software, incorporating virtual-world support capabilities), which up to now has been used only in traditional education, enables the combination of the virtual conference venue and the conference support site into an integrated system for conducting successful and cost-effective virtual conferences.

  14. An exploratory fNIRS study with immersive virtual reality: a new method for technical implementation.

    PubMed

    Seraglia, Bruno; Gamberini, Luciano; Priftis, Konstantinos; Scatturin, Pietro; Martinelli, Massimiliano; Cutini, Simone

    2011-01-01

    For over two decades Virtual Reality (VR) has been a useful tool in several fields, from medical and psychological treatments to industrial and military applications. Only in recent years have researchers begun to study the neural correlates that underlie VR experiences. Although functional Magnetic Resonance Imaging (fMRI) is the most commonly used technique, it suffers from several limitations and problems. Here we present a methodology that involves the use of a new and growing brain imaging technique, functional Near-Infrared Spectroscopy (fNIRS), while participants experience immersive VR. In order to allow proper fNIRS probe application, a custom-made VR helmet was created. To test the adapted helmet, a virtual version of the line bisection task was used. Participants could bisect the lines in a virtual peripersonal or extrapersonal space by manipulating a Nintendo Wiimote® controller to move a virtual laser pointer. Although no neural correlates of the dissociation between peripersonal and extrapersonal space were found, significant hemodynamic activity with respect to baseline was present in the right parietal and occipital areas. Both advantages and disadvantages of the presented methodology are discussed. PMID:22207843

  15. An exploratory fNIRS study with immersive virtual reality: a new method for technical implementation

    PubMed Central

    Seraglia, Bruno; Gamberini, Luciano; Priftis, Konstantinos; Scatturin, Pietro; Martinelli, Massimiliano; Cutini, Simone

    2011-01-01

    For over two decades Virtual Reality (VR) has been a useful tool in several fields, from medical and psychological treatments to industrial and military applications. Only in recent years have researchers begun to study the neural correlates that underlie VR experiences. Although functional Magnetic Resonance Imaging (fMRI) is the most commonly used technique, it suffers from several limitations and problems. Here we present a methodology that involves the use of a new and growing brain imaging technique, functional Near-Infrared Spectroscopy (fNIRS), while participants experience immersive VR. In order to allow proper fNIRS probe application, a custom-made VR helmet was created. To test the adapted helmet, a virtual version of the line bisection task was used. Participants could bisect the lines in a virtual peripersonal or extrapersonal space by manipulating a Nintendo Wiimote® controller to move a virtual laser pointer. Although no neural correlates of the dissociation between peripersonal and extrapersonal space were found, significant hemodynamic activity with respect to baseline was present in the right parietal and occipital areas. Both advantages and disadvantages of the presented methodology are discussed. PMID:22207843

  16. Individual reactions to a multisensory immersive virtual environment: the impact of a wind farm on individuals.

    PubMed

    Ruotolo, Francesco; Senese, Vincenzo Paolo; Ruggiero, Gennaro; Maffei, Luigi; Masullo, Massimiliano; Iachini, Tina

    2012-08-01

    The aim of this study was to assess the impact of a wind farm on individuals by means of an audio-visual methodology that tried to simulate biologically plausible individual-environment interactions. To disentangle the effects of auditory and visual components on cognitive performances and subjective evaluations, unimodal (Audio or Video) and bimodal (Audio + Video) approaches were compared. Participants were assigned to three experimental conditions that reproduced a wind farm by means of an immersive virtual reality system: bimodal condition, reproducing scenarios with both acoustic and visual stimuli; unimodal visual condition, with only visual stimuli; unimodal auditory condition, with only auditory stimuli. While immersed in the virtual scenarios, participants performed tasks assessing verbal fluency, short-term verbal memory, backward counting, and distance estimations (egocentric: how far is the turbine from you?; allocentric: how far is the turbine from the target?). Afterwards, participants reported their degree of visual and noise annoyance. The results revealed that the presence of a visual scenario as compared to the only availability of auditory stimuli may exert a negative effect on resource-demanding cognitive tasks but a positive effect on perceived noise annoyance. This supports the idea that humans perceive the environment holistically and that auditory and visual features are processed in close interaction. PMID:22806673

  17. Web-Based Immersive Virtual Patient Simulators: Positive Effect on Clinical Reasoning in Medical Education

    PubMed Central

    Heiermann, Nadine; Plum, Patrick Sven; Wahba, Roger; Chang, De-Hua; Maus, Martin; Chon, Seung-Hun; Hoelscher, Arnulf H; Stippel, Dirk Ludger

    2015-01-01

    Background: Clinical reasoning is based on the declarative and procedural knowledge of workflows in clinical medicine. Educational approaches such as problem-based learning or mannequin simulators support learning of procedural knowledge. Immersive patient simulators (IPSs) go one step further as they allow an illusionary immersion into a synthetic world. Students can freely navigate an avatar through a three-dimensional environment, interact with the virtual surroundings, and treat virtual patients. Through playful learning with an IPS, medical workflows can be repetitively trained and internalized. As there are only a few university-driven IPSs with a profound amount of medical knowledge available, we developed a university-based IPS framework. Our simulator is free to use and combines a high degree of immersion with in-depth medical content. By adding disease-specific content modules, the simulator framework can be expanded depending on curricular demands. However, these new educational tools compete with traditional teaching. Objective: It was our aim to develop an educational content module that teaches clinical and therapeutic workflows in surgical oncology, and to examine how the use of this module affects student performance. Methods: The new module was based on the declarative and procedural learning targets of the official German medical examination regulations. The module was added to our custom-made IPS named ALICE (Artificial Learning Interface for Clinical Education). ALICE was evaluated with 62 third-year students. Results: Students showed a high degree of motivation when using the simulator, as most of them had fun using it. ALICE showed a positive impact on clinical reasoning, as there was a significant improvement in determining the correct therapy after using the simulator. ALICE also positively affected declarative knowledge, as there was improvement in answering multiple-choice questions before and after simulator use. Conclusions

  18. Development of microgravity, full body functional reach envelope using 3-D computer graphic models and virtual reality technology

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1994-01-01

    In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurement of the maximum reach of occupants of a microgravity environment provides knowledge about maximum functional placement for tasking situations. Calculations of a full-body functional reach envelope for microgravity environments are therefore imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.

  19. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern-projecting technique. PMID:27410124
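
    The back-projection idea, mapping a point of the digitized 3D model to projector pixel coordinates through intrinsic and extrinsic parameters, can be sketched with a standard pinhole model. The parameter values below are placeholders, and the paper's general model and calibration procedure are more involved than this.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points into projector pixel coordinates using a
    pinhole model: p ~ K (R X + t)."""
    cam = points_3d @ R.T + t          # world frame -> projector frame
    pix = cam @ K.T                    # apply intrinsic parameters
    return pix[:, :2] / pix[:, 2:3]    # perspective division

K = np.array([[1500.0, 0.0, 640.0],    # focal lengths and principal point (placeholders)
              [0.0, 1500.0, 400.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1000.0])
X = np.array([[0.0, 0.0, 0.0], [50.0, 20.0, 10.0]])
print(project(X, K, R, t))
```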

  20. Comparative brain morphology of Neotropical parrots (Aves, Psittaciformes) inferred from virtual 3D endocasts.

    PubMed

    Carril, Julieta; Tambussi, Claudia Patricia; Degrange, Federico Javier; Benitez Saldivar, María Juliana; Picasso, Mariana Beatriz Julieta

    2016-08-01

    Psittaciformes are a very diverse group of non-passerine birds, with advanced cognitive abilities and highly developed locomotor and feeding behaviours. Using computed tomography and three-dimensional (3D) visualization software, the endocasts of 14 extant Neotropical parrots were reconstructed, with the aim of analysing, comparing and exploring the morphology of the brain within the clade. A 3D geomorphometric analysis was performed, and the encephalization quotient (EQ) was calculated. Brain morphology character states were traced onto a Psittaciformes tree in order to facilitate interpretation of morphological traits in a phylogenetic context. Our results indicate that: (i) there are two conspicuously distinct brain morphologies, one considered walnut type (quadrangular and wider than long) and the other rounded (narrower and rostrally tapered); (ii) Psittaciformes possess a noticeable notch between hemisphaeria that divides the bulbus olfactorius; (iii) the plesiomorphic and most frequently observed characteristics of Neotropical parrots are a rostrally tapered telencephalon in dorsal view, distinctly enlarged dorsal expansion of the eminentia sagittalis and conspicuous fissura mediana; (iv) there is a positive correlation between body mass and brain volume; (v) psittacids are characterized by high EQ values that suggest high brain volumes in relation to their body masses; and (vi) the endocranial morphology of the Psittaciformes as a whole is distinctive relative to other birds. This new knowledge of brain morphology offers much potential for further insight in paleoneurological, phylogenetic and evolutionary studies. PMID:26053196

  1. Effects of 3D virtual haptics force feedback on brand personality perception: the mediating role of physical presence in advergames.

    PubMed

    Jin, Seung-A Annie

    2010-06-01

    This study gauged the effects of force feedback in the Novint Falcon haptics system on the sensory and cognitive dimensions of a virtual test-driving experience. First, in order to explore the effects of tactile stimuli with force feedback on users' sensory experience, feelings of physical presence (the extent to which virtual physical objects are experienced as actual physical objects) were measured after participants used the haptics interface. Second, to evaluate the effects of force feedback on the cognitive dimension of consumers' virtual experience, this study investigated brand personality perception. The experiment utilized the Novint Falcon haptics controller to induce immersive virtual test-driving through tactile stimuli. The author designed a two-group (haptics stimuli with force feedback versus no force feedback) comparison experiment (N = 238) by manipulating the level of force feedback. Users in the force feedback condition were exposed to tactile stimuli involving various force feedback effects (e.g., terrain effects, acceleration, and lateral forces) while test-driving a rally car. In contrast, users in the control condition test-drove the rally car using the Novint Falcon but were not given any force feedback. Results of ANOVAs indicated that (a) users exposed to force feedback felt stronger physical presence than those in the no force feedback condition, and (b) users exposed to haptics stimuli with force feedback perceived the brand personality of the car to be more rugged than those in the control condition. Managerial implications of the study for product trial in the business world are discussed. PMID:20557250

  2. A 3D immersed finite element method with non-homogeneous interface flux jump for applications in particle-in-cell simulations of plasma-lunar surface interactions

    NASA Astrophysics Data System (ADS)

    Han, Daoru; Wang, Pu; He, Xiaoming; Lin, Tao; Wang, Joseph

    2016-09-01

    Motivated by the need to handle complex boundary conditions efficiently and accurately in particle-in-cell (PIC) simulations, this paper presents a three-dimensional (3D) linear immersed finite element (IFE) method with non-homogeneous flux jump conditions for solving electrostatic field involving complex boundary conditions using structured meshes independent of the interface. This method treats an object boundary as part of the simulation domain and solves the electric field at the boundary as an interface problem. In order to resolve charging on a dielectric surface, a new 3D linear IFE basis function is designed for each interface element to capture the electric field jump on the interface. Numerical experiments are provided to demonstrate the optimal convergence rates in L2 and H1 norms of the IFE solution. This new IFE method is integrated into a PIC method for simulations involving charging of a complex dielectric surface in a plasma. A numerical study of plasma-surface interactions at the lunar terminator is presented to demonstrate the applicability of the new method.

  3. Making Web3D Less Scary: Toward Easy-to-Use Web3D e-Learning Content Development Tools for Educators

    ERIC Educational Resources Information Center

    de Byl, Penny

    2009-01-01

    Penny de Byl argues that one of the biggest challenges facing educators today is the integration of rich and immersive three-dimensional environments with existing teaching and learning materials. To empower educators with the ability to embrace emerging Web3D technologies, the Advanced Learning and Immersive Virtual Environment (ALIVE) research…

  4. a Hand-Free Solution for the Interaction in AN Immersive Virtual Environment: the Case of the Agora of Segesta

    NASA Astrophysics Data System (ADS)

    Olivito, R.; Taccola, E.; Albertini, N.

    2015-02-01

    The paper illustrates the project of an interdisciplinary team composed of archaeologists and researchers from the Scuola Normale Superiore and the University of Pisa. The synergy between these centres has recently allowed for a more articulated 3D simulation of the agora of Segesta, where archaeological excavations have brought to light the remains of a huge public building (stoa) of the Late Hellenistic period. Computer graphics and image-based modeling have been used to monitor, document and record the different phases of the excavation activity (layers, findings, wall structures) and to create a 3D model of the whole site. In order to increase the level of interaction as much as possible, all the models can be managed by an application specially designed for an immersive virtual environment (a CAVE-like system). By using a hand-tracking sensor (Leap) in a non-standard way, the application allows for completely hands-free interaction with the simulation of the agora of Segesta and the different phases of the fieldwork activities. More specifically, the operator can use simple hand gestures to activate a natural interface, scroll and visualize the perfectly overlapped models of the archaeological layers, pop up the models of single meaningful objects discovered during the excavation, and obtain all the related metadata (stored on a dedicated server), which can be visualized on external devices (e.g. tablets or monitors) without further wearable devices. All these functions are contextualized within the whole simulation of the agora, so that it is possible to verify old interpretations and propose new ones in real time, simulating within the CAVE the whole archaeological investigation, going over the different phases of the excavation more rapidly, retrieving information that could have been overlooked during the fieldwork, and verifying, even ex post, issues not correctly documented during the fieldwork. The opportunity to physically interact with the 3D model

  5. Techniques for Revealing 3d Hidden Archeological Features: Morphological Residual Models as Virtual-Polynomial Texture Maps

    NASA Astrophysics Data System (ADS)

    Pires, H.; Martínez Rubio, J.; Elorza Arana, A.

    2015-02-01

    Recent developments in 3D scanning technologies have not been accompanied by comparable developments in visualization interfaces. We are still using the same types of visual codes as when maps and drawings were made by hand. The information available in 3D scanning data sets is not being fully exploited by current visualization techniques. In this paper we present recent developments regarding the use of 3D scanning data sets for revealing invisible information from archaeological sites. These sites are affected by a common problem: decay processes, such as erosion, that never cease and endanger the persistence of the last vestiges of some peoples and cultures. Rock art engravings and epigraphic inscriptions are among the most affected by these processes because they are, by their very nature, carved into the surface of rocks that are often exposed to climatic agents. The study and interpretation of these motifs and texts is strongly conditioned by the degree of conservation of the imprints left by our ancestors. Every single detail in the remaining carvings can make a huge difference in the conclusions drawn by specialists. We have selected two case studies severely affected by erosion to present the results of ongoing work dedicated to exploring the information contained in 3D scanning data sets in new ways. A new method for depicting subtle morphological features on the surface of objects or sites has been developed. It allows human-made patterns that are still present at the surface, but invisible to the naked eye or to any other archaeological inspection technique, to be contrasted. We call it the Morphological Residual Model (MRM) because of its ability to contrast the shallowest morphological details, to which we refer as residuals, contained within the wider forms of the backdrop. Afterwards, we simulated the process of building Polynomial Texture Maps - a widespread technique that has been contributing to archaeological studies for some years - in a 3D virtual environment using the results of MRM

  6. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A realistic facial animation system driven by multiple inputs and based on a 3-D virtual head is proposed for human-machine interfaces. The system can be driven independently by video, text, and speech, and can thus interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a trade-off between computational efficiency and the high realism of 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in a particle-filtering framework, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination-ratio image, are fused to reduce the influence of lighting and person dependence in constructing the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction. PMID:25122851

  7. Simulating Navigation with Virtual 3d Geovisualizations - a Focus on Memory Related Factors

    NASA Astrophysics Data System (ADS)

    Lokka, I.; Çöltekin, A.

    2016-06-01

    The use of virtual environments (VEs) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention of training navigational memory in humans, an effective and efficient visual design is important to facilitate recall. However, it is not yet clear how much information should be included in visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information, or level of detail, the types of visual features ('elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs with respect to their function of supporting and strengthening human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations, and iii) the context in which the navigation is performed, i.e., specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.

  8. Level of Immersion in Virtual Environments Impacts the Ability to Assess and Teach Social Skills in Autism Spectrum Disorder

    PubMed Central

    Bugnariu, Nicoleta L.

    2016-01-01

    Abstract Virtual environments (VEs) may be useful for delivering social skills interventions to individuals with autism spectrum disorder (ASD). Immersive VEs provide opportunities for individuals with ASD to learn and practice skills in a controlled replicable setting. However, not all VEs are delivered using the same technology, and the level of immersion differs across settings. We group studies into low-, moderate-, and high-immersion categories by examining five aspects of immersion. In doing so, we draw conclusions regarding the influence of this technical manipulation on the efficacy of VEs as a tool for assessing and teaching social skills. We also highlight ways in which future studies can advance our understanding of how manipulating aspects of immersion may impact intervention success. PMID:26919157

  9. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

    Most genome browsers display DNA linearly, using one-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or missed entirely if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, the folding of enhancers over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction-frequency data, we have created interactive 3D models of whole-chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
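
    A minimal sketch of the final rendering step: drawing a chromosome as a 3D polyline from bead coordinates (which would already have been inferred from Hi-C interaction frequencies) and coloring a genomic feature track along it. The coordinates and the annotation track below are toy stand-ins, not data from the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# bead coordinates along the chromosome (one bead per genomic bin);
# here a toy random-walk stand-in for coordinates inferred from Hi-C
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(size=(500, 3)), axis=0)

# a per-bin annotation track (e.g. gene density or CTCF site counts), toy values
track = np.abs(np.sin(np.linspace(0, 8 * np.pi, 500)))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(*coords.T, color="lightgray", linewidth=0.8)       # chromosome backbone
sc = ax.scatter(*coords.T, c=track, cmap="viridis", s=8)   # feature intensity per bin
fig.colorbar(sc, ax=ax, label="feature intensity")
ax.set_title("Toy 3D chromosome model colored by a genomic track")
plt.show()
```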

  10. Enhancing Scientific Collaboration, Transparency, and Public Access: Utilizing the Second Life Platform to Convene a Scientific Conference in 3-D Virtual Space

    NASA Astrophysics Data System (ADS)

    McGee, B. W.

    2006-12-01

    Recent studies reveal a general mistrust of science as well as a distorted perception of the scientific method by the public at-large. Concurrently, the number of science undergraduate and graduate students is in decline. By taking advantage of emergent technologies not only for direct public outreach but also to enhance public accessibility to the science process, it may be possible to both begin a reversal of popular scientific misconceptions and to engage a new generation of scientists. The Second Life platform is a 3-D virtual world produced and operated by Linden Research, Inc., a privately owned company instituted to develop new forms of immersive entertainment. Free and downloadable to the public, Second Life offers an imbedded physics engine, streaming audio and video capability, and unlike other "multiplayer" software, the objects and inhabitants of Second Life are entirely designed and created by its users, providing an open-ended experience without the structure of a traditional video game. Already, educational institutions, virtual museums, and real-world businesses are utilizing Second Life for teleconferencing, pre-visualization, and distance education, as well as to conduct traditional business. However, the untapped potential of Second Life lies in its versatility, where the limitations of traditional scientific meeting venues do not exist, and attendees need not be restricted by prohibitive travel costs. It will be shown that the Second Life system enables scientific authors and presenters at a "virtual conference" to display figures and images at full resolution, employ audio-visual content typically not available to conference organizers, and to perform demonstrations or premier three-dimensional renderings of objects, processes, or information. An enhanced presentation like those possible with Second Life would be more engaging to non- scientists, and such an event would be accessible to the general users of Second Life, who could have an

  11. Inspiring Equal Contribution and Opportunity in a 3D Multi-User Virtual Environment: Bringing Together Men Gamers and Women Non-Gamers in Second Life[R

    ERIC Educational Resources Information Center

    deNoyelles, Aimee; Seo, Kay Kyeong-Ju

    2012-01-01

    A 3D multi-user virtual environment holds promise to support and enhance student online learning communities due to its ability to promote global synchronous interaction and collaboration, rich multisensory experience and expression, and elaborate design capabilities. Second Life[R], a multi-user virtual environment intended for adult users 18 and…

  12. Cross-Cultural Discussions in a 3D Virtual Environment and Their Affordances for Learners' Motivation and Foreign Language Discussion Skills

    ERIC Educational Resources Information Center

    Jauregi, Kristi; Kuure, Leena; Bastian, Pim; Reinhardt, Dennis; Koivisto, Tuomo

    2015-01-01

    Within the European TILA project a case study was carried out where pupils from schools in Finland and the Netherlands engaged in debating sessions using the 3D virtual world of OpenSim once a week for a period of 5 weeks. The case study had two main objectives: (1) to study the impact that the discussion tasks undertaken in a virtual environment…

  13. The Complete Virtual 3d Reconstruction of the East Pediment of the Temple of ZEUS at Olympia

    NASA Astrophysics Data System (ADS)

    Patay-Horváth, A.

    2011-09-01

    The arrangement of the five central figures of the east pediment of the temple of Zeus at Olympia has been the subject of scholarly debate since the discovery of the fragments more than a century ago. In theory, there are four substantially different arrangements, each of which has been favored by certain scholars on various aesthetic, technical and other grounds. The present project approaches this controversy in a new way, by producing a virtual 3D reconstruction of the group. Digital models of the statues were produced by scanning the original fragments and reconstructing them virtually; for this purpose an innovative new software tool (Leonar3Do) was also employed. The virtual model of the pediment surrounding the sculptures was prepared on the basis of the latest architectural studies, and the reconstructed models were then inserted into this frame in order to test the technical feasibility and aesthetic effects of the four possible arrangements. The paper gives an overview of the entire work and presents the final results, suggesting that two arrangements can be ruled out due to the limited space available in the pediment.

  14. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments

    PubMed Central

    Slater, Mel

    2009-01-01

    In this paper, I address the question as to why participants tend to respond realistically to situations and events portrayed within an immersive virtual reality system. The idea is put forward, based on the experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is ‘being there’, often called ‘presence’, the qualia of having a sensation of being in a real place. We call this place illusion (PI). Second, plausibility illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi the participant knows for sure that they are not ‘there’ and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, the overall credibility of the scenario being depicted in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality. PMID:19884149

  15. Evaluation of historical museum interior lighting system using fully immersive virtual luminous environment

    NASA Astrophysics Data System (ADS)

    Navvab, Mojtaba; Bisegna, Fabio; Gugliermetti, Franco

    2013-05-01

    The Saint Rocco Museum, a historical building in Venice, Italy, is used as a case study to explore the performance of its lighting system and the impact of visible light on viewing the large-size artworks. The transition from three-dimensional architectural rendering to three-dimensional virtual luminance mapping and visualization within a virtual environment is described as an integrated optical method for its application toward preservation of the cultural heritage of the space. Lighting simulation programs represent color as RGB triplets in a device-dependent color space such as ITU-R BT.709. A prerequisite for this is a 3D model, which can be created within this computer-aided virtual environment. The on-site measured surface luminance, chromaticity and spectral data were used as input to established real-time indirect-illumination and physically based algorithms to produce the best approximation of RGB values for generating images of the objects. Conversion of RGB to and from spectra has been a major undertaking, since an infinite number of spectra can produce the same colors defined by RGB in the program. The ability to simulate light intensity, candlepower and spectral power distributions provides an opportunity to examine the impact of color inter-reflections on historical paintings. VR offers an effective technique to quantify the impact of visible light on human visual performance under precisely controlled representations of the light spectrum that can be experienced in 3D format in a virtual environment, as well as in historical visual archives. The system can easily be expanded to include other measurements and stimuli.

  16. iVFTs - immersive virtual field trips for interactive learning about Earth's environment.

    NASA Astrophysics Data System (ADS)

    Bruce, G.; Anbar, A. D.; Semken, S. C.; Summons, R. E.; Oliver, C.; Buxner, S.

    2014-12-01

    Innovations in immersive interactive technologies are changing the way students explore Earth and its environment. State-of-the-art hardware has given developers the tools needed to capture high-resolution spherical content, 360° panoramic video, giga-pixel imagery, and unique viewpoints via unmanned aerial vehicles as they explore remote and physically challenging regions of our planet. Advanced software enables integration of these data into seamless, dynamic, immersive, interactive, content-rich, and learner-driven virtual field explorations, experienced online via HTML5. These surpass conventional online exercises that use 2-D static imagery and enable the student to engage in these virtual environments that are more like games than like lectures. Grounded in the active learning of exploration, inquiry, and application of knowledge as it is acquired, users interact non-linearly in conjunction with an intelligent tutoring system (ITS). The integration of this system allows the educational experience to be adapted to each individual student as they interact within the program. Such explorations, which we term "immersive virtual field trips" (iVFTs), are being integrated into cyber-learning allowing science teachers to take students to scientifically significant but inaccessible environments. Our team and collaborators are producing a diverse suite of freely accessible, iVFTs to teach key concepts in geology, astrobiology, ecology, and anthropology. Topics include Early Life, Biodiversity, Impact craters, Photosynthesis, Geologic Time, Stratigraphy, Tectonics, Volcanism, Surface Processes, The Rise of Oxygen, Origin of Water, Early Civilizations, Early Multicellular Organisms, and Bioarcheology. These diverse topics allow students to experience field sites all over the world, including, Grand Canyon (USA), Flinders Ranges (Australia), Shark Bay (Australia), Rainforests (Panama), Teotihuacan (Mexico), Upheaval Dome (USA), Pilbara (Australia), Mid-Atlantic Ridge

  17. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. PMID:27590974
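
    As a rough illustration (not the authors' pipeline), motion-onset ERP responses like these are often classified with regularised linear discriminant analysis on flattened channel-by-time epochs. The sketch below uses synthetic data and scikit-learn; the epoch dimensions, the injected deflection, and the cross-validation setup are assumptions for demonstration only.

```python
# Sketch: shrinkage-LDA classification of epoched EEG (attended vs. non-attended).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 64
epochs = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)
epochs[labels == 1, :, 20:40] += 0.5          # toy "mVEP" deflection on attended trials

X = epochs.reshape(n_trials, -1)              # flatten channel x time into a feature vector
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean 5-fold accuracy: {scores.mean():.2f}")
```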

  18. Behavioral compliance for dynamic versus static signs in an immersive virtual environment.

    PubMed

    Duarte, Emília; Rebelo, Francisco; Teles, Júlia; Wogalter, Michael S

    2014-09-01

    This study used an immersive virtual environment (IVE) to examine how dynamic features in signage affect behavioral compliance during a work-related task and an emergency egress. Ninety participants performed a work-related task followed by an emergency egress. Compliance with uncued and cued safety signs was assessed prior to an explosion/fire involving egress with exit signs. Although dynamic presentation produced the highest compliance, the difference between dynamic and static presentation was only statistically significant for uncued signs. Uncued signs, both static and dynamic, were effective in changing behavior compared to no/minimal signs. Findings are explained based on sign salience and on task differences. If signs must capture attention while individuals are attending to other tasks, salient (e.g., dynamic) signs are useful in benefiting compliance. This study demonstrates the potential for IVEs to serve as a useful tool in behavioral compliance research. PMID:24210840

  19. Scene-Motion Thresholds During Head Yaw for Immersive Virtual Environments

    PubMed Central

    Jerald, Jason; Whitton, Mary; Brooks, Frederick P.

    2014-01-01

    In order to better understand how scene motion is perceived in immersive virtual environments, we measured scene-motion thresholds under different conditions across three experiments. Thresholds were measured during quasi-sinusoidal head yaw, single left-to-right or right-to-left head yaw, different phases of head yaw, slow to fast head yaw, scene motion relative to head yaw, and two scene illumination levels. We found that across various conditions 1) thresholds are greater when the scene moves with head yaw (corresponding to gain < 1.0) than when the scene moves against head yaw (corresponding to gain > 1.0), and 2) thresholds increase as head motion increases. PMID:25705137
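
    Detection thresholds of the kind reported here are commonly estimated by fitting a psychometric function to proportion-detected data. The sketch below fits a cumulative Gaussian with SciPy to synthetic yes/no data; the tested gains, response proportions, and lapse rate are illustrative assumptions, not values from these experiments.

```python
# Sketch: estimate a motion-detection threshold from a psychometric fit.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(intensity, mu, sigma, lapse=0.02):
    """P(detect) as a cumulative Gaussian with a small fixed lapse rate."""
    return lapse + (1 - 2 * lapse) * norm.cdf(intensity, loc=mu, scale=sigma)

# scene-motion gains tested and proportion of "motion detected" responses (synthetic)
gains    = np.array([0.02, 0.05, 0.10, 0.20, 0.40, 0.80])
p_detect = np.array([0.05, 0.10, 0.30, 0.65, 0.90, 1.00])

(mu, sigma), _ = curve_fit(psychometric, gains, p_detect, p0=[0.2, 0.1])
print(f"estimated threshold (50% point): {mu:.3f}")
```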

  20. Global Warming and the Arctic in 3D: A Virtual Globe for Outreach

    NASA Astrophysics Data System (ADS)

    Manley, W. F.

    2006-12-01

    Virtual Globes provide a new way to capture and inform the public's interest in environmental change. As an example, a recent Google Earth presentation conveyed 'key findings' from the Arctic Climate Impact Assessment (ACIA, 2004) to middle school students during the 2006 INSTAAR/NSIDC Open House at the University of Colorado. The 20-minute demonstration to 180 eighth graders began with an introduction and a view of the Arctic from space, zooming into the North American Arctic, then to a placemark for the first key finding, 'Arctic climate is now warming rapidly and much larger changes are projected'. An embedded link then opened a custom web page, with brief explanatory text, along with an ACIA graphic illustrating the rise in Arctic temperature, global CO2 concentrations, and carbon emissions for the last millennium. The demo continued with an interactive tour of other key findings (Reduced Sea Ice, Changes for Animals, Melting Glaciers, Coastal Erosion, Changes in Vegetation, Melting Permafrost, and others). Each placemark was located somewhat arbitrarily (which may be a concern for some audiences), but the points represented the messages in a geographic sense and enabled a smooth visual tour of the northern latitudes. Each placemark was linked to custom web pages with photos and concise take-home messages. The demo ended with navigation to Colorado, then Boulder, then the middle school that the students attended, all the while speaking to implications as they live their lives locally. The demo piqued the students' curiosity, and in this way better conveyed important messages about the Arctic and climate change. The use of geospatial visualizations for outreach and education appears to be in its infancy, with much potential.

  1. Subliminal Reorientation and Repositioning in Immersive Virtual Environments using Saccadic Suppression.

    PubMed

    Bolte, Benjamin; Lappe, Markus

    2015-04-01

    Virtual reality strives to provide a user with an experience of a simulated world that feels as natural as the real world. Yet, to induce this feeling, sometimes it becomes necessary for technical reasons to deviate from a one-to-one correspondence between the real and the virtual world, and to reorient or reposition the user's viewpoint. Ideally, users should not notice the change of the viewpoint to avoid breaks in perceptual continuity. Saccades, the fast eye movements that we make in order to switch gaze from one object to another, produce a visual discontinuity on the retina, but this is not perceived because the visual system suppresses perception during saccades. As a consequence, our perception fails to detect rotations of the visual scene during saccades. We investigated whether saccadic suppression of image displacement (SSID) can be used in an immersive virtual environment (VE) to unconsciously rotate and translate the observer's viewpoint. To do this, the scene changes have to be precisely time-locked to the saccade onset. We used electrooculography (EOG) for eye movement tracking and assessed the performance of two modified eye movement classification algorithms for the challenging task of online saccade detection that is fast enough for SSID. We investigated the sensitivity of participants to translations (forward/backward) and rotations (in the transverse plane) during trans-saccadic scene changes. We found that participants were unable to detect approximately ±0.5m translations along the line of gaze and ±5° rotations in the transverse plane during saccades with an amplitude of 15°. If the user stands still, our approach exploiting SSID thus provides the means to unconsciously change the user's virtual position and/or orientation. For future research and applications, exploiting SSID has the potential to improve existing redirected walking and change blindness techniques for unlimited navigation through arbitrarily-sized VEs by real walking
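
    One simple (hypothetical) way to realise the online saccade detection this approach depends on is a velocity-threshold detector on the calibrated EOG trace, flagging an onset as soon as smoothed eye velocity crosses a fixed threshold. The sketch below is not the authors' modified classification algorithms; the threshold, smoothing window, and toy signal are assumptions.

```python
# Sketch: velocity-threshold saccade-onset detection on a calibrated EOG channel.
import numpy as np

def detect_saccade_onsets(eog_deg, fs, vel_threshold=50.0, smooth_ms=10.0):
    """Return sample indices where eye velocity first exceeds the threshold.

    eog_deg        calibrated horizontal eye position in degrees
    fs             sampling rate in Hz
    vel_threshold  onset threshold in deg/s
    """
    win = max(1, int(fs * smooth_ms / 1000.0))
    smoothed = np.convolve(eog_deg, np.ones(win) / win, mode="same")
    velocity = np.gradient(smoothed) * fs            # deg/s
    above = np.abs(velocity) > vel_threshold
    onsets = np.flatnonzero(above & ~np.roll(above, 1))   # rising edges
    return onsets

# toy signal: fixation, a 15 deg saccade-like step at 0.5 s, fixation again
fs = 500
t = np.arange(0, 1.0, 1 / fs)
pos = np.where(t < 0.5, 0.0, 15.0) + np.random.default_rng(1).normal(0, 0.1, t.size)
print(detect_saccade_onsets(pos, fs)[:3])
```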

  2. Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment

    PubMed Central

    Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905

  3. Immersive Virtual Reality Technologies as a New Platform for Science, Scholarship, and Education

    NASA Astrophysics Data System (ADS)

    Djorgovski, Stanislav G.; Hut, P.; McMillan, S.; Knop, R.; Vesperini, E.; Graham, M.; Portegies Zwart, S.; Farr, W.; Mahabal, A.; Donalek, C.; Longo, G.

    2010-01-01

    Immersive virtual reality (VR) and virtual worlds (VWs) are an emerging set of technologies which likely represent the next evolutionary step in the ways we use information technology to interact with the world of information and with other people, the roles now generally fulfilled by the Web and other common Internet applications. Currently, these technologies are mainly accessed through various VWs, e.g., the Second Life (SL), which are general platforms for a broad range of user activities. As an experiment in the utilization of these technologies for science, scholarship, education, and public outreach, we have formed the Meta-Institute for Computational Astrophysics (MICA; http://mica-vw.org), the first professional scientific organization based exclusively in VWs. The goals of MICA are: (1) Exploration, development and promotion of VWs and VR technologies for professional research in astronomy and related fields. (2) Providing and developing novel social networking venues and mechanisms for scientific collaboration and communications, including professional meetings, effective telepresence, etc. (3) Use of VWs and VR technologies for education and public outreach. (4) Exchange of ideas and joint efforts with other scientific disciplines in promoting these goals for science and scholarship in general. To this effect, we have a regular schedule of professional and public outreach events in SL, including technical seminars, workshops, journal club, collaboration meetings, public lectures, etc. We find that these technologies are already remarkably effective as a telepresence platform for scientific and scholarly discussions, meetings, etc. They can offer substantial savings of time and resources, and eliminate a lot of unnecessary travel. They are equally effective as a public outreach platform, reaching a world-wide audience. On the pure research front, we are currently exploring the use of these technologies as a venue for numerical simulations and their

  4. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran's Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures that are captured by the Kinect 3D vision system. The information of the patient movements, together with the signals obtained from the ergonometric measurement devices, is used also to supervise and to evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, that uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence in the rehabilitation process, reduce costs, and engage the patient. PMID:23827333
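
    A small illustration of the kind of movement metric such a system could derive from skeleton data: an elbow flexion angle computed from three 3D joint positions (shoulder, elbow, wrist), as returned by a Kinect-style sensor. The joint coordinates below are made up, and this is not the evaluation logic used in the described system.

```python
# Sketch: joint angle from three 3D joint positions (e.g. Kinect skeleton data).
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# hypothetical joint positions in metres
shoulder, elbow, wrist = [0.0, 1.4, 0.0], [0.25, 1.15, 0.05], [0.45, 1.05, 0.30]
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```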

  5. Pharmacophore modeling, virtual screening and 3D-QSAR studies of 5-tetrahydroquinolinylidine aminoguanidine derivatives as sodium hydrogen exchanger inhibitors.

    PubMed

    Bhatt, Hardik G; Patel, Paresh K

    2012-06-01

    Sodium hydrogen exchanger (SHE) inhibitor is one of the most important targets in treatment of myocardial ischemia. In the course of our research into new types of non-acylguanidine, SHE inhibitory activities of 5-tetrahydroquinolinylidine aminoguanidine derivatives were used to build pharmacophore and 3D-QSAR models. Genetic Algorithm Similarity Program (GASP) was used to derive a 3D pharmacophore model which was used in effective alignment of data set. Eight molecules were selected on the basis of structure diversity to build 10 different pharmacophore models. Model 1 was considered as the best model as it has highest fitness score compared to other nine models. The obtained model contained two acceptor sites, two donor atoms and one hydrophobic region. Pharmacophore modeling was followed by substructure searching and virtual screening. The best CoMFA model, representing steric and electrostatic fields, obtained for 30 training set molecules was statistically significant with cross-validated coefficient (q(2)) of 0.673 and conventional coefficient (r(2)) of 0.988. In addition to steric and electrostatic fields observed in CoMFA, CoMSIA also represents hydrophobic, hydrogen bond donor and hydrogen bond acceptor fields. CoMSIA model was also significant with cross-validated coefficient (q(2)) and conventional coefficient (r(2)) of 0.636 and 0.986, respectively. Both models were validated by an external test set of eight compounds and gave satisfactory prediction (r(pred)(2)) of 0.772 and 0.701 for CoMFA and CoMSIA models, respectively. This pharmacophore based 3D-QSAR approach provides significant insights that can be used to design novel, potent and selective SHE inhibitors. PMID:22546667
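
    The cross-validated q² quoted for CoMFA/CoMSIA models is conventionally computed as 1 − PRESS/SS under leave-one-out cross-validation of a PLS regression. The sketch below reproduces that bookkeeping on synthetic descriptor data with scikit-learn; the descriptor matrix, activities, and number of PLS components are placeholders, not the study's fields or compounds.

```python
# Sketch: leave-one-out cross-validated q^2 = 1 - PRESS/SS for a PLS model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 50))                                  # 30 molecules, 50 field descriptors
y = X[:, :3] @ np.array([1.0, -0.5, 0.8]) + rng.normal(0.0, 0.1, 30)   # toy activities

press = 0.0
for train, test in LeaveOneOut().split(X):
    model = PLSRegression(n_components=3).fit(X[train], y[train])
    pred = model.predict(X[test])[0, 0]
    press += (y[test][0] - pred) ** 2                          # prediction error sum of squares

ss = float(np.sum((y - y.mean()) ** 2))                        # total sum of squares
q2 = 1.0 - press / ss
print(f"LOO cross-validated q^2 = {q2:.3f}")
```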

  6. Sense of Presence and Atypical Social Judgments in Immersive Virtual Environments : Responses of Adolescents with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Wallace, Simon; Parsons, Sarah; Westbury, Alice; White, Katie; White, Kathy; Bailey, Anthony

    2010-01-01

    Immersive virtual environments (IVEs) are potentially powerful educational resources but their application for children with Autism Spectrum Disorder (ASD) is under researched. This study aimed to answer two research questions: (1) Do children with ASD experience IVEs in different ways to typically developing children given their cognitive,…

  7. Designing the Self: The Transformation of the Relational Self-Concept through Social Encounters in a Virtual Immersive Environment

    ERIC Educational Resources Information Center

    Knutzen, K. Brant; Kennedy, David M.

    2012-01-01

    This article describes the findings of a 3-month study on how social encounters mediated by an online Virtual Immersive Environment (VIE) impacted on the relational self-concept of adolescents. The study gathered data from two groups of students as they took an Introduction to Design and Programming class. Students in group 1 undertook course…

  8. Collaborative Science Learning in Three-Dimensional Immersive Virtual Worlds: Pre-Service Teachers' Experiences in Second Life

    ERIC Educational Resources Information Center

    Nussli, Natalie; Oh, Kevin; McCandless, Kevin

    2014-01-01

    The purpose of this mixed methods study was to help pre-service teachers experience and evaluate the potential of Second Life, a three-dimensional immersive virtual environment, for potential integration into their future teaching. By completing collaborative assignments in Second Life, nineteen pre-service general education teachers explored an…

  9. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e., various works of art produced using computers, have been published for hobby and entertainment. Activation of the brain, improvement of visual eyesight, reduction of mental stress, healing effects, etc., are said to be expected when a CGS is properly appreciated as a stereoscopic view. There is a lot of information on internet web sites concerning all aspects of stereogram history, science, social organization, various types of stereograms, and free software for generating CGSs. Generally, CGSs are classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each stereogram has advantages and disadvantages when viewing the stereogram directly with two eyes, which requires training and a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, called wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation in the case of the parallel viewing method.
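
    A minimal sketch of the pattern-shift idea behind single image stereograms (SIS): a depth map (the hidden image) modulates the horizontal repeat distance of a random-dot texture strip. The parameters below are illustrative and not tuned for comfortable viewing; this is a simplified wallpaper-style construction rather than any specific published algorithm.

```python
# Sketch: wallpaper-style single-image stereogram from a depth map.
import numpy as np

def make_sis(depth, pattern_width=80, max_shift=20, seed=0):
    """depth: 2D array in [0, 1]; returns a greyscale stereogram of the same shape."""
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    out = np.zeros((h, w))
    for yy in range(h):
        strip = rng.random(pattern_width)                  # random-dot texture strip
        for xx in range(w):
            if xx < pattern_width:
                out[yy, xx] = strip[xx]
            else:
                shift = int(depth[yy, xx] * max_shift)     # nearer surfaces -> larger shift
                out[yy, xx] = out[yy, xx - (pattern_width - shift)]
    return out

# hidden image: a raised square on a flat background
depth = np.zeros((200, 400))
depth[60:140, 150:250] = 1.0
sis = make_sis(depth)
print(sis.shape)
```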

  10. The development of a virtual 3D model of the renal corpuscle from serial histological sections for E-learning environments.

    PubMed

    Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin

    2015-01-01

    Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to the imported images of the original sections the software generates, and allows for visualization of, images of virtual sections generated in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education of microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. PMID:25808044
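
    As a rough sketch of the reconstruction step such a project involves, segmented serial sections can be stacked into a voxel volume and converted to a surface mesh with marching cubes. The code below substitutes a synthetic voxelised sphere for the registered, segmented kidney sections, and the section thickness and pixel size are placeholder values.

```python
# Sketch: stack of segmented sections -> voxel volume -> surface mesh.
import numpy as np
from skimage import measure

# synthetic stand-in for a stack of registered, segmented sections:
# a voxelised sphere playing the role of one segmented structure
zz, yy, xx = np.mgrid[0:60, 0:256, 0:256]
volume = ((((zz - 30) * 2.0) ** 2
           + ((yy - 128) * 0.25) ** 2
           + ((xx - 128) * 0.25) ** 2) < 15.0 ** 2).astype(float)

# section thickness vs. in-plane pixel size (micrometres), placeholder values
spacing = (2.0, 0.25, 0.25)

verts, faces, normals, values = measure.marching_cubes(volume, level=0.5, spacing=spacing)
print(f"mesh: {len(verts)} vertices, {len(faces)} faces")
```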

  11. Visualization and Interpretation in 3D Virtual Reality of Topographic and Geophysical Data from the Chicxulub Impact Crater

    NASA Astrophysics Data System (ADS)

    Rosen, J.; Kinsland, G. L.; Borst, C.

    2011-12-01

    We have assembled Shuttle Radar Topography Mission (SRTM) data (Borst and Kinsland, 2005), gravity data (Bedard, 1977), horizontal gravity gradient data (Hildebrand et al., 1995), magnetic data (Pilkington et al., 2000) and GPS topography data (Borst and Kinsland, 2005) from the Chicxulub Impact Crater buried on the Yucatan Peninsula of Mexico. These data sets are imaged as gridded surfaces and are all georegistered, within an interactive 3D virtual reality (3DVR) visualization and interpretation system created and maintained in the Center for Advanced Computer Studies at the University of Louisiana at Lafayette. We are able to view and interpret the data sets individually or together and to scale and move the data or to move our physical head position so as to achieve the best viewing perspective for interpretation. A feature which is especially valuable for understanding the relationships between the various data sets is our ability to "interlace" the 3D images. "Interlacing" is a technique we have developed whereby the data surfaces are moved along a common axis so that they interpenetrate. This technique leads to rapid and positive identification of spatially corresponding features in the various data sets. We present several images from the 3D system, which demonstrate spatial relationships amongst the features in the data sets. Some of the anomalies in gravity are very nearly coincident with anomalies in the magnetic data as one might suspect if the causal bodies are the same. Other gravity and magnetic anomalies are not spatially coincident indicating different causal bodies. Topographic anomalies display a strong spatial correspondence with many gravity anomalies. In some cases small gravity anomalies and topographic valleys are caused by shallow dissolution within the Tertiary cover along faults or fractures propagated upward from the buried structure. In other cases the sources of the gravity anomalies are in the more deeply buried structure from which
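
    Before datasets like these can be rendered as georegistered surfaces, scattered station readings are usually interpolated onto a common grid. The sketch below does this with SciPy's griddata on synthetic gravity values; the coordinates, values, and grid resolution are assumptions, not the Chicxulub data.

```python
# Sketch: grid scattered station data onto a common lon/lat grid for surface rendering.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
stations = rng.uniform([-90.0, 20.0], [-89.0, 21.0], size=(500, 2))   # lon, lat of stations
gravity = np.sin(stations[:, 0] * 30) + 0.3 * rng.normal(size=500)    # toy anomaly values

lon = np.linspace(-90.0, -89.0, 200)
lat = np.linspace(20.0, 21.0, 200)
grid_lon, grid_lat = np.meshgrid(lon, lat)

# cubic interpolation inside the data hull, NaN outside
grid = griddata(stations, gravity, (grid_lon, grid_lat), method="cubic")
print(grid.shape, np.nanmin(grid), np.nanmax(grid))
```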

  12. 3D virtual planning in orthognathic surgery and CAD/CAM surgical splints generation in one patient with craniofacial microsomia: a case report

    PubMed Central

    Vale, Francisco; Scherzberg, Jessica; Cavaleiro, João; Sanz, David; Caramelo, Francisco; Maló, Luísa; Marcelino, João Pedro

    2016-01-01

    Objective: In this case report, the feasibility and precision of three-dimensional (3D) virtual planning in one patient with craniofacial microsomia are tested using Nemoceph 3D-OS software (Software Nemotec SL, Madrid, Spain) to predict postoperative outcomes on hard tissue and produce CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) surgical splints. Methods: The clinical protocol consists of 3D data acquisition of the craniofacial complex by cone-beam computed tomography (CBCT) and surface scanning of the plaster dental casts. The ''virtual patient'' created underwent virtual surgery and a simulation of postoperative results on hard tissues. Surgical splints were manufactured using CAD/CAM technology in order to transfer the virtual surgical plan to the operating room. Intraoperatively, both CAD/CAM and conventional surgical splints were comparable. A second set of 3D images was obtained after surgery to acquire linear measurements and compare them with the measurements obtained when predicting postoperative results virtually. Results: A high degree of similarity was found between the two types of surgical splints, with equal fit on the dental arches. The linear measurements presented some discrepancies between the actual surgical outcomes and the predicted results from the 3D virtual simulation, but caution must be taken in the analysis of these results due to several variables. Conclusions: The reported case confirms the clinical feasibility of the described computer-assisted orthognathic surgical protocol. Further progress in the development of technologies for 3D image acquisition and improvements in software programs to simulate postoperative changes in soft tissue are required. PMID:27007767

  13. 3D Virtual Reality Applied in Tectonic Geomorphic Study of the Gombori Range of Greater Caucasus Mountains

    NASA Astrophysics Data System (ADS)

    Sukhishvili, Lasha; Javakhishvili, Zurab

    2016-04-01

    Gombori Range represents the southern part of the young Greater Caucasus Mountains and stretches from NW to SE. The range separates the Alazani and Iori basins within the eastern Georgian province of Kakheti. The active phase of Caucasian orogeny started in the Pliocene, but according to the alluvial sediments of the Gombori Range (mapped in the Soviet geologic map), we observe its uplift to be a Quaternary event. The highest peak of the Gombori Range has an absolute elevation of 1991 m, while the neighboring Alazani valley reaches only 400 m. We assume the range has a very fast uplift rate, which could have triggered reversals of stream flow direction in the Quaternary. To check these preliminary assumptions, we will use tectonic, fluvial geomorphic, and stratigraphic approaches, including paleocurrent analyses and various affordable absolute dating techniques, to detect evidence of river course reversals and to date them. For these purposes we have selected the river Turdo outcrop. The river flows northwards from the Gombori Range and, near the region's main city of Telavi, forms a continuous 30-40 m high outcrop along a 1 km section. The Turdo outcrop has very steep walls and requires special climbing skills to work on it. The goal of this particular study is to avoid the time- and resource-consuming ground survey of this steep, high, and wide outcrop and to test 3D aerial and ground-based photogrammetric modelling and analysis approaches in the initial stage of the tectonic geomorphic study. Using this type of remote sensing and virtual-lab analysis of the 3D outcrop model, we roughly delineated stratigraphic layers, selected exact locations for applying various research techniques, and planned safe and suitable climbing routes to the investigation sites.

  14. A new dynamic 3D virtual methodology for teaching the mechanics of atrial septation as seen in the human heart.

    PubMed

    Schleich, Jean-Marc; Dillenseger, Jean-Louis; Houyel, Lucile; Almange, Claude; Anderson, Robert H

    2009-01-01

    Learning embryology remains difficult, since it requires understanding of many complex phenomena. The temporal evolution of developmental events has classically been illustrated using cartoons, which create difficulty in linking spatial and temporal aspects, such correlation being the keystone of descriptive embryology. We synthesized the bibliographic data from recent studies of atrial septal development. On the basis of this synthesis, consensus on the stages of atrial septation as seen in the human heart has been reached by a group of experts in cardiac embryology and pediatric cardiology. This has permitted the preparation of three-dimensional (3D) computer graphic objects for the anatomical components involved in the different stages of normal human atrial septation. We have provided a virtual guide to the process of normal atrial septation, the animation providing an appreciation of the temporal and morphologic events necessary to separate the systemic and pulmonary venous returns. We have shown that our animations of normal human atrial septation increase significantly the teaching of the complex developmental processes involved, and provide a new dynamic for the process of learning. PMID:19363807

  15. Improving the Sequential Time Perception of Teenagers with Mild to Moderate Mental Retardation with 3D Immersive Virtual Reality (IVR)

    ERIC Educational Resources Information Center

    Passig, David

    2009-01-01

    Children with mental retardation have pronounced difficulties in using cognitive strategies and comprehending abstract concepts--among them, the concept of sequential time (Van-Handel, Swaab, De-Vries, & Jongmans, 2007). The perception of sequential time is generally tested by using scenarios presenting a continuum of actions. The goal of this…

  16. Incorporating immersive virtual environments in health promotion campaigns: a construal level theory approach.

    PubMed

    Ahn, Sun Joo Grace

    2015-01-01

    In immersive virtual environments (IVEs), users may observe negative consequences of a risky health behavior in a personally involving way via digital simulations. In the context of an ongoing health promotion campaign, IVEs coupled with pamphlets are proposed as a novel messaging strategy to heighten personal relevance and involvement with the issue of soft-drink consumption and obesity, as well as perceptions that the risk is proximal and imminent. The framework of construal level theory guided the design of a 2 (tailoring: other vs. self) × 2 (medium: pamphlet only vs. pamphlet with IVEs) between-subjects experiment to test the efficacy in reducing the consumption of soft drinks over 1 week. Immediately following exposure, tailoring the message to the self (vs. other) seemed to be effective in reducing intentions to consume soft drinks. The effect of tailoring dissipated after 1 week, and measures of actual soft-drink consumption 1 week following experimental treatments demonstrated that coupling IVEs with the pamphlet was more effective. Behavioral intention was a significant predictor of actual behavior, but underlying mechanisms driving intentions and actual behavior were distinct. Results prescribed a messaging strategy that incorporates both tailoring and coupling IVEs with traditional media to increase behavioral changes over time. PMID:24991725

  17. The effect of visual and interaction fidelity on spatial cognition in immersive virtual environments.

    PubMed

    Mania, Katerina; Wooldridge, Dave; Coxon, Matthew; Robinson, Andrew

    2006-01-01

    Accuracy of memory performance per se is an imperfect reflection of the cognitive activity (awareness states) that underlies performance in memory tasks. The aim of this research is to investigate the effect of varied visual and interaction fidelity of immersive virtual environments on memory awareness states. A between groups experiment was carried out to explore the effect of rendering quality on location-based recognition memory for objects and associated states of awareness. The experimental space, consisting of two interconnected rooms, was rendered either flat-shaded or using radiosity rendering. The computer graphics simulations were displayed on a stereo head-tracked Head Mounted Display. Participants completed a recognition memory task after exposure to the experimental space and reported one of four states of awareness following object recognition. These reflected the level of visual mental imagery involved during retrieval, the familiarity of the recollection, and also included guesses. Experimental results revealed variations in the distribution of participants' awareness states across conditions while memory performance failed to reveal any. Interestingly, results revealed a higher proportion of recollections associated with mental imagery in the flat-shaded condition. These findings comply with similar effects revealed in two earlier studies summarized here, which demonstrated that the less "naturalistic" interaction interface or interface of low interaction fidelity provoked a higher proportion of recognitions based on visual mental images. PMID:16640253

  18. A Learner-Centered Approach for Training Science Teachers through Virtual Reality and 3D Visualization Technologies: Practical Experience for Sharing

    ERIC Educational Resources Information Center

    Yeung, Yau-Yuen

    2004-01-01

    This paper presentation will report on how some science educators at the Science Department of The Hong Kong Institute of Education have successfully employed an array of innovative learning media such as three-dimensional (3D) and virtual reality (VR) technologies to create seven sets of resource kits, most of which are being placed on the…

  19. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    ERIC Educational Resources Information Center

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Virtual Molecular Dynamic) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…

  20. Bystander responses to a violent incident in an immersive virtual environment.

    PubMed

    Slater, Mel; Rovira, Aitor; Southern, Richard; Swapp, David; Zhang, Jian J; Campbell, Claire; Levine, Mark

    2013-01-01

    Under what conditions will a bystander intervene to try to stop a violent attack by one person on another? It is generally believed that the greater the size of the crowd of bystanders, the less the chance that any of them will intervene. A complementary model is that social identity is critical as an explanatory variable. For example, when the bystander shares common social identity with the victim the probability of intervention is enhanced, other things being equal. However, it is generally not possible to study such hypotheses experimentally for practical and ethical reasons. Here we show that an experiment that depicts a violent incident at life-size in immersive virtual reality lends support to the social identity explanation. 40 male supporters of Arsenal Football Club in England were recruited for a two-factor between-groups experiment: the victim was either an Arsenal supporter or not (in-group/out-group), and looked towards the participant for help or not during the confrontation. The response variables were the numbers of verbal and physical interventions by the participant during the violent argument. The number of physical interventions had a significantly greater mean in the in-group condition compared to the out-group. The more that participants perceived that the Victim was looking to them for help the greater the number of interventions in the in-group but not in the out-group. These results are supported by standard statistical analysis of variance, with more detailed findings obtained by a symbolic regression procedure based on genetic programming. Verbal interventions made during their experience, and analysis of post-experiment interview data suggest that in-group members were more prone to confrontational intervention compared to the out-group who were more prone to make statements to try to diffuse the situation. PMID:23300991

  1. Effects of virtual reality immersion and audiovisual distraction techniques for patients with pruritus

    PubMed Central

    Leibovici, Vera; Magora, Florella; Cohen, Sarale; Ingber, Arieh

    2009-01-01

    BACKGROUND: Virtual reality immersion (VRI), an advanced computer-generated technique, decreased subjective reports of pain in experimental and procedural medical therapies. Furthermore, VRI significantly reduced pain-related brain activity as measured by functional magnetic resonance imaging. Resemblance between anatomical and neuroendocrine pathways of pain and pruritus may prove VRI to be a suitable adjunct for basic and clinical studies of the complex aspects of pruritus. OBJECTIVES: To compare effects of VRI with audiovisual distraction (AVD) techniques for attenuation of pruritus in patients with atopic dermatitis and psoriasis vulgaris. METHODS: Twenty-four patients suffering from chronic pruritus – 16 due to atopic dermatitis and eight due to psoriasis vulgaris – were randomly assigned to play an interactive computer game using a special visor or a computer screen. Pruritus intensity was self-rated before, during and 10 min after exposure using a visual analogue scale ranging from 0 to 10. The interviewer rated observed scratching on a three-point scale during each distraction program. RESULTS: Student’s t tests were significant for reduction of pruritus intensity before and during VRI and AVD (P=0.0002 and P=0.01, respectively) and were significant only between ratings before and after VRI (P=0.017). Scratching was mostly absent or mild during both programs. CONCLUSIONS: VRI and AVD techniques demonstrated the ability to diminish itching sensations temporarily. Further studies on the immediate and late effects of interactive computer distraction techniques to interrupt itching episodes will open potential paths for future pruritus research. PMID:19714267

  2. Development of an immersive virtual reality head-mounted display with high performance.

    PubMed

    Wang, Yunqi; Liu, Weiqi; Meng, Xiangxiang; Fu, Hanyi; Zhang, Daliang; Kang, Yusi; Feng, Rui; Wei, Zhonglun; Zhu, Xiuqing; Jiang, Guohua

    2016-09-01

    To resolve the contradiction between large field of view and high resolution in immersive virtual reality (VR) head-mounted displays (HMDs), an HMD monocular optical system with a large field of view and high resolution was designed. The system was fabricated by adopting aspheric technology with CNC grinding and a high-resolution LCD as the image source. With this monocular optical system, an HMD binocular optical system with a wide-range continuously adjustable interpupillary distance was achieved in the form of partially overlapping fields of view (FOV) combined with a screw adjustment mechanism. A fast image processor-centered LCD driver circuit and an image preprocessing system were also built to address binocular vision inconsistency in the partially overlapping FOV binocular optical system. The distortions of the HMD optical system with a large field of view were measured. Meanwhile, the optical distortions in the display and the trapezoidal distortions introduced during image processing were corrected by a calibration model for reverse rotations and translations. A high-performance not-fully-transparent VR HMD device with high resolution (1920×1080) and large FOV [141.6°(H)×73.08°(V)] was developed. The full field-of-view average value of angular resolution is 18.6  pixels/degree. With the device, high-quality VR simulations can be completed under various scenarios, and the device can be utilized for simulated trainings in aeronautics, astronautics, and other fields with corresponding platforms. The developed device has positive practical significance. PMID:27607272
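
    HMD optical distortion of the kind corrected here is often handled by pre-warping rendered images with a radial (Brown-Conrady style) model. The sketch below applies such a pre-distortion to normalised image coordinates; the coefficients are placeholders rather than the calibration values measured for this device.

```python
# Sketch: radial pre-distortion of normalised image coordinates for HMD optics.
import numpy as np

def predistort(xy, k1=-0.22, k2=0.05):
    """xy: (N, 2) normalised coordinates centred on the optical axis."""
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)      # squared radial distance
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2             # radial distortion polynomial
    return xy * scale

# warp a small grid of sample points as a renderer would before display
u, v = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
points = np.column_stack([u.ravel(), v.ravel()])
print(predistort(points)[:3])
```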

  3. Bystander Responses to a Violent Incident in an Immersive Virtual Environment

    PubMed Central

    Slater, Mel; Rovira, Aitor; Southern, Richard; Swapp, David; Zhang, Jian J.; Campbell, Claire; Levine, Mark

    2013-01-01

    Under what conditions will a bystander intervene to try to stop a violent attack by one person on another? It is generally believed that the greater the size of the crowd of bystanders, the less the chance that any of them will intervene. A complementary model is that social identity is critical as an explanatory variable. For example, when the bystander shares common social identity with the victim the probability of intervention is enhanced, other things being equal. However, it is generally not possible to study such hypotheses experimentally for practical and ethical reasons. Here we show that an experiment that depicts a violent incident at life-size in immersive virtual reality lends support to the social identity explanation. 40 male supporters of Arsenal Football Club in England were recruited for a two-factor between-groups experiment: the victim was either an Arsenal supporter or not (in-group/out-group), and looked towards the participant for help or not during the confrontation. The response variables were the numbers of verbal and physical interventions by the participant during the violent argument. The number of physical interventions had a significantly greater mean in the in-group condition compared to the out-group. The more that participants perceived that the Victim was looking to them for help the greater the number of interventions in the in-group but not in the out-group. These results are supported by standard statistical analysis of variance, with more detailed findings obtained by a symbolic regression procedure based on genetic programming. Verbal interventions made during their experience, and analysis of post-experiment interview data suggest that in-group members were more prone to confrontational intervention compared to the out-group who were more prone to make statements to try to diffuse the situation. PMID:23300991

  4. Feasibility of an Immersive Virtual Reality Intervention for Hospitalized Patients: An Observational Cohort Study

    PubMed Central

    2016-01-01

    Background Virtual reality (VR) offers immersive, realistic, three-dimensional experiences that “transport” users to novel environments. Because VR is effective for acute pain and anxiety, it may have benefits for hospitalized patients; however, there are few reports using VR in this setting. Objective The aim was to evaluate the acceptability and feasibility of VR in a diverse cohort of hospitalized patients. Methods We assessed the acceptability and feasibility of VR in a cohort of patients admitted to an inpatient hospitalist service over a 4-month period. We excluded patients with motion sickness, stroke, seizure, dementia, nausea, and in isolation. Eligible patients viewed VR experiences (eg, ocean exploration; Cirque du Soleil; tour of Iceland) with Samsung Gear VR goggles. We then conducted semistructured patient interview and performed statistical testing to compare patients willing versus unwilling to use VR. Results We evaluated 510 patients; 423 were excluded and 57 refused to participate, leaving 30 participants. Patients willing versus unwilling to use VR were younger (mean 49.1, SD 17.4 years vs mean 60.2, SD 17.7 years; P=.01); there were no differences by sex, race, or ethnicity. Among users, most reported a positive experience and indicated that VR could improve pain and anxiety, although many felt the goggles were uncomfortable. Conclusions Most inpatient users of VR described the experience as pleasant and capable of reducing pain and anxiety. However, few hospitalized patients in this “real-world” series were both eligible and willing to use VR. Consistent with the “digital divide” for emerging technologies, younger patients were more willing to participate. Future research should evaluate the impact of VR on clinical and resource outcomes. ClinicalTrial Clinicaltrials.gov NCT02456987; https://clinicaltrials.gov/ct2/show/NCT02456987 (Archived by WebCite at http://www.webcitation.org/6iFIMRNh3) PMID:27349654

  5. Spatial awareness in immersive virtual environments revealed in open-loop walking

    NASA Astrophysics Data System (ADS)

    Turano, Kathleen A.; Chaudhury, Sidhartha

    2005-03-01

    People are able to walk without vision to previously viewed targets in the real world. This ability to update one's position in space has been attributed to a path integration system that uses internally generated self-motion signals together with the perceived object-to-self distance of the target. In a previous study using an immersive virtual environment (VE), we found that many subjects were unable to walk without vision to a previously viewed target located 4 m away. Their walking paths were influenced by the room structure that varied trial to trial. In this study we investigated whether the phenomenon is specific to a VE by testing subjects in a real world and a VE. The real world was viewed with field-restricting goggles and via cameras using the same head-mounted display as in the VE. The results showed that only in the VE were walking paths influenced by the room structure. Women were more affected than men, and the effect decreased over trials and after subjects performed the task in the real world. The results also showed that a brief (<0.5 s) exposure to the visual scene during self-motion was sufficient to reduce the influence of the room structure on walking paths. The results are consistent with the idea that without visual experience within the VE, the path integration system is unable to effectively update one's spatial position. As a result, people rely on other cues to define their position in space. Women, unlike men, choose to use visual cues about environmental structure to reorient.

  6. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  7. The use of a low-cost visible light 3D scanner to create virtual reality environment models of actors and objects

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    A low-cost 3D scanner has been developed with a parts cost of approximately USD $5,000. This scanner uses visible light sensing to capture both structural as well as texture and color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process (which takes under a minute for a single scan), which can be repeated to collect multiple positions, if needed for actor model creation. The efficacy of visible light versus other scanner types is also discussed.

  8. A Second Chance at Health: How a 3D Virtual World Can Improve Health Self-Efficacy for Weight Loss Management Among Adults.

    PubMed

    Behm-Morawitz, Elizabeth; Lewallen, Jennifer; Choi, Grace

    2016-02-01

    Health self-efficacy, or the beliefs in one's capabilities to perform health behaviors, is a significant factor in eliciting health behavior change, such as weight loss. Research has demonstrated that virtual embodiment has the potential to alter one's psychology and physicality, particularly in health contexts; however, little is known about the impacts embodiment in a virtual world has on health self-efficacy. The present research is a randomized controlled trial (N = 90) examining the effectiveness of virtual embodiment and play in a social virtual world (Second Life [SL]) for increasing health self-efficacy (exercise and nutrition efficacy) among overweight adults. Participants were randomly assigned to a 3D social virtual world (avatar virtual interaction experimental condition), 2D social networking site (no avatar virtual interaction control condition), or no intervention (no virtual interaction control condition). The findings of this study provide initial evidence for the use of SL to improve exercise efficacy and to support weight loss. Results also suggest that individuals who have higher self-presence with their avatar reap more benefits. Finally, quantitative findings are triangulated with qualitative data to increase confidence in the results and provide richer insight into the perceived effectiveness and limitations of SL for meeting weight loss goals. Themes resulting from the qualitative analysis indicate that participation in SL can improve motivation and efficacy to try new physical activities; however, individuals who have a dislike for video games may not be benefitted by avatar-based virtual interventions. Implications for research on the transformative potential of virtual embodiment and self-presence in general are discussed. PMID:26882324

  9. Coming down to Earth: Helping Teachers Use 3D Virtual Worlds in Across-Spaces Learning Situations

    ERIC Educational Resources Information Center

    Muñoz-Cristóbal, Juan A.; Prieto, Luis P.; Asensio-Pérez, Juan I.; Martínez-Monés, Alejandra; Jorrín-Abellán, Iván M.; Dimitriadis, Yannis

    2015-01-01

    Different approaches have explored how to provide seamless learning across multiple ICT-enabled physical and virtual spaces, including three-dimensional virtual worlds (3DVW). However, these approaches present limitations that may reduce their acceptance in authentic educational practice: The difficulties of authoring and sharing teacher-created…

  10. Analysis for Clinical Effect of Virtual Windowing and Poking Reduction Treatment for Schatzker III Tibial Plateau Fracture Based on 3D CT Data

    PubMed Central

    Zhang, Huafeng; Li, Zhijun; Xu, Qian; Zhang, Yuan; Xu, Ke; Ma, Xinlong

    2015-01-01

    Objective. To explore the applications of preoperative planning and virtual surgery, including surgical windowing and elevating reduction, and to determine the clinical effects of this technology on the treatment of Schatzker type III tibial plateau fractures. Methods. 32 patients with Schatzker type III tibial plateau fractures were randomised upon their admission to the hospital using a sealed envelope method. Fourteen were treated with preoperative virtual design and assisted operation (virtual group) and 18 with direct open reduction and internal fixation (control group). Results. All patients achieved primary incision healing. Compared with the control group, the virtual group showed significant advantages in operative time, incision length, and blood loss (P < 0.001). The virtual surgery was consistent with the actual surgery. Conclusion. The virtual group was better than the control group in the treatment of Schatzker type III tibial plateau fractures, due to shorter operative time, smaller incision length, and lower blood loss. The reconstructed 3D fracture model could be used to preoperatively determine the surgical windowing and elevating reduction method and to simulate the operation for Schatzker type III tibial plateau fractures. PMID:25767804

  11. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients.

    PubMed

    Lledó, Luis D; Díez, Jorge A; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J; Sabater-Navarro, José M; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to obtain a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. Alternatively, 2D virtual environments are used to represent the tasks with a low degree of realism using bidimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in patterns of kinematic movements when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of visualization of the virtual environment: 2D and 3D. Nine post-stroke patients participated in the study, receiving a virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Various parameters, such as maximum speed, reaction time, path length, and initial movement, were analyzed from the data acquired objectively by the robotic device to evaluate the influence of the task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy. This fact suggests that the patients' motor recovery increased. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were higher using the 3D task. Regarding the success rates
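
    The kinematic measures compared in this record (path length, peak speed, reaction time) can be computed directly from a sampled end-effector trajectory, as in the rough sketch below. The trajectory is synthetic and the speed-onset threshold is an illustrative assumption, not the processing used with the PUPArm robot.

```python
# Sketch: basic kinematic measures from a sampled planar reaching trajectory.
import numpy as np

def kinematics(positions, fs, speed_onset=0.05):
    """positions: (N, 2) planar samples in metres; fs: sampling rate in Hz."""
    steps = np.diff(positions, axis=0)
    path_length = float(np.sum(np.linalg.norm(steps, axis=1)))
    speed = np.linalg.norm(steps, axis=1) * fs               # m/s
    peak_speed = float(speed.max())
    onset_idx = int(np.argmax(speed > speed_onset))          # first sample above threshold
    reaction_time = onset_idx / fs                           # relative to recording start
    return path_length, peak_speed, reaction_time

# synthetic 0.3 m reach starting 0.4 s after the go cue
fs = 100
t = np.linspace(0, 2, 2 * fs)
traj = np.column_stack([0.3 * (1 - np.cos(np.pi * np.clip(t - 0.4, 0, 1))) / 2,
                        np.zeros_like(t)])
print(kinematics(traj, fs))
```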

  12. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients

    PubMed Central

    Lledó, Luis D.; Díez, Jorge A.; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J.; Sabater-Navarro, José M.; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to obtain a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. Alternatively, 2D virtual environments are used to represent the tasks with a low degree of realism using bidimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in patterns of kinematic movements when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of visualization of the virtual environment: 2D and 3D. Nine post-stroke patients participated in the study, receiving a virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Various parameters, such as maximum speed, reaction time, path length, and initial movement, were analyzed from the data acquired objectively by the robotic device to evaluate the influence of the task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy. This fact suggests that the patients' motor recovery increased. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were higher using the 3D task. Regarding the success rates

  13. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries

    PubMed Central

    Ge, Liang; Sotiropoulos, Fotis

    2008-01-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus. PMID:19194533
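
    To illustrate the Jacobian-free Newton-Krylov solver class mentioned here, the sketch below solves a small 1D nonlinear diffusion problem with SciPy's newton_krylov, which approximates Jacobian-vector products by finite differences. This is only an illustration of the solver idea, not the authors' curvilinear, staggered-grid Navier-Stokes discretisation.

```python
# Sketch: Jacobian-free Newton-Krylov solve of a small nonlinear boundary-value problem.
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    """F(u) = 0 for -u'' + u**3 = 1 with homogeneous Dirichlet boundaries."""
    u_pad = np.concatenate(([0.0], u, [0.0]))
    lap = (u_pad[:-2] - 2.0 * u_pad[1:-1] + u_pad[2:]) / h**2
    return -lap + u**3 - 1.0

u0 = np.zeros(n)                                   # initial guess
u = newton_krylov(residual, u0, method="lgmres", verbose=False)
print(f"max |F(u)| after solve: {np.abs(residual(u)).max():.2e}")
```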

  14. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries.

    PubMed

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus. PMID:19194533

  15. The Effect of the Use of the 3-D Multi-User Virtual Environment "Second Life" on Student Motivation and Language Proficiency in Courses of Spanish as a Foreign Language

    ERIC Educational Resources Information Center

    Pares-Toral, Maria T.

    2013-01-01

    The ever increasing popularity of virtual worlds, also known as 3-D multi-user virtual environments (MUVEs) or simply virtual worlds provides language instructors with a new tool they can exploit in their courses. For now, "Second Life" is one of the most popular MUVEs used for teaching and learning, and although "Second Life"…

  16. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military areas and so on. However, most technologies provide the 3D display in front of screens that are parallel with the walls, and the sense of immersion is decreased. To get a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the public focus plane, and the cameras' optical axes should be offset toward the center of the public focus plane in both the vertical and the horizontal direction. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system. Virtual cameras can be used to simulate the shooting method of multi-view, ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the position of the observer's eyes in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter setting of the virtual cameras. The Near Clip Plane parameter setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
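
    The offset perspective projection described here corresponds to an asymmetric (off-axis) view frustum. The sketch below builds such a projection matrix in generic OpenGL-style form from extents measured on the near clip plane; it is an illustrative reconstruction, not the authors' D3D/OpenGL implementation, and the frustum dimensions are made-up values.

        import numpy as np

        def off_axis_projection(left, right, bottom, top, near, far):
            """OpenGL-style asymmetric frustum matrix (column-vector convention)."""
            m = np.zeros((4, 4))
            m[0, 0] = 2.0 * near / (right - left)
            m[1, 1] = 2.0 * near / (top - bottom)
            m[0, 2] = (right + left) / (right - left)
            m[1, 2] = (top + bottom) / (top - bottom)
            m[2, 2] = -(far + near) / (far - near)
            m[2, 3] = -2.0 * far * near / (far - near)
            m[3, 2] = -1.0
            return m

        # Shifting a symmetric frustum sideways on the near plane models a camera
        # whose optical axis is offset toward the center of the focus plane.
        w, h, near, far, shift = 0.4, 0.3, 0.1, 100.0, 0.05
        P = off_axis_projection(-w / 2 + shift, w / 2 + shift, -h / 2, h / 2, near, far)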

  17. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  18. Towards photorealistic and immersive virtual-reality environments for simulated prosthetic vision: integrating recent breakthroughs in consumer hardware and software.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J

    2014-01-01

    Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures for task performance. Kinect motion tracking resulted in lower performance as compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity to established input methods. PMID:25570522
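
    A common way to render the simulated artificial vision in SPV work is to reduce each camera frame to a coarse grid of phosphenes. The sketch below is a generic illustration of that idea (block-averaged, quantized brightness per simulated electrode); the grid size and brightness levels are assumed values, and this is not the rendering used in the study above.

        import numpy as np

        def phosphene_map(frame, rows=16, cols=16, levels=8):
            """Reduce a grayscale frame (2D array scaled to [0, 1]) to a
            rows x cols grid of quantized phosphene brightness values."""
            h, w = frame.shape
            ys = np.linspace(0, h, rows + 1, dtype=int)
            xs = np.linspace(0, w, cols + 1, dtype=int)
            grid = np.empty((rows, cols))
            for i in range(rows):
                for j in range(cols):
                    grid[i, j] = frame[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
            # Quantize to a few brightness levels as a crude model of the
            # limited dynamic range of electrically evoked percepts.
            return np.round(grid * (levels - 1)) / (levels - 1)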

  19. Literacy in Virtual Worlds

    ERIC Educational Resources Information Center

    Merchant, Guy

    2009-01-01

    Introducing new digital literacies into classroom settings is an important and challenging task, and one that is encouraged by both policy-makers and educators. This paper draws on a case study of a 3D virtual world which aimed to engage and motivate primary school children in an immersive and literacy-rich on-line experience. Planning decisions,…

  20. The Viability of Virtual Worlds in Higher Education: Can Creativity Thrive outside the Traditional Classroom Environment?

    ERIC Educational Resources Information Center

    Bradford, Linda M.

    2012-01-01

    In spite of the growing popularity of virtual worlds for gaming, recreation, and education, few studies have explored the efficacy of 3D immersive virtual worlds in post-secondary instruction; even fewer discuss the ability of virtual worlds to help young adults develop creative thinking. This study investigated the effect of virtual world…

  1. Addition of 3D scene attributes to a virtual landscape of Al-Madinah Al-Munwwarah in Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alshammari, Saleh; Hayes, Ladson W.

    2003-03-01

    A 3-dimensional virtual landscape has been produced of Al-Madinah Al-Munwwarah in Saudi Arabia. A Triangular Irregular Network (TIN) interpolation method has been used to create a digital elevation model (DEM) from digital topographic maps at 1:1000 scale. High resolution aerial photography has been merged with satellite imagery to drape over the DEM. The resultant DEM and fused overlay images have been imported into Internet Space Builder software in order to add several attributes to the scene and to create an interactive virtual reality modelling language (VRML) model to support walk-throughs of the scene.
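
    A TIN interpolation of scattered spot heights amounts to piecewise-linear interpolation over a Delaunay triangulation, which SciPy exposes directly. The snippet below shows a generic way to rasterize such a DEM; the point coordinates, heights and grid spacing are placeholder values, not the Al-Madinah source data.

        import numpy as np
        from scipy.interpolate import LinearNDInterpolator

        # Scattered (x, y) spot heights, e.g. digitized from topographic maps -- placeholders.
        points = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                           [100.0, 100.0], [50.0, 60.0]])
        heights = np.array([610.0, 612.5, 615.0, 611.0, 618.2])

        # LinearNDInterpolator triangulates the points (a Delaunay TIN) and
        # interpolates linearly inside each triangle.
        tin = LinearNDInterpolator(points, heights)

        # Rasterize the DEM on a regular grid so imagery can be draped over it.
        gx, gy = np.meshgrid(np.arange(0.0, 101.0, 5.0), np.arange(0.0, 101.0, 5.0))
        dem = tin(gx, gy)   # NaN outside the convex hull of the input points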

  2. Changelings and Shape Shifters? Identity Play and Pedagogical Positioning of Staff in Immersive Virtual Worlds

    ERIC Educational Resources Information Center

    Savin-Baden, Maggi

    2010-01-01

    This paper presents a study that used narrative inquiry to explore staff experiences of learning and teaching in immersive worlds. The findings introduced issues relating to identity play, the relationship between pedagogy and play and the ways in which learning, play and fun were managed (or not). At the same time there was a sense of imposed or…

  3. Enabling immersive simulation.

    SciTech Connect

    McCoy, Josh; Mateas, Michael; Hart, Derek H.; Whetzel, Jonathan; Basilico, Justin Derrick; Glickman, Matthew R.; Abbott, Robert G.

    2009-02-01

    The object of the 'Enabling Immersive Simulation for Complex Systems Analysis and Training' LDRD has been to research, design, and engineer a capability to develop simulations which (1) provide a rich, immersive interface for participation by real humans (exploiting existing high-performance game-engine technology wherever possible), and (2) can leverage Sandia's substantial investment in high-fidelity physical and cognitive models implemented in the Umbra simulation framework. We report here on these efforts. First, we describe the integration of Sandia's Umbra modular simulation framework with the open-source Delta3D game engine. Next, we report on Umbra's integration with Sandia's Cognitive Foundry, specifically to provide for learning behaviors for 'virtual teammates' directly from observed human behavior. Finally, we describe the integration of Delta3D with the ABL behavior engine, and report on research into establishing the theoretical framework that will be required to make use of tools like ABL to scale up to increasingly rich and realistic virtual characters.

  4. Transforming Clinical Imaging and 3D Data for Virtual Reality Learning Objects: HTML5 and Mobile Devices Implementation

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Nieder, Gary L.

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…

  5. Implementing Advanced Characteristics of X3D Collaborative Virtual Environments for Supporting e-Learning: The Case of EVE Platform

    ERIC Educational Resources Information Center

    Bouras, Christos; Triglianos, Vasileios; Tsiatsos, Thrasyvoulos

    2014-01-01

    Three dimensional Collaborative Virtual Environments are a powerful form of collaborative telecommunication applications, enabling the users to share a common three-dimensional space and interact with each other as well as with the environment surrounding them, in order to collaboratively solve problems or aid learning processes. Such an…

  6. Caring in the Dynamics of Design and Languaging: Exploring Second Language Learning in 3D Virtual Spaces

    ERIC Educational Resources Information Center

    Zheng, Dongping

    2012-01-01

    This study provides concrete evidence of ecological, dialogical views of languaging within the dynamics of coordination and cooperation in a virtual world. Beginning level second language learners of Chinese engaged in cooperative activities designed to provide them opportunities to refine linguistic actions by way of caring for others, for the…

  7. i-BRUSH: a gaze-contingent virtual paintbrush for dense 3D reconstruction in robotic assisted surgery.

    PubMed

    Visentini-Scarzanella, Marco; Mylonas, George P; Stoyanov, Danail; Yang, Guang-Zhong

    2009-01-01

    With increasing demand on intra-operative navigation and motion compensation during robotic assisted minimally invasive surgery, real-time 3D deformation recovery remains a central problem. Currently the majority of existing methods rely on salient features, where the inherent paucity of distinctive landmarks implies either a semi-dense reconstruction or the use of strong geometrical constraints. In this study, we propose a gaze-contingent depth reconstruction scheme by integrating human perception with semi-dense stereo and p-q based shading information. Depth inference is carried out in real-time through a novel application of Bayesian chains without smoothness priors. The practical value of the scheme is highlighted by detailed validation using a beating heart phantom model with known geometry to verify the performance of gaze-contingent 3D surface reconstruction and deformation recovery. PMID:20426007

  8. Extending Body Space in Immersive Virtual Reality: A Very Long Arm Illusion

    PubMed Central

    Kilteni, Konstantina; Normand, Jean-Marie; Sanchez-Vives, Maria V.; Slater, Mel

    2012-01-01

    Recent studies have shown that a fake body part can be incorporated into human body representation through synchronous multisensory stimulation on the fake and corresponding real body part – the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, and there was no significant difference between these conditions. Additionally, participants experienced ownership over the virtual arm up to three times the length of the real one, and less strongly at four times the length. The illusion did decline, however, with the length of the virtual arm. In the C2–C4 conditions although a measure of proprioceptive drift positively correlated with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms between ownership and drift. Overall, these findings extend and enrich previous results that multisensory and sensorimotor information can reconstruct our perception of the body shape, size and symmetry even when this is not consistent with normal body proportions. PMID:22829891

  9. Foreign Language Vocabulary Development through Activities in an Online 3D Environment

    ERIC Educational Resources Information Center

    Milton, James; Jonsen, Sunniva; Hirst, Steven; Lindenburn, Sharn

    2012-01-01

    On-line virtual 3D worlds offer the opportunity for users to interact in real time with native speakers of the language they are learning. In principle, this ought to be of great benefit to learners, mimicking the opportunity for immersion that real-life travel to a foreign country offers. We have very little research to show whether this is…

  10. Building Analysis for Urban Energy Planning Using Key Indicators on Virtual 3d City Models - the Energy Atlas of Berlin

    NASA Astrophysics Data System (ADS)

    Krüger, A.; Kolbe, T. H.

    2012-07-01

    In the context of increasing greenhouse gas emissions and global demographic change with the simultaneous trend to urbanization, it is a big challenge for cities around the world to perform modifications in the energy supply chain and building characteristics resulting in reduced energy consumption and carbon dioxide mitigation. Sound knowledge of energy resource demand and supply including its spatial distribution within urban areas is of great importance for planning strategies addressing greater energy efficiency. The understanding of the city as a complex energy system affects several areas of urban living, e.g. energy supply, urban texture, human lifestyle, and climate protection. With the growing availability of 3D city models around the world based on the standard language and format CityGML, energy system modelling, analysis and simulation can be incorporated into these models. Both domains will profit from that interaction by bringing together official and accurate building models including building geometries, semantics and locations forming a realistic image of the urban structure with systemic energy simulation models. A holistic view on the impacts of energy planning scenarios can be modelled and analyzed including side effects on urban texture and human lifestyle. This paper focuses on the identification, classification, and integration of energy-related key indicators of buildings and neighbourhoods within 3D building models. Consequent application of 3D city models conforming to CityGML serves the purpose of deriving indicators for this topic. These will be set into the context of urban energy planning within the Energy Atlas Berlin. The generation of indicator objects covering the indicator values and related processing information will be presented on the sample scenario estimation of heating energy consumption in buildings and neighbourhoods. In their entirety the key indicators will form an adequate image of the local energy situation for
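
    As a rough illustration of the kind of building-level key indicator mentioned in the sample scenario (not the Energy Atlas Berlin method itself), annual heating demand can be coarsely estimated from the heat-transferring envelope derived from the 3D building model together with assumed U-values and local heating degree days.

        def annual_heating_demand_kwh(envelope_m2, mean_u_value, heating_degree_days):
            """Very coarse transmission-loss estimate for one building.

            envelope_m2         : heat-transferring envelope area from the 3D model
                                  (walls + roof + ground slab)
            mean_u_value        : area-weighted mean U-value in W/(m^2 K) -- assumed
            heating_degree_days : local climate value in K*d per year -- assumed
            """
            # Q = U * A * HDD * 24 h/d, converted from Wh to kWh
            return mean_u_value * envelope_m2 * heating_degree_days * 24.0 / 1000.0

        # Example: 850 m^2 envelope, U = 1.4 W/(m^2 K), 3500 K*d -> roughly 100 MWh per year
        demand = annual_heating_demand_kwh(850.0, 1.4, 3500.0)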

  11. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic research about the importance of depiction during a 3D reconstruction process and based on empirical findings. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from social sciences to gain a grounded view of how production processes would take place in practice and which functions and roles images would play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems to be interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination the authors of this paper applied a qualitative content analysis for a sample of 26 previously

  12. Comparing "pick and place" task in spatial Augmented Reality versus non-immersive Virtual Reality for rehabilitation setting.

    PubMed

    Khademi, Maryam; Hondori, Hossein Mousavi; Dodakian, Lucy; Cramer, Steve; Lopes, Cristina V

    2013-01-01

    Introducing computer games to the rehabilitation market led to development of numerous Virtual Reality (VR) training applications. Although VR has provided tremendous benefit to the patients and caregivers, it has inherent limitations, some of which might be solved by replacing it with Augmented Reality (AR). The task of pick-and-place, which is part of many activities of daily living (ADL's), is one of the major affected functions stroke patients mainly expect to recover. We developed an exercise consisting of moving an object between various points, following a flash light that indicates the next target. The results show superior performance of subjects in spatial AR versus non-immersive VR setting. This could be due to the extraneous hand-eye coordination which exists in VR whereas it is eliminated in spatial AR. PMID:24110762

  13. ARC+(Registered Trademark) and ARC PC Welding Simulators: Teach Welders with Virtual Interactive 3D Technologies

    NASA Technical Reports Server (NTRS)

    Choquet, Claude

    2011-01-01

    123 Certification Inc., a Montreal-based company, has developed an innovative hands-on welding simulator solution to help build the welding workforce in the simplest way. The solution lies in virtual reality technology, which has been fully tested since the early 90's. The president and founder of 123 Certification Inc., Mr. Claude Choquet Ing. Msc. IWE, acts as a bridge between the welding and the programming worlds. Working in these fields for more than 20 years, he has filed 12 patents worldwide for a gesture control platform with leading-edge hardware related to simulation. In the summer of 2006, Mr. Choquet was proud to be invited to the annual IIW International Welding Congress in Quebec City to launch the ARC+ welding simulator. A 100% virtual reality system and web-based training center was developed to simulate multi-process, multi-material, multi-position and multi-pass welding. The simulator is intended to train welding students and apprentices in schools or industries. The welding simulator is composed of a real welding electrode holder (SMAW-GTAW) and gun (GMAW-FCAW), a head-mounted display (HMD), a 6-degrees-of-freedom tracking system for interaction between the user's hands and head, as well as external audio speakers. Both guns and HMD interact online and simultaneously. The welding simulation is based on the laws of physics and empirical results from detailed analysis of a series of welding tests based on industrial applications tested over the last 20 years. The simulation runs in real time, using a local logic network to determine the quality and shape of the created weld. These results are based on the orientation, distance, and speed of the welding torch and the depth of penetration. The welding process and resulting weld bead are displayed in a virtual environment with screenplay interactive training modules. For review, weld quality and recorded process values can be displayed and diagnosed after welding. To help in the learning process, a

  14. In silico exploration of c-KIT inhibitors by pharmaco-informatics methodology: pharmacophore modeling, 3D QSAR, docking studies, and virtual screening.

    PubMed

    Chaudhari, Prashant; Bari, Sanjay

    2016-02-01

    c-KIT is a component of the platelet-derived growth factor receptor family, classified as a type-III receptor tyrosine kinase. c-KIT has been reported to be involved in small cell lung cancer, other malignant human cancers, and inflammatory and autoimmune diseases associated with mast cells. Available c-KIT inhibitors suffer from tribulations of growing resistance or cardiac toxicity. A combined in silico pharmacophore and structure-based virtual screening was performed to identify novel potential c-KIT inhibitors. In the present study, five molecules from the ZINC database were retrieved as new potential c-KIT inhibitors, using Schrödinger's Maestro 9.0 molecular modeling suite. An atom-featured 3D QSAR model was built using previously reported c-KIT inhibitors containing the indolin-2-one scaffold. The developed 3D QSAR model ADHRR.24 was found to be significant (R2 = 0.9378, Q2 = 0.7832) and sufficiently robust with good predictive accuracy, as confirmed through external validation approaches, Y-randomization and the GH approach [GH score 0.84 and Enrichment factor (E) 4.964]. The present QSAR model was further validated for OECD principle 3, in that the applicability domain was calculated using a "standardization approach." Molecular docking of the QSAR dataset molecules and final ZINC hits was performed on the c-KIT receptor (PDB ID: 3G0E). Docking interactions were in agreement with the developed 3D QSAR model. Model ADHRR.24 was explored for ligand-based virtual screening followed by in silico ADME prediction studies. Five molecules from the ZINC database were obtained as potential c-KIT inhibitors with high in silico predicted activity and strong key binding interactions with the c-KIT receptor. PMID:26416560
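
    For readers unfamiliar with the enrichment figures quoted above, the enrichment factor and the Güner-Henry (GH) score are simple functions of the hit-list counts. The sketch below implements the standard formulas with hypothetical counts; it is not code or data from the study.

        def enrichment_factor(Ha, Ht, A, D):
            """EF = (Ha / Ht) / (A / D)
            Ha: actives retrieved, Ht: total hits retrieved,
            A : actives in the database, D: database size."""
            return (Ha / Ht) / (A / D)

        def guner_henry(Ha, Ht, A, D):
            """GH = [Ha * (3A + Ht) / (4 * Ht * A)] * [1 - (Ht - Ha) / (D - A)]"""
            yield_term = Ha * (3 * A + Ht) / (4 * Ht * A)
            false_hit_penalty = 1 - (Ht - Ha) / (D - A)
            return yield_term * false_hit_penalty

        # Hypothetical counts, for illustration only
        print(enrichment_factor(Ha=20, Ht=50, A=40, D=500))   # 5.0
        print(guner_henry(Ha=20, Ht=50, A=40, D=500))         # about 0.40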

  15. High precision analysis of an embryonic extensional fault-related fold using 3D orthorectified virtual outcrops: The viewpoint importance in structural geology

    NASA Astrophysics Data System (ADS)

    Tavani, Stefano; Corradetti, Amerigo; Billi, Andrea

    2016-05-01

    Image-based 3D modeling has recently opened the way to the use of virtual outcrop models in geology. An intriguing application of this method involves the production of orthorectified images of outcrops using almost any user-defined point of view, so that photorealistic cross-sections suitable for numerous geological purposes and measurements can be easily generated. These purposes include the accurate quantitative analysis of fault-fold relationships starting from imperfectly oriented and partly inaccessible real outcrops. We applied the method of image-based 3D modeling and orthorectification to a case study from the northern Apennines, Italy, where an incipient extensional fault affecting well-layered limestones is exposed on a 10-m-high, barely accessible cliff. Through a few simple steps, we constructed a high-quality image-based 3D model of the outcrop. In the model, we made a series of measurements including fault and bedding attitudes, which allowed us to derive the bedding-fault intersection direction. We then used this direction as viewpoint to obtain a distortion-free photorealistic cross-section, on which we measured bed dips and thicknesses as well as fault stratigraphic separations. These measurements allowed us to identify a slight difference (i.e. only 0.5°) between the hangingwall and footwall cutoff angles. We show that the hangingwall strain required to compensate the upward-decreasing displacement of the fault was accommodated by this 0.5° rotation (i.e. folding) and coeval 0.8% thickening of strata in the hangingwall relative to footwall strata. This evidence is consistent with trishear fault-propagation folding. Our results emphasize the viewpoint importance in structural geology and therefore the potential of using orthorectified virtual outcrops.
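
    The bedding-fault intersection direction used above as the viewing direction can be computed from the two measured plane attitudes: each plane's normal follows from its dip direction and dip, and the intersection line is the cross product of the two normals. The sketch below is standard structural-geology geometry with illustrative attitude values, not the authors' software.

        import numpy as np

        def plane_normal(dip_direction_deg, dip_deg):
            """Upward unit normal (east, north, up) of a plane from dip direction/dip."""
            dd, d = np.radians(dip_direction_deg), np.radians(dip_deg)
            return np.array([np.sin(d) * np.sin(dd),
                             np.sin(d) * np.cos(dd),
                             np.cos(d)])

        def intersection_trend_plunge(dd1, d1, dd2, d2):
            """Trend and plunge (degrees) of the intersection line of two planes."""
            line = np.cross(plane_normal(dd1, d1), plane_normal(dd2, d2))
            line /= np.linalg.norm(line)
            if line[2] > 0:                       # report the down-plunging end
                line = -line
            trend = np.degrees(np.arctan2(line[0], line[1])) % 360.0
            plunge = np.degrees(np.arcsin(-line[2]))
            return trend, plunge

        # Illustrative attitudes (dip direction/dip): bedding 120/25, fault 095/60
        print(intersection_trend_plunge(120, 25, 95, 60))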

  16. One concept, three implementations of 3D pharmacophore-based virtual screening: distinct coverage of chemical search space.

    PubMed

    Spitzer, Gudrun M; Heiss, Mathias; Mangold, Martina; Markt, Patrick; Kirchmair, Johannes; Wolber, Gerhard; Liedl, Klaus R

    2010-07-26

    Feature-based pharmacophore modeling is a well-established concept to support early stage drug discovery, where large virtual databases are filtered for potential drug candidates. The concept is implemented in popular molecular modeling software, including Catalyst, Phase, and MOE. With these software tools we performed a comparative virtual screening campaign on HSP90 and FXIa, taken from the 'maximum unbiased validation' data set. Despite the straightforward concept that pharmacophores are based on, we observed an unexpectedly high degree of variation among the hit lists obtained. By harmonizing the pharmacophore feature definitions of the investigated approaches, the exclusion volume sphere settings, and the screening parameters, we have derived a rationale for the observed differences, providing insight on the strengths and weaknesses of these algorithms. Application of more than one of these software tools in parallel will result in a widened coverage of chemical space. This is not only rooted in the dissimilarity of feature definitions but also in different algorithmic search strategies. PMID:20583761

  17. Sino-VirtualMoon: A 3D web platform using Chang’E-1 data for collaborative research

    NASA Astrophysics Data System (ADS)

    Chen, Min; Lin, Hui; Wen, Yongning; He, Li; Hu, Mingyuan

    2012-05-01

    The successful launch of the Chinese Chang’E-1 satellite created a valuable opportunity for lunar research and represented China’s remarkable leap in deep space exploration. With the observed data acquired by the Chang’E-1 satellite, a web platform was developed that aims at providing an open research workspace for experts to conduct collaborative scientific research on the Moon. In addition to supporting 3D visualization, the platform also provides collaborative tools for basic geospatial analysis of the Moon, and supports collaborative simulation of the dynamic formation of lunar impact craters caused by the collision of meteors (or small asteroids). Based on this platform, related multidisciplinary experts can conveniently contribute their domain knowledge for collaborative scientific research on the Moon.

  18. Using Immersive Virtual Reality to Reduce Work Accidents in Developing Countries.

    PubMed

    Nedel, Luciana; de Souza, Vinicius Costa; Menin, Aline; Sebben, Lucia; Oliveira, Jackson; Faria, Frederico; Maciel, Anderson

    2016-01-01

    Thousands of people die or are injured in work accidents every year. Although the lack of safety equipment is one of the causes, especially in developing countries, behavioral issues caused by psychosocial factors are also to blame. This article introduces the use of immersive VR simulators to preventively reduce accidents in the workplace by detecting behavioral patterns that may lead to an increased predisposition to risk exposure. The system simulates day-to-day situations, analyzes user reactions, and classifies the behaviors according to four psychosocial groups. The results of a user study support the effectiveness of this approach. PMID:26915116

  19. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    NASA Astrophysics Data System (ADS)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2005-01-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting in a single reference system the huge point clouds obtained after each acquisition phase, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  20. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    NASA Astrophysics Data System (ADS)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2004-12-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for solving this "contradiction" and describes how a huge 3D model was acquired and generated by using a special metrology Laser Radar. The procedures for reorienting in a single reference system the huge point clouds obtained after each acquisition phase, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  1. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm to increase the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important stage for the realization of future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over. This model is especially devoted to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. The results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  2. Real-time recording and classification of eye movements in an immersive virtual environment.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-01-01

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements. PMID:24113087
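
    The fixation/saccade identification described here can be sketched with a simple velocity-threshold (I-VT) pass over gaze-angle samples. The snippet below is a generic illustration with an assumed threshold and data layout; it is not the algorithm or toolbox published by the authors.

        import numpy as np

        def classify_ivt(t, gaze_deg, saccade_threshold=100.0):
            """Label each inter-sample interval as 'fixation' or 'saccade'.

            t        : (N,) timestamps in seconds
            gaze_deg : (N, 2) gaze angles (azimuth, elevation) in degrees
            saccade_threshold : angular velocity in deg/s (illustrative value)
            """
            t = np.asarray(t, dtype=float)
            gaze = np.asarray(gaze_deg, dtype=float)
            # Small-angle approximation of angular distance between samples
            step = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
            velocity = step / np.diff(t)
            return np.where(velocity > saccade_threshold, "saccade", "fixation")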

  3. Visual Perspectives within Educational Computer Games: Effects on Presence and Flow within Virtual Immersive Learning Environments

    ERIC Educational Resources Information Center

    Scoresby, Jon; Shelton, Brett E.

    2011-01-01

    The mis-categorizing of cognitive states involved in learning within virtual environments has complicated instructional technology research. Further, most educational computer game research does not account for how learning activity is influenced by factors of game content and differences in viewing perspectives. This study is a qualitative…

  4. The Development and Evaluation of a Virtual Radiotherapy Treatment Machine Using an Immersive Visualisation Environment

    ERIC Educational Resources Information Center

    Bridge, P.; Appleyard, R. M.; Ward, J. W.; Philips, R.; Beavis, A. W.

    2007-01-01

    Due to the lengthy learning process associated with complicated clinical techniques, undergraduate radiotherapy students can struggle to access sufficient time or patients to gain the level of expertise they require. By developing a hybrid virtual environment with real controls, it was hoped that group learning of these techniques could take place…

  5. Immersive Virtual Reality in the Psychology Classroom: What Purpose Could it Serve?

    ERIC Educational Resources Information Center

    Coxon, Matthew

    2013-01-01

    Virtual reality is by no means a new technology, yet it is increasingly being used, to different degrees, in education, training, rehabilitation, therapy, and home entertainment. Although the exact reasons for this shift are not the subject of this short opinion piece, it is possible to speculate that decreased costs, and increased performance, of…

  6. The Importance of Postural Cues for Determining Eye Height in Immersive Virtual Reality

    PubMed Central

    Leyrer, Markus; Linkenauger, Sally A.; Bülthoff, Heinrich H.; Mohler, Betty J.

    2015-01-01

    In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is provided is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and consequently, does not depend on an internalized value for eye height. PMID:25993274
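
    One concrete way eye height scales egocentric distance is through the angle of declination below the horizon: a target on flat ground seen at declination theta lies at distance d = h / tan(theta), where h is eye height. The short sketch below only evaluates this textbook relationship, illustrating how a mis-estimated eye height directly rescales perceived distance; it is not the analysis used in the study.

        import math

        def ground_distance(eye_height_m, declination_deg):
            """Distance to a ground point seen at a given angle below the horizon."""
            return eye_height_m / math.tan(math.radians(declination_deg))

        print(ground_distance(1.6, 10.0))   # about 9.1 m
        print(ground_distance(1.2, 10.0))   # same angle, lower eye height -> about 6.8 m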

  7. The importance of postural cues for determining eye height in immersive virtual reality.

    PubMed

    Leyrer, Markus; Linkenauger, Sally A; Bülthoff, Heinrich H; Mohler, Betty J

    2015-01-01

    In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is provided is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and consequently, does not depend on an internalized value for eye height. PMID:25993274

  8. On the Potential for Using Immersive Virtual Environments to Support Laboratory Experiment Contextualisation

    ERIC Educational Resources Information Center

    Machet, Tania; Lowe, David; Gutl, Christian

    2012-01-01

    This paper explores the hypothesis that embedding a laboratory activity into a virtual environment can provide a richer experimental context and hence improve the understanding of the relationship between a theoretical model and the real world, particularly in terms of the model's strengths and weaknesses. While an identified learning objective of…

  9. A Virtual Walk through London: Culture Learning through a Cultural Immersion Experience

    ERIC Educational Resources Information Center

    Shih, Ya-Chun

    2015-01-01

    Integrating Google Street View into a three-dimensional virtual environment in which users control personal avatars provides these said users with access to an innovative, interactive, and real-world context for communication and culture learning. We have selected London, a city famous for its rich historical, architectural, and artistic heritage,…

  10. Real-time recording and classification of eye movements in an immersive virtual environment

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-01-01

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements. PMID:24113087

  11. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  12. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  13. New Desktop Virtual Reality Technology in Technical Education

    ERIC Educational Resources Information Center

    Ausburn, Lynna J.; Ausburn, Floyd B.

    2008-01-01

    Virtual reality (VR) that immerses users in a 3D environment through use of headwear, body suits, and data gloves has demonstrated effectiveness in technical and professional education. Immersive VR is highly engaging and appealing to technically skilled young Net Generation learners. However, technical difficulty and very high costs have kept…

  14. Game engines and immersive displays

    NASA Astrophysics Data System (ADS)

    Chang, Benjamin; Destefano, Marc

    2014-02-01

    While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.

  15. Hypnosis delivered through immersive virtual reality for burn pain: A clinical case series.

    PubMed

    Patterson, David R; Wiechman, Shelley A; Jensen, Mark; Sharar, Sam R

    2006-04-01

    This study is the first to use virtual-reality technology on a series of clinical patients to make hypnotic analgesia less effortful for patients and to increase the efficiency of hypnosis by eliminating the need for the presence of a trained clinician. This technologically based hypnotic induction was used to deliver hypnotic analgesia to burn-injury patients undergoing painful wound-care procedures. Pre- and postprocedure measures were collected on 13 patients with burn injuries across 3 days. In an uncontrolled series of cases, there was a decrease in reported pain and anxiety, and the need for opioid medication was cut in half. The results support additional research on the utility and efficacy of hypnotic analgesia provided by virtual reality hypnosis. PMID:16581687

  16. Drug Design for CNS Diseases: Polypharmacological Profiling of Compounds Using Cheminformatic, 3D-QSAR and Virtual Screening Methodologies

    PubMed Central

    Nikolic, Katarina; Mavridis, Lazaros; Djikic, Teodora; Vucicevic, Jelica; Agbaba, Danica; Yelekci, Kemal; Mitchell, John B. O.

    2016-01-01

    HIGHLIGHTS: Many CNS targets are being explored for multi-target drug design. New databases and cheminformatic methods enable prediction of the primary pharmaceutical target and off-targets of compounds. QSAR, virtual screening and docking methods increase the potential of rational drug design. The diverse cerebral mechanisms implicated in Central Nervous System (CNS) diseases together with the heterogeneous and overlapping nature of phenotypes indicated that multitarget strategies may be appropriate for the improved treatment of complex brain diseases. Understanding how the neurotransmitter systems interact is also important in optimizing therapeutic strategies. Pharmacological intervention on one target will often influence another one, such as the well-established serotonin-dopamine interaction or the dopamine-glutamate interaction. It is now accepted that drug action can involve plural targets and that polypharmacological interaction with multiple targets, to address disease in more subtle and effective ways, is a key concept for development of novel drug candidates against complex CNS diseases. A multi-target therapeutic strategy for Alzheimer's disease resulted in the development of very effective Multi-Target Designed Ligands (MTDL) that act on both the cholinergic and monoaminergic systems, and also retard the progression of neurodegeneration by inhibiting amyloid aggregation. Many compounds already in databases have been investigated as ligands for multiple targets in drug-discovery programs. A probabilistic method, the Parzen-Rosenblatt Window approach, was used to build a “predictor” model using data collected from the ChEMBL database. The model can be used to predict both the primary pharmaceutical target and off-targets of a compound based on its structure. Several multi-target ligands were selected for further study, as compounds with possible additional beneficial pharmacological activities. Based on all these findings, it is concluded that multipotent

  17. Drug Design for CNS Diseases: Polypharmacological Profiling of Compounds Using Cheminformatic, 3D-QSAR and Virtual Screening Methodologies.

    PubMed

    Nikolic, Katarina; Mavridis, Lazaros; Djikic, Teodora; Vucicevic, Jelica; Agbaba, Danica; Yelekci, Kemal; Mitchell, John B O

    2016-01-01

    HIGHLIGHTS: Many CNS targets are being explored for multi-target drug design. New databases and cheminformatic methods enable prediction of the primary pharmaceutical target and off-targets of compounds. QSAR, virtual screening and docking methods increase the potential of rational drug design. The diverse cerebral mechanisms implicated in Central Nervous System (CNS) diseases together with the heterogeneous and overlapping nature of phenotypes indicated that multitarget strategies may be appropriate for the improved treatment of complex brain diseases. Understanding how the neurotransmitter systems interact is also important in optimizing therapeutic strategies. Pharmacological intervention on one target will often influence another one, such as the well-established serotonin-dopamine interaction or the dopamine-glutamate interaction. It is now accepted that drug action can involve plural targets and that polypharmacological interaction with multiple targets, to address disease in more subtle and effective ways, is a key concept for development of novel drug candidates against complex CNS diseases. A multi-target therapeutic strategy for Alzheimer's disease resulted in the development of very effective Multi-Target Designed Ligands (MTDL) that act on both the cholinergic and monoaminergic systems, and also retard the progression of neurodegeneration by inhibiting amyloid aggregation. Many compounds already in databases have been investigated as ligands for multiple targets in drug-discovery programs. A probabilistic method, the Parzen-Rosenblatt Window approach, was used to build a "predictor" model using data collected from the ChEMBL database. The model can be used to predict both the primary pharmaceutical target and off-targets of a compound based on its structure. Several multi-target ligands were selected for further study, as compounds with possible additional beneficial pharmacological activities. Based on all these findings, it is concluded that multipotent ligands

  18. 3D surface coordinate inspection of formed sheet material parts using optical measurement systems and virtual distortion compensation

    NASA Astrophysics Data System (ADS)

    Weckenmann, Albert A.; Gall, P.; Gabbia, A.

    2005-02-01

    Modern forming technology allows the production of highly sophisticated free-form sheet material components, affording great flexibility to the design and manufacturing processes across a wide range of industries. This increased design and manufacturing potential places an ever-growing demand on the accompanying inspection metrology. As a consequence of their surface shape, these parts undergo a reversible geometrical deformation caused by variations of the material and the manufacturing process, as well as by gravity. This distortion is removed during the assembly process, usually performed in automated robotic processes. For this reason, the part's toleranced parameters have to be inspected in a defined state, simulating the boundary conditions of the assembly process. Thus, the inspection process chain consists of six steps: picking the workpiece up, manual fixation of the workpiece, tactile measurement of the surface's coordinates using a defined measurement strategy, manual removal of the fixation and removal of the workpiece from the inspection area. These steps are both laborious and time consuming (for example, the inspection of a car door can take up to a working day to complete). Using optical measuring systems and virtual distortion compensation, this process chain can be dramatically shortened. Optical measuring systems provide as a measurement result a point cloud representing a sample of all nearest surfaces in the measuring range containing the measurand. From this data, a surface model of the measurand can be determined, independent of its position in the measuring range. For thin sheet material parts an approximating finite element model can be deduced from such a surface model. By means of pattern recognition, assembly-relevant features of the measurand can be identified and located on this model. Together with the boundary conditions given by the assembly process, the shape of the surface in its assembled state can be calculated using the finite

  19. A semi-immersive virtual reality incremental swing balance task activates prefrontal cortex: a functional near-infrared spectroscopy study.

    PubMed

    Basso Moro, Sara; Bisconti, Silvia; Muthalib, Makii; Spezialetti, Matteo; Cutini, Simone; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina

    2014-01-15

    Previous functional near-infrared spectroscopy (fNIRS) studies indicated that the prefrontal cortex (PFC) is involved in the maintenance of the postural balance after external perturbations. So far, no studies have been conducted to investigate the PFC hemodynamic response to virtual reality (VR) tasks that could be adopted in the field of functional neurorehabilitation. The aim of this fNIRS study was to assess PFC oxygenation response during an incremental and a control swing balance task (ISBT and CSBT, respectively) in a semi-immersive VR environment driven by a depth-sensing camera. It was hypothesized that: i) the PFC would be bilaterally activated in response to the increase of the ISBT difficulty, as this cortical region is involved in the allocation of attentional resources to maintain postural control; and ii) the PFC activation would be greater in the right than in the left hemisphere considering its dominance for visual control of body balance. To verify these hypotheses, 16 healthy male subjects were requested to stand barefoot while watching a 3 dimensional virtual representation of themselves projected onto a screen. They were asked to maintain their equilibrium on a virtual blue swing board susceptible to external destabilizing perturbations (i.e., randomizing the forward-backward direction of the impressed pulse force) during a 3-min ISBT (performed at four levels of difficulty) or during a 3-min CSBT (performed constantly at the lowest level of difficulty of the ISBT). The center of mass (COM), at each frame, was calculated and projected on the floor. When the subjects were unable to maintain the COM over the board, this became red (error). After each error, the time required to bring back the COM on the board was calculated (returning time). An eight-channel continuous wave fNIRS system was employed for measuring oxygenation changes (oxygenated-hemoglobin, O2Hb; deoxygenated-hemoglobin, HHb) related to the PFC activation (Brodmann Areas 10, 11
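
    The "error" and "returning time" measures described above can be derived from the per-frame COM projection with a few lines of array logic. The sketch below is a generic reconstruction under an assumed data layout (frame times, planar COM positions, axis-aligned board bounds); it is not the authors' processing pipeline.

        import numpy as np

        def errors_and_returning_times(t, com_xy, board_min, board_max):
            """Count board-leaving errors and the time needed to return after each.

            t        : (N,) frame times in seconds
            com_xy   : (N, 2) centre-of-mass projection on the floor
            board_min, board_max : (2,) axis-aligned bounds of the virtual board
            """
            com = np.asarray(com_xy, dtype=float)
            on_board = np.all((com >= board_min) & (com <= board_max), axis=1)
            returning_times, error_start = [], None
            for i in range(1, len(on_board)):
                if on_board[i - 1] and not on_board[i]:      # error begins (board turns red)
                    error_start = t[i]
                elif not on_board[i - 1] and on_board[i] and error_start is not None:
                    returning_times.append(t[i] - error_start)   # COM back on the board
                    error_start = None
            n_errors = len(returning_times) + (1 if error_start is not None else 0)
            return n_errors, returning_times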

  20. Transforming clinical imaging and 3D data for virtual reality learning objects: HTML5 and mobile devices implementation.

    PubMed

    Trelease, Robert B; Nieder, Gary L

    2013-01-01

    Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android tablets. This article describes complementary methods for creating comparable, multiplatform VR learning objects in the new HTML5 standard format, circumventing platform-specific limitations imposed by the QuickTime VR multimedia file format. Multiple types or "dimensions" of anatomical information can be embedded in such learning objects, supporting different kinds of online learning applications, including interactive atlases, examination questions, and complex, multi-structure presentations. Such HTML5 VR learning objects are usable on new mobile devices that do not support QuickTime VR, as well as on personal computers. Furthermore, HTML5 VR learning objects can be embedded in "ebook" document files, supporting the development of new types of electronic textbooks on mobile devices that are increasingly popular and self-adopted for mobile learning. PMID:23212750

  1. Reorienting in Virtual 3D Environments: Do Adult Humans Use Principal Axes, Medial Axes or Local Geometry?

    PubMed Central

    Ambosta, Althea H.; Reichert, James F.; Kelly, Debbie M.

    2013-01-01

    Studies have shown that animals, including humans, use the geometric properties of environments to orient. It has been proposed that orientation is accomplished primarily by encoding the principal axes (i.e., global geometry) of an environment. However, recent research has shown that animals use local information such as wall length and corner angles as well as local shape parameters (i.e., medial axes) to orient. The goal of the current study was to determine whether adult humans reorient according to global geometry based on principal axes or whether reliance is on local geometry such as wall length and sense information or medial axes. Using a virtual environment task, participants were trained to select a response box located at one of two geometrically identical corners within a featureless rectangular-shaped environment. Participants were subsequently tested in a transformed L-shaped environment that allowed for a dissociation of strategies based on principal axes, medial axes and local geometry. Results showed that participants relied primarily on a medial axes strategy to reorient in the L-shaped test environment. Importantly, the search behaviour of participants could not be explained by a principal axes-based strategy. PMID:24223869
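
    To make the principal-axes notion concrete, the short Python sketch below estimates the principal (global) axes of an L-shaped floor plan from points sampled along its walls; the corner coordinates are purely illustrative and are not those of the environment used in the study.

```python
import numpy as np

# Hedged sketch: the principal axes of an enclosure's floor plan can be
# estimated by a PCA over points sampled densely along its walls. The
# eigenvector with the largest eigenvalue is the global "long" axis that a
# principal-axes reorientation strategy would rely on.

corners = np.array([[0, 0], [4, 0], [4, 2], [2, 2], [2, 4], [0, 4]], float)

pts = []
for a, b in zip(corners, np.roll(corners, -1, axis=0)):
    pts.append(a + np.linspace(0.0, 1.0, 50)[:, None] * (b - a))  # sample one wall
pts = np.vstack(pts)

centered = pts - pts.mean(axis=0)
cov = centered.T @ centered / len(centered)
eigvals, eigvecs = np.linalg.eigh(cov)

print("centroid:", pts.mean(axis=0))
print("principal axis direction:", eigvecs[:, np.argmax(eigvals)])
```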

  2. Affordable virtual environments: building a virtual beach for clinical use.

    PubMed

    Sherstyuk, Andrei; Aschwanden, Christoph; Saiki, Stanley

    2005-01-01

    Virtual Reality has been used for clinical application for about 10 years and has proved to be an effective tool for treating various disorders. In this paper, we want to share our experience in building a 3D, motion tracked, immersive VR system for pain treatment and biofeedback research. PMID:15718779

  3. Manifold compositions, music visualization, and scientific sonification in an immersive virtual-reality environment.

    SciTech Connect

    Kaper, H. G.

    1998-01-05

    An interdisciplinary project encompassing sound synthesis, music composition, sonification, and visualization of music is facilitated by the high-performance computing capabilities and the virtual-reality environments available at Argonne National Laboratory. The paper describes the main features of the project's centerpiece, DIASS (Digital Instrument for Additive Sound Synthesis); "A.N.L.-folds", an equivalence class of compositions produced with DIASS; and application of DIASS in two experiments in the sonification of complex scientific data. Some of the larger issues connected with this project, such as the changing ways in which both scientists and composers perform their tasks, are briefly discussed.
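
    Since the record centres on additive sound synthesis, a generic Python sketch of that technique (summing sine partials under an amplitude envelope) may help; it is not the DIASS implementation, and the partial frequencies, amplitudes and envelope are arbitrary choices.

```python
import numpy as np

# Hedged sketch of additive synthesis: a tone is built by summing sine
# partials, each with its own amplitude; mapping data values to such
# parameters is also the basic move behind many sonification schemes.

sr = 44100
t = np.linspace(0.0, 2.0, int(sr * 2.0), endpoint=False)
partials = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.25)]   # (frequency Hz, amplitude)

tone = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)
tone *= np.exp(-1.5 * t)               # simple exponential decay envelope
tone /= np.abs(tone).max()             # normalize to [-1, 1] for playback

print(f"synthesized {len(tone) / sr:.1f} s tone from {len(partials)} partials")
```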

  4. A novel orthogonal transmission-virtual grating method and its applications in measuring micro 3-D shape of deformed liquid surface

    NASA Astrophysics Data System (ADS)

    Liu, Zhanwei; Huang, Xianfu; Xie, Huimin

    2013-02-01

    The deformation of a liquid surface directly involves surface tension, which accounts, for example, for the kinematics of aquatic insects at the gas-liquid interface and for light metals floating on a water surface. In this paper, a novel method based on a deformed transmission-virtual grating is proposed for measuring a deformed liquid surface. By placing an orthogonal grating (1-5 lines/mm) beneath a transparent water groove and capturing images of the grating through the deformed water surface, a full-field displacement vector directly associated with the 3-D deformation of the liquid surface can be evaluated by processing the recorded deformed fringe pattern in the two directions (x and y). The theory and equations of the method are presented in detail. A validation test measuring the water surface deformed by a Chinese 1-cent coin was conducted to demonstrate the capability of the developed method. The results show that the method is robust for determining the micro 3-D shape of a deformed liquid surface, with sub-micron resolution and a wide range of application.
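
    The core signal-processing step of such fringe methods is recovering a phase (and hence a displacement) from a deformed grating image. The Python sketch below demodulates a 1-D synthetic fringe pattern with a generic Hilbert-transform approach; it is not the authors' algorithm, and the further step of relating the recovered fringe displacement to the 3-D liquid surface through refraction is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

# Hedged 1-D sketch: a carrier fringe cos(2*pi*x/pitch + phi(x)) is demodulated
# via the analytic signal; phi(x) is then converted to the apparent in-plane
# shift of the grating lines. Grating pitch and the simulated shift are
# illustrative values only.

n = 2048
x = np.linspace(0.0, 0.1, n)                              # 10 cm field of view [m]
pitch = 1e-3                                              # 1 line/mm grating [m]
true_shift = 50e-6 * np.exp(-((x - 0.05) / 0.01) ** 2)    # 50 µm Gaussian bump

phi = 2 * np.pi * true_shift / pitch
fringes = 0.5 + 0.5 * np.cos(2 * np.pi * x / pitch + phi)

analytic = hilbert(fringes - fringes.mean())
phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * x / pitch
recovered = (phase - phase[0]) * pitch / (2 * np.pi)

print(f"max recovery error: {np.abs(recovered - true_shift).max() * 1e6:.2f} µm")
```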

  5. Modeling and Accuracy Assessment for 3D-VIRTUAL Reconstruction in Cultural Heritage Using Low-Cost Photogrammetry: Surveying of the "santa MARÍA Azogue" Church's Front

    NASA Astrophysics Data System (ADS)

    Robleda Prieto, G.; Pérez Ramos, A.

    2015-02-01

    It can be difficult to represent an architectural idea, a solution, a detail, or a newly created element "on paper", depending on the complexity of what is to be conveyed through its graphical representation, and it may be even harder to represent the existing reality (a building, a detail, ...), at least with an acceptable degree of definition and accuracy. As a solution to this problem, this paper presents a methodology for collecting measurement data by combining different methods and techniques in order to capture the characteristic geometry of architectural elements, especially those that are highly decorated and/or geometrically complex, and for assessing the accuracy of the results obtained, at a sufficient level of accuracy and at a moderate cost. In addition, a 3D reconstruction model can be obtained that provides strong support for producing orthoimages, beyond the point clouds obtained through more expensive methods such as laser scanning. This methodology was applied in the case study of the 3D virtual reconstruction of a medieval church's main façade, chosen because of the geometrical complexity of many of its elements, such as the main doorway with its archivolts and many details, as well as the rose window located above it, which is inaccessible due to its height.
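
    The accuracy assessment mentioned above typically boils down to comparing model-derived check points with independent reference coordinates. A hedged Python sketch with invented numbers:

```python
import numpy as np

# Hedged sketch: per-axis and 3D RMSE between check points taken from the
# photogrammetric model and reference measurements (e.g., total-station
# points). Coordinates below are invented for illustration.

model_pts = np.array([[1.02, 5.11, 0.98],
                      [3.98, 5.09, 1.01],
                      [2.51, 7.02, 3.48]])
ref_pts = np.array([[1.00, 5.10, 1.00],
                    [4.00, 5.10, 1.00],
                    [2.50, 7.00, 3.50]])

diff = model_pts - ref_pts
rmse_xyz = np.sqrt((diff ** 2).mean(axis=0))
rmse_3d = np.sqrt((np.linalg.norm(diff, axis=1) ** 2).mean())

print("RMSE per axis [m]:", np.round(rmse_xyz, 3))
print("3D RMSE [m]:", round(float(rmse_3d), 3))
```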

  6. Body Space in Social Interactions: A Comparison of Reaching and Comfort Distance in Immersive Virtual Reality

    PubMed Central

    Iachini, Tina; Coello, Yann; Frassinetti, Francesca; Ruggiero, Gennaro

    2014-01-01

    Background Do peripersonal space for acting on objects and interpersonal space for interacting with conspecifics share common mechanisms and reflect the social valence of stimuli? To answer this question, we investigated whether these spaces refer to a similar or different physical distance. Methodology Participants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active). Principal Findings Comfort-distance was larger than other conditions when participants were passive, but reachability and comfort distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs males, expansion with cylinder vs robot) and the gender of participants. Conclusions These findings reveal that peripersonal reaching and interpersonal comfort spaces share a common motor nature and are sensitive, to different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space. PMID:25405344

  7. On the potential for using immersive virtual environments to support laboratory experiment contextualisation

    NASA Astrophysics Data System (ADS)

    Machet, Tania; Lowe, David; Gütl, Christian

    2012-12-01

    This paper explores the hypothesis that embedding a laboratory activity into a virtual environment can provide a richer experimental context and hence improve the understanding of the relationship between a theoretical model and the real world, particularly in terms of the model's strengths and weaknesses. While an identified learning objective of laboratories is to support the understanding of the relationship between models and reality, the paper illustrates that this understanding is hindered by inherently limited experiments and that there is scope for improvement. Despite the contextualisation of learning activities having been shown to support learning objectives in many fields, there is traditionally little contextual information presented during laboratory experimentation. The paper argues that enhancing the laboratory activity with contextual information affords an opportunity to improve students' understanding of the relationship between the theoretical model and the experiment (which is effectively a proxy for the complex real world), thereby improving their understanding of the relationship between the model and reality. The authors propose that these improvements can be achieved by setting remote laboratories within context-rich virtual worlds.

  8. ‘My Virtual Dream’: Collective Neurofeedback in an Immersive Art Environment

    PubMed Central

    Kovacevic, Natasha; Ritter, Petra; Tays, William; Moreno, Sylvain; McIntosh, Anthony Randal

    2015-01-01

    While human brains are specialized for complex and variable real world tasks, most neuroscience studies reduce environmental complexity, which limits the range of behaviours that can be explored. Motivated to overcome this limitation, we conducted a large-scale experiment with electroencephalography (EEG) based brain-computer interface (BCI) technology as part of an immersive multi-media science-art installation. Data from 523 participants were collected in a single night. The exploratory experiment was designed as a collective computer game where players manipulated mental states of relaxation and concentration with neurofeedback targeting modulation of relative spectral power in alpha and beta frequency ranges. Besides validating robust time-of-night effects, gender differences and distinct spectral power patterns for the two mental states, our results also show differences in neurofeedback learning outcome. The unusually large sample size allowed us to detect unprecedented speed of learning changes in the power spectrum (~ 1 min). Moreover, we found that participants' baseline brain activity predicted subsequent neurofeedback beta training, indicating state-dependent learning. Besides revealing these training effects, which are relevant for BCI applications, our results validate a novel platform engaging art and science and fostering the understanding of brains under natural conditions. PMID:26154513
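
    The neurofeedback quantity described above, relative spectral power in the alpha and beta bands, can be computed along the lines of the Python sketch below (synthetic EEG, Welch's method); the band limits and the 1-40 Hz normalization range are common conventions assumed here, not details taken from the study.

```python
import numpy as np
from scipy.signal import welch

# Hedged sketch: relative alpha (8-12 Hz) and beta (13-30 Hz) power of one
# EEG channel, normalized by broadband (1-40 Hz) power, on synthetic data.

fs = 256
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 10 * t)            # alpha-band component
       + 0.5 * np.sin(2 * np.pi * 20 * t)    # beta-band component
       + rng.normal(0.0, 0.5, t.size))       # background noise

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def rel_power(fmin, fmax):
    band = (f >= fmin) & (f <= fmax)
    broad = (f >= 1) & (f <= 40)
    return np.trapz(psd[band], f[band]) / np.trapz(psd[broad], f[broad])

print(f"relative alpha: {rel_power(8, 12):.2f}, relative beta: {rel_power(13, 30):.2f}")
```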

  9. Towards a virtual C. elegans: a framework for simulation and visualization of the neuromuscular system in a 3D physical environment.

    PubMed

    Palyanov, Andrey; Khayrulin, Sergey; Larson, Stephen D; Dibert, Alexander

    The nematode C. elegans is the only animal with a known neuronal wiring diagram, or "connectome". During the last three decades, extensive studies of the C. elegans have provided wide-ranging data about it, but few systematic ways of integrating these data into a dynamic model have been put forward. Here we present a detailed demonstration of a virtual C. elegans aimed at integrating these data in the form of a 3D dynamic model operating in a simulated physical environment. Our current demonstration includes a realistic flexible worm body model, muscular system and a partially implemented ventral neural cord. Our virtual C. elegans demonstrates successful forward and backward locomotion when sending sinusoidal patterns of neuronal activity to groups of motor neurons. To account for the relatively slow propagation velocity and the attenuation of neuronal signals, we introduced "pseudo neurons" into our model to simulate simplified neuronal dynamics. The pseudo neurons also provide a good way of visualizing the nervous system's structure and activity dynamics. PMID:22935967
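
    The sinusoidal motor drive mentioned above can be pictured as a phase-lagged sine wave travelling along the body segments, rectified into antagonistic dorsal and ventral activations. The Python sketch below is a generic illustration of that pattern, not the authors' simulator; the segment count, frequency and wave number are assumptions.

```python
import numpy as np

# Hedged sketch: a travelling sinusoidal activation along the body. Reversing
# the sign of the phase lag (n_waves) would reverse the wave and hence switch
# between forward and backward locomotion in a body simulator.

n_segments = 24
freq = 0.5        # undulation frequency [Hz]
n_waves = 1.5     # number of body waves along the worm (illustrative)

def muscle_activation(time_s):
    seg = np.arange(n_segments) / n_segments              # normalized body position
    phase = 2 * np.pi * (freq * time_s - n_waves * seg)   # head-to-tail wave
    dorsal = np.clip(np.sin(phase), 0.0, None)             # muscles only contract
    ventral = np.clip(-np.sin(phase), 0.0, None)
    return dorsal, ventral

dorsal, ventral = muscle_activation(0.3)
print("dorsal drive:", np.round(dorsal, 2))
```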

  10. Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition

    NASA Astrophysics Data System (ADS)

    Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro

    This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized glasses based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show how dynamic gestures are effectively recognized so that a more natural interaction and immersive navigation in the virtual world is achieved.

  11. Conformal Visualization for Partially-Immersive Platforms

    PubMed Central

    Petkov, Kaloian; Papadopoulos, Charilaos; Zhang, Min; Kaufman, Arie E.; Gu, Xianfeng

    2010-01-01

    Current immersive VR systems such as the CAVE provide an effective platform for the immersive exploration of large 3D data. A major limitation is that in most cases at least one display surface is missing due to space, access or cost constraints. This partially-immersive visualization results in a substantial loss of visual information that may be acceptable for some applications; however, it becomes a major obstacle for critical tasks, such as the analysis of medical data. We propose a conformal deformation rendering pipeline for the visualization of datasets on partially-immersive platforms. The angle-preserving conformal mapping approach is used to map the 360° 3D view volume to arbitrary display configurations. It has the desirable property of preserving shapes under distortion, which is important for identifying features, especially in medical data. The conformal mapping is used for rasterization, real-time ray tracing, and volume rendering of the datasets. Since the technique is applied during rendering, we can construct stereoscopic images from the data, which is usually not true for image-based distortion approaches. We demonstrate the stereo conformal mapping rendering pipeline in the partially-immersive 5-wall Immersive Cabin (IC) for virtual colonoscopy and architectural review. PMID:26279083
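
    As a toy stand-in for the angle-preserving mapping idea (not the authors' pipeline), the Python sketch below applies a stereographic projection, a classic conformal map, to a few view directions; it illustrates how directions covering a wide field can be flattened onto a plane while local angles, and hence local shapes, are preserved.

```python
import numpy as np

# Hedged sketch: stereographic projection of unit view directions from the
# pole (0, 0, -1) onto the plane z = 0. The map is conformal, i.e. it
# preserves local angles.

def stereographic(directions):
    x, y, z = directions[:, 0], directions[:, 1], directions[:, 2]
    return np.column_stack([x / (1.0 + z), y / (1.0 + z)])

dirs = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # normalize to unit vectors

print(np.round(stereographic(dirs), 3))
```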

  12. Comparative usability studies of full vs. partial immersive virtual reality simulation for medical education and training.

    PubMed

    Pierce, Jennifer; Gutiérrez, Fátima; Vergara, Víctor M; Alverson, Dale C; Qualls, Clifford; Saland, Linda; Goldsmith, Timothy; Caudell, Thomas Preston

    2008-01-01

    Virtual reality (VR) simulation provides a means of making experiential learning reproducible and reusable. This study was designed to determine the efficiency and satisfaction components of usability. Previously, it was found that first-year medical students using a VR simulation for medical education demonstrated effective learning, as measured by knowledge structure improvements, both with and without a head-mounted display (HMD), but students using an HMD showed statistically greater improvement in knowledge structures than those not using one. However, in the current analysis of other components of usability, there were no overall significant differences in efficiency (ease of use) or in satisfaction within this same group of randomized subjects when comparing students using an HMD with those not using one. These types of studies may be important in determining the most appropriate, cost-effective VR simulation technology needed to achieve specific learning goals and objectives. PMID:18391324

  13. Structural properties of spatial representations in blind people: Scanning images constructed from haptic exploration or from locomotion in a 3-D audio virtual environment.

    PubMed

    Afonso, Amandine; Blum, Alan; Katz, Brian F G; Tarroux, Philippe; Borst, Grégoire; Denis, Michel

    2010-07-01

    When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants. PMID:20551339

  14. Resting-state fMRI activity predicts unsupervised learning and memory in an immersive virtual reality environment.

    PubMed

    Wong, Chi Wah; Olafsson, Valur; Plank, Markus; Snider, Joseph; Halgren, Eric; Poizner, Howard; Liu, Thomas T

    2014-01-01

    In the real world, learning often proceeds in an unsupervised manner without explicit instructions or feedback. In this study, we employed an experimental paradigm in which subjects explored an immersive virtual reality environment on each of two days. On day 1, subjects implicitly learned the location of 39 objects in an unsupervised fashion. On day 2, the locations of some of the objects were changed, and object location recall performance was assessed and found to vary across subjects. As prior work had shown that functional magnetic resonance imaging (fMRI) measures of resting-state brain activity can predict various measures of brain performance across individuals, we examined whether resting-state fMRI measures could be used to predict object location recall performance. We found a significant correlation between performance and the variability of the resting-state fMRI signal in the basal ganglia, hippocampus, amygdala, thalamus, insula, and regions in the frontal and temporal lobes, regions important for spatial exploration, learning, memory, and decision making. In addition, performance was significantly correlated with resting-state fMRI connectivity between the left caudate and the right fusiform gyrus, lateral occipital complex, and superior temporal gyrus. Given the basal ganglia's role in exploration, these findings suggest that tighter integration of the brain systems responsible for exploration and visuospatial processing may be critical for learning in a complex environment. PMID:25286145
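
    The core of the analysis described above is a between-subject correlation between resting-state signal variability and behaviour. A hedged Python sketch with synthetic data (the effect size and all numbers are invented):

```python
import numpy as np

# Hedged sketch: per-subject temporal variability (SD) of a resting-state ROI
# time series is correlated with a behavioural score across subjects.

rng = np.random.default_rng(1)
n_subjects, n_volumes = 20, 180

# synthetic ROI time series with a different noise level per subject
roi_ts = rng.normal(0.0, rng.uniform(0.5, 2.0, (n_subjects, 1)), (n_subjects, n_volumes))

variability = roi_ts.std(axis=1)                                      # per-subject SD
recall_score = 0.6 * variability + rng.normal(0.0, 0.3, n_subjects)   # toy behaviour

r = np.corrcoef(variability, recall_score)[0, 1]
print(f"correlation between ROI variability and recall: r = {r:.2f}")
```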

  15. Resting-State fMRI Activity Predicts Unsupervised Learning and Memory in an Immersive Virtual Reality Environment

    PubMed Central

    Wong, Chi Wah; Olafsson, Valur; Plank, Markus; Snider, Joseph; Halgren, Eric; Poizner, Howard; Liu, Thomas T.

    2014-01-01

    In the real world, learning often proceeds in an unsupervised manner without explicit instructions or feedback. In this study, we employed an experimental paradigm in which subjects explored an immersive virtual reality environment on each of two days. On day 1, subjects implicitly learned the location of 39 objects in an unsupervised fashion. On day 2, the locations of some of the objects were changed, and object location recall performance was assessed and found to vary across subjects. As prior work had shown that functional magnetic resonance imaging (fMRI) measures of resting-state brain activity can predict various measures of brain performance across individuals, we examined whether resting-state fMRI measures could be used to predict object location recall performance. We found a significant correlation between performance and the variability of the resting-state fMRI signal in the basal ganglia, hippocampus, amygdala, thalamus, insula, and regions in the frontal and temporal lobes, regions important for spatial exploration, learning, memory, and decision making. In addition, performance was significantly correlated with resting-state fMRI connectivity between the left caudate and the right fusiform gyrus, lateral occipital complex, and superior temporal gyrus. Given the basal ganglia's role in exploration, these findings suggest that tighter integration of the brain systems responsible for exploration and visuospatial processing may be critical for learning in a complex environment. PMID:25286145

  16. Eliciting Affect via Immersive Virtual Reality: A Tool for Adolescent Risk Reduction

    PubMed Central

    Houck, Christopher D.; Barker, David H.; Garcia, Abbe Marrs; Spitalnick, Josh S.; Curtis, Virginia; Roye, Scott; Brown, Larry K.

    2014-01-01

    Objective A virtual reality environment (VRE) was designed to expose participants to substance use and sexual risk-taking cues to examine the utility of VR in eliciting adolescent physiological arousal. Methods 42 adolescents (55% male) with a mean age of 14.54 years (SD = 1.13) participated. Physiological arousal was examined through heart rate (HR), respiratory sinus arrhythmia (RSA), and self-reported somatic arousal. A within-subject design (neutral VRE, VR party, and neutral VRE) was utilized to examine changes in arousal. Results The VR party demonstrated an increase in physiological arousal relative to a neutral VRE. Examination of individual segments of the party (e.g., orientation, substance use, and sexual risk) demonstrated that HR was significantly elevated across all segments, whereas only the orientation and sexual risk segments demonstrated significant impact on RSA. Conclusions This study provides preliminary evidence that VREs can be used to generate physiological arousal in response to substance use and sexual risk cues. PMID:24365699

  17. The effects of actual human size display and stereoscopic presentation on users' sense of being together with and of psychological immersion in a virtual character.

    PubMed

    Ahn, Dohyun; Seo, Youngnam; Kim, Minkyung; Kwon, Joung Huem; Jung, Younbo; Ahn, Jungsun; Lee, Doohwang

    2014-07-01

    This study examined the role of display size and mode in increasing users' sense of being together with and of their psychological immersion in a virtual character. Using a high-resolution three-dimensional virtual character, this study employed a 2×2 (stereoscopic mode vs. monoscopic mode × actual human size vs. small size display) factorial design in an experiment with 144 participants randomly assigned to each condition. Findings showed that stereoscopic mode had a significant effect on both users' sense of being together and psychological immersion. However, display size affected only the sense of being together. Furthermore, display size was not found to moderate the effect of stereoscopic mode. PMID:24606057

  18. The Effects of Actual Human Size Display and Stereoscopic Presentation on Users' Sense of Being Together with and of Psychological Immersion in a Virtual Character

    PubMed Central

    Ahn, Dohyun; Seo, Youngnam; Kim, Minkyung; Kwon, Joung Huem; Jung, Younbo; Ahn, Jungsun

    2014-01-01

    This study examined the role of display size and mode in increasing users' sense of being together with and of their psychological immersion in a virtual character. Using a high-resolution three-dimensional virtual character, this study employed a 2×2 (stereoscopic mode vs. monoscopic mode × actual human size vs. small size display) factorial design in an experiment with 144 participants randomly assigned to each condition. Findings showed that stereoscopic mode had a significant effect on both users' sense of being together and psychological immersion. However, display size affected only the sense of being together. Furthermore, display size was not found to moderate the effect of stereoscopic mode. PMID:24606057

  19. A method for generating an illusion of backwards time travel using immersive virtual reality—an exploratory study

    PubMed Central

    Friedman, Doron; Pizarro, Rodrigo; Or-Berkers, Keren; Neyret, Solène; Pan, Xueni; Slater, Mel

    2014-01-01

    We introduce a new method, based on immersive virtual reality (IVR), to give people the illusion of having traveled backwards through time to relive a sequence of events in which they can intervene and change history. The participant had played an important part in events with a tragic outcome—deaths of strangers—by having to choose between saving 5 people or 1. We consider whether the ability to go back through time, and intervene, to possibly avoid all deaths, has an impact on how the participant views such moral dilemmas, and also whether this experience leads to a re-evaluation of past unfortunate events in their own lives. We carried out an exploratory study where in the “Time Travel” condition 16 participants relived these events three times, seeing incarnations of their past selves carrying out the actions that they had previously carried out. In a “Repetition” condition another 16 participants replayed the same situation three times, without any notion of time travel. Our results suggest that those in the Time Travel condition did achieve an illusion of “time travel” provided that they also experienced an illusion of presence in the virtual environment, body ownership, and agency over the virtual body that substituted their own. Time travel produced an increase in guilt feelings about the events that had occurred, and an increase in support of utilitarian behavior as the solution to the moral dilemma. Time travel also produced an increase in implicit morality as judged by an implicit association test. The time travel illusion was associated with a reduction of regret associated with bad decisions in their own lives. The results show that when participants have a third action that they can take to solve the moral dilemma (that does not immediately involve choosing between the 1 and the 5) then they tend to take this option, even though it is useless in solving the dilemma, and actually results in the deaths of a greater number. PMID:25228889

  20. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    NASA Astrophysics Data System (ADS)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects that they may encounter under operational conditions. An approach for pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting, used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby, et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research and Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g. rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene-Weather-Atmosphere-Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic

  1. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  2. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
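
    The underlying technique, binaural rendering over headphones, amounts to convolving a mono source with a left/right pair of head-related impulse responses (HRIRs). The Python sketch below is a generic illustration of that step, not the Convolvotron itself, and uses random placeholder HRIRs instead of measured ones.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hedged sketch: convolve a mono test tone with placeholder left/right HRIRs.
# With real measured HRIRs, the sound would appear to come from the
# corresponding direction when played over headphones.

sr = 44100
mono = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)     # 1 s test tone

rng = np.random.default_rng(0)
hrir_left = rng.normal(0.0, 0.05, 256)                  # placeholder HRIRs
hrir_right = rng.normal(0.0, 0.05, 256)

binaural = np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)], axis=1)
binaural /= np.abs(binaural).max()                      # normalize for playback

print("binaural buffer shape (samples, channels):", binaural.shape)
```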

  3. A common feature-based 3D-pharmacophore model generation and virtual screening: identification of potential PfDHFR inhibitors.

    PubMed

    Adane, Legesse; Bharatam, Prasad V; Sharma, Vikas

    2010-10-01

    A four-feature 3D-pharmacophore model was built from a set of 24 compounds whose activities were reported against the V1/S strain of the Plasmodium falciparum dihydrofolate reductase (PfDHFR) enzyme. This is an enzyme harboring Asn51Ile + Cys59Arg + Ser108Asn + Ile164Leu mutations. The HipHop module of the Catalyst program was used to generate the model. Selection of the best model among the 10 hypotheses generated by HipHop was carried out based on rank and best-fit values or alignments of the training set compounds onto a particular hypothesis. The best model (hypo1) consisted of two H-bond donors, one hydrophobic aromatic, and one hydrophobic aliphatic features. Hypo1 was used as a query to virtually screen Maybridge2004 and NCI2000 databases. The hits obtained from the search were subsequently subjected to FlexX and Glide docking studies. Based on the binding scores and interactions in the active site of quadruple-mutant PfDHFR, a set of nine hits were identified as potential inhibitors. PMID:19995305
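
    The last step of such a virtual-screening workflow, shortlisting hits by docking score, is simple to express in code. The Python sketch below uses hypothetical compound identifiers, scores and cutoff, not FlexX or Glide output formats:

```python
# Hedged sketch: rank pharmacophore-search hits by docking score (more
# negative = better) and keep those below a chosen cutoff for inspection
# of their active-site interactions. All identifiers and values are invented.

hits = {
    "MB-00123": -9.4,
    "MB-00456": -7.1,
    "NCI-0789": -10.2,
    "NCI-0321": -6.5,
}

cutoff = -8.0
shortlist = sorted((cid for cid, score in hits.items() if score <= cutoff), key=hits.get)
print("candidate inhibitors:", shortlist)
```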

  4. 2.5D/3D Models for the enhancement of architectural-urban heritage. A Virtual Tour of the design of the Fascist headquarters in Littoria

    NASA Astrophysics Data System (ADS)

    Ippoliti, E.; Calvano, M.; Mores, L.

    2014-05-01

    Enhancement of cultural heritage is not simply a matter of preserving material objects but comes full circle only when the heritage can be enjoyed and used by the community. This is the rationale behind this presentation: an urban Virtual Tour to explore the 1937 design of the Fascist Headquarters in Littoria, now part of Latina, by the architect Oriolo Frezzotti. Although the application is deliberately "simple", it was part of a much broader framework of goals. One such goal was to create "friendly and perceptively meaningful" interfaces by integrating different "3D models" and thereby enriching them. In fact, by exploiting the activation of natural mechanisms of visual perception and the ensuing emotional emphasis associated with vision, the illusionistic simulation of the scene facilitates access to the data even for "amateur" users. A second goal was to "contextualise the information" on which the concept of cultural heritage is based. In the application, communication of the heritage is linked to its physical and linguistic context; the latter is then used as a basis from which to set out to explore and understand the historical evidence. A third goal was to foster the widespread dissemination and sharing of this heritage of knowledge. On the one hand, we worked to make the application usable from the Web; on the other, we established a reliable, rapid operational procedure with high-quality processed data and contents. The procedure was also repeatable on a large scale.

  5. BLUI: a body language user interface for 3D gestural drawing

    NASA Astrophysics Data System (ADS)

    Brody, Arthur W.; Hartman, Chris

    1999-05-01

    We are developing a system to implement gestural drawing in an immersive 3D environment. We present a virtual artist who draws expressive forms in virtual space. In the art world, the term 'gestural' commonly refers to mark making that derives from the richness of movement of the artist. This focus on the character of motion is much like a similar focus on follow-through in athletic activity. Accordingly, we base the appearance of the rendered image on the body language of the artist, hence the acronym BLUI. BLUI is developed on the ImmersaDESK, an immersive virtual reality environment where the artist wears head-tracking goggles and uses a wand. Information from video, wand, and head tracker is used to generate a virtual artist whose brush tracks with the wand.

  6. Spatial working memory in immersive virtual reality foraging: path organization, traveling distance and search efficiency in humans (Homo sapiens).

    PubMed

    De Lillo, Carlo; Kirby, Melissa; James, Frances C

    2014-05-01

    Search and serial recall tasks were used in the present study to characterize the factors affecting the ability of humans to keep track of a set of spatial locations while traveling in an immersive virtual reality foraging environment. The first experiment required the exhaustive exploration of a set of locations following a procedure previously used with other primate and non-primate species to assess their sensitivity to the geometric arrangement of foraging sites. The second experiment assessed the dependency of search performance on search organization by requiring the participants to recall specific trajectories throughout the foraging space. In the third experiment, the distance between the foraging sites was manipulated in order to contrast the effects of organization and traveling distance on recall accuracy. The results show that humans benefit from the use of organized search patterns when attempting to monitor their travel though either a clustered "patchy" space or a matrix of locations. Their ability to recall a series of locations is dependent on whether the order in which they are explored conformed or did not conform to specific organization principles. Moreover, the relationship between search efficiency and search organization is not confounded by effects of traveling distance. These results indicate that in humans, organizational factors may play a large role in their ability to forage efficiently. The extent to which such dependency may pertain to other primates and could be accounted for by visual organization processes is discussed on the basis of previous studies focused on perceptual grouping, search, and serial recall in non-human species. PMID:24038208
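
    The travelling-distance measure contrasted with search organization above is just the summed length of the visit path. A small Python sketch with illustrative site coordinates shows how an organized sweep differs from a haphazard visit order:

```python
import numpy as np

# Hedged sketch: total travelling distance of a search path through a set of
# foraging sites; coordinates and visit orders are illustrative only.

sites = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [1, 1], [0, 1]], float)

def path_length(order):
    pts = sites[list(order)]
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

organized = [0, 1, 2, 3, 4, 5]    # sweep one row, then the other
haphazard = [0, 3, 1, 4, 2, 5]    # jump back and forth between rows

print(f"organized path:  {path_length(organized):.2f}")
print(f"haphazard path:  {path_length(haphazard):.2f}")
```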

  7. Learning immersion without getting wet

    NASA Astrophysics Data System (ADS)

    Aguilera, Julieta C.

    2012-03-01

    This paper describes the teaching of an immersive environments class in the Spring of 2011. The class had students from undergraduate as well as graduate art-related majors. Their digital backgrounds and interests were also diverse. These variables were channeled as different approaches throughout the semester. Class components included fundamentals of stereoscopic computer graphics to explore spatial depth, 3D modeling and skeleton animation to explore presence, exposure to formats like a stereo projection wall and dome environments to compare field of view across devices, and finally, interaction and tracking to explore issues of embodiment. All these components were supported by theoretical readings discussed in class. Guest artists presented their work in Virtual Reality, Dome Environments and other immersive formats. Museum professionals also introduced students to space science visualizations, which utilize immersive formats. Here I present the assignments and their outcomes, together with insights as to how the creation of immersive environments can be learned through constraints that expose students to situations of embodied cognition.

  8. The Responses of Medical General Practitioners to Unreasonable Patient Demand for Antibiotics - A Study of Medical Ethics Using Immersive Virtual Reality

    PubMed Central

    Pan, Xueni; Slater, Mel; Beacco, Alejandro; Navarro, Xavi; Bellido Rivas, Anna I.; Swapp, David; Hale, Joanna; Forbes, Paul Alexander George; Denvir, Catrina; de C. Hamilton, Antonia F.; Delacroix, Sylvie

    2016-01-01

    Background Dealing with insistent patient demand for antibiotics is an all too common part of a General Practitioner’s daily routine. This study explores the extent to which portable Immersive Virtual Reality technology can help us gain an accurate understanding of the factors that influence a doctor’s response to the ethical challenge underlying such tenacious requests for antibiotics (given the threat posed by growing anti-bacterial resistance worldwide). It also considers the potential of such technology to train doctors to face such dilemmas. Experiment Twelve experienced GPs and nine trainees were confronted with an increasingly angry demand by a woman to prescribe antibiotics to her mother in the face of inconclusive evidence that such antibiotic prescription is necessary. The daughter and mother were virtual characters displayed in immersive virtual reality. The specific purposes of the study were twofold: first, whether experienced GPs would be more resistant to patient demands than the trainees, and second, to investigate whether medical doctors would take the virtual situation seriously. Results Eight out of the 9 trainees prescribed the antibiotics, whereas 7 out of the 12 GPs did so. On the basis of a Bayesian analysis, these results yield reasonable statistical evidence in favor of the notion that experienced GPs are more likely to withstand the pressure to prescribe antibiotics than trainee doctors, thus answering our first question positively. As for the second question, a post experience questionnaire assessing the participants’ level of presence (together with participants’ feedback and body language) suggested that overall participants did tend towards the illusion of being in the consultation room depicted in the virtual reality and that the virtual consultation taking place was really happening. PMID:26889676

  9. Learning as Immersive Experiences: Using the Four-Dimensional Framework for Designing and Evaluating Immersive Learning Experiences in a Virtual World

    ERIC Educational Resources Information Center

    de Freitas, Sara; Rebolledo-Mendez, Genaro; Liarokapis, Fotis; Magoulas, George; Poulovassilis, Alexandra

    2010-01-01

    Traditional approaches to learning have often focused upon knowledge transfer strategies that have centred on textually-based engagements with learners, and dialogic methods of interaction with tutors. The use of virtual worlds, with text-based, voice-based and a feeling of "presence" naturally is allowing for more complex social interactions and…

  10. Generation IV Nuclear Energy Systems Construction Cost Reductions through the Use of Virtual Environments - Task 5 Report: Generation IV Reactor Virtual Mockup Proof-of-Principle Study

    SciTech Connect

    Timothy Shaw; Anthony Baratta; Vaughn Whisker

    2005-02-28

    This Task 5 report is part of a three-year DOE NERI-sponsored effort evaluating immersive virtual reality (CAVE) technology for design review, construction planning, and maintenance planning and training for next-generation nuclear power plants. The program covers the development of full-scale virtual mockups generated from 3D CAD data and presented in a CAVE visualization facility. A virtual mockup of the PBMR reactor cavity was created, and applications of virtual mockup technology to improve Gen IV design review, construction planning, and maintenance planning are discussed.

  11. Knowledge and Valorization of Historical Sites Through 3d Documentation and Modeling

    NASA Astrophysics Data System (ADS)

    Farella, E.; Menna, F.; Nocerino, E.; Morabito, D.; Remondino, F.; Campi, M.

    2016-06-01

    The paper presents the first results of an interdisciplinary project related to the 3D documentation, dissemination, valorization and digital access of archeological sites. Besides the mere 3D documentation aim, the project has two goals: (i) to easily explore and share via the web the references and results of the interdisciplinary work, including the interpretative process and the final reconstruction of the remains; and (ii) to promote and valorize archaeological areas using reality-based 3D data and Virtual Reality devices. This method has been verified on the ruins of the archeological site of Pausilypon, a maritime villa of the Roman period (Naples, Italy). Using Unity3D, the virtual tour of the heritage site was integrated and enriched with the surveyed 3D data, text documents, CAAD reconstruction hypotheses, drawings, photos, etc. In this way, starting from the actual appearance of the ruins (panoramic images), passing through the 3D digital surveying models and several other historical information sources, the user is able to access virtual contents and reconstructed scenarios, all in a single virtual, interactive and immersive environment. These contents and scenarios allow users to derive documentation and geometrical information, understand the site, perform analyses, see interpretative processes, communicate historical information and valorize the heritage location.

  12. History Educators and the Challenge of Immersive Pasts: A Critical Review of Virtual Reality "Tools" and History Pedagogy

    ERIC Educational Resources Information Center

    Allison, John

    2008-01-01

    This paper will undertake a critical review of the impact of virtual reality tools on the teaching of history. Virtual reality is useful in several different ways. History educators, elementary and secondary school teachers and professors, can all profit from the digital environment. Challenges arise quickly however. Virtual reality technologies…

  13. SciEthics Interactive: Science and Ethics Learning in a Virtual Environment

    ERIC Educational Resources Information Center

    Nadolny, Larysa; Woolfrey, Joan; Pierlott, Matthew; Kahn, Seth

    2013-01-01

    Learning in immersive 3D environments allows students to collaborate, build, and interact with difficult course concepts. This case study examines the design and development of the TransGen Island within the SciEthics Interactive project, a National Science Foundation-funded, 3D virtual world emphasizing learning science content in the context of…

  14. Digital Planetariums and Immersive Visualizations for Astronomy Education

    NASA Astrophysics Data System (ADS)

    Yu, K. C.; Sahami, K.

    2015-11-01

    Modern “fulldome” video digital planetariums combine immersive projection, which facilitates the understanding of relationships involving wide spatial angles, with 3D virtual environments, which facilitate learning of spatial relationships by allowing models and scenes to be viewed from multiple frames of reference. We report on an efficacy study of the use of digital planetariums for learning the astronomical topic of the seasons. Comparison of curriculum tests taken immediately after instruction versus pre-instruction shows significant gains for students who viewed visualizations in the immersive dome, versus their counterparts who viewed non-immersive content and those in the control group who saw no visualizations. The greater gains in learning in the digital planetarium can be traced not only to its ability to show wide-angle phenomena and the benefits accorded by the simulation software, but also to the lower-quality visual experience for students viewing the non-immersive versions of the lectures.

  15. A novel semi-immersive virtual reality visuo-motor task activates ventrolateral prefrontal cortex: a functional near-infrared spectroscopy study

    NASA Astrophysics Data System (ADS)

    Basso Moro, Sara; Carrieri, Marika; Avola, Danilo; Brigadoi, Sabrina; Lancia, Stefania; Petracca, Andrea; Spezialetti, Matteo; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina

    2016-06-01

    Objective. In the last few years, interest in applying virtual reality systems to neurorehabilitation has been increasing. Their compatibility with neuroimaging techniques, such as functional near-infrared spectroscopy (fNIRS), allows for the investigation of brain reorganization with multimodal stimulation and real-time control of the changes occurring in brain activity. The present study was aimed at testing a novel semi-immersive visuo-motor task (VMT), which has features suitable for adoption in the neurorehabilitation of upper-limb motor function. Approach. A virtual environment was simulated through a three-dimensional hand-sensing device (the LEAP Motion Controller), and the concomitant VMT-related prefrontal cortex (PFC) response was monitored non-invasively by fNIRS. For the VMT, performed at three different levels of difficulty, it was hypothesized that the PFC would be activated, with a greater level of activation expected in the ventrolateral PFC (VLPFC), given its involvement in motor action planning and in the allocation of attentional resources to generate goals from current contexts. Twenty-one subjects were asked to move their right hand/forearm with the purpose of guiding a virtual sphere over a virtual path. A twenty-channel fNIRS system was employed for measuring changes in PFC oxygenated and deoxygenated hemoglobin (O2Hb and HHb, respectively). Main results. A VLPFC O2Hb increase and a concomitant HHb decrease were observed during VMT performance, without any difference in relation to the task difficulty. Significance. The present study has revealed a particular involvement of the VLPFC in the execution of the novel proposed semi-immersive VMT, which is adoptable in the neurorehabilitation field.

  16. The Development of a Virtual 3D Model of the Renal Corpuscle from Serial Histological Sections for E-Learning Environments

    ERIC Educational Resources Information Center

    Roth, Jeremy A.; Wilson, Timothy D.; Sandig, Martin

    2015-01-01

    Histology is a core subject in the anatomical sciences where learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education 3D models and learning tools have been associated…

  17. Recent Advances in Immersive Visualization of Ocean Data: Virtual Reality Through the Web on Your Laptop Computer

    NASA Astrophysics Data System (ADS)

    Hermann, A. J.; Moore, C.; Soreide, N. N.

    2002-12-01

    Ocean circulation is irrefutably three dimensional, and powerful new measurement technologies and numerical models promise to expand our three-dimensional knowledge of the dynamics further each year. Yet, most ocean data and model output is still viewed using two-dimensional maps. Immersive visualization techniques allow the investigator to view their data as a three dimensional world of surfaces and vectors which evolves through time. The experience is not unlike holding a part of the ocean basin in one's hand, turning and examining it from different angles. While immersive, three dimensional visualization has been possible for at least a decade, the technology was until recently inaccessible (both physically and financially) for most researchers. It is not yet fully appreciated by practicing oceanographers how new, inexpensive computing hardware and software (e.g. graphics cards and controllers designed for the huge PC gaming market) can be employed for immersive, three dimensional, color visualization of their increasingly huge datasets and model output. In fact, the latest developments allow immersive visualization through web servers, giving scientists the ability to "fly through" three-dimensional data stored half a world away. Here we explore what additional insight is gained through immersive visualization, describe how scientists of very modest means can easily avail themselves of the latest technology, and demonstrate its implementation on a web server for Pacific Ocean model output.

  18. Why 3D? The Need for Solution Based Modeling in a National Geoscience Organization.

    NASA Astrophysics Data System (ADS)

    Terrington, Ricky; Napier, Bruce; Howard, Andy; Ford, Jon; Hatton, William

    2008-05-01

    In recent years, national geoscience organizations have increasingly utilized 3D model data as an output to the stakeholder community. Advances in both software and hardware have led to an increasing use of 3D depictions of geoscience data alongside standard 2D data formats such as maps and GIS data. By characterizing geoscience data in 3D, knowledge transfer between geoscientists and stakeholders is improved, as the mindset and thought processes are communicated more effectively in a 3D model than in a 2D flat-file format. 3D models allow the user to understand the conceptual basis of the 2D data and aid decision making at local, regional and national scales on issues such as foundation and engineering conditions, groundwater vulnerability, aquifer recharge and flow, and resource extraction and storage. The British Geological Survey has established a mechanism and infrastructure through the Digital Geoscience Spatial Model Programme (DGSM) to produce these types of 3D geoscience outputs. This cyber-infrastructure not only allows good data and information management but also enables geoscientists to capture their know-how and implicit and tacit knowledge for their 3D interpretations. A user of this data will then have access to value-added information for the 3D dataset, including the knowledge, approach, inferences, uncertainty, wider context and best practice acquired during the 3D interpretation. To complement this cyber-infrastructure, an immersive 3D Visualization Facility was constructed at the British Geological Survey offices in Keyworth, Nottingham and Edinburgh. These custom-built facilities allow stereo projection of geoscience data, immersing users and stakeholders in a wealth of 3D geological data. Successful uses of these facilities include collaborative 3D modeling, demonstrations to public stakeholders and Virtual Field Mapping Reconnaissance.

  19. IQ-Station: A Low Cost Portable Immersive Environment

    SciTech Connect

    Eric Whiting; Patrick O'Leary; William Sherman; Eric Wernert

    2010-11-01

    The emergence of inexpensive 3D TVs, affordable input and rendering hardware, and open-source software has created a yeasty atmosphere for the development of low-cost immersive environments (IE). A low-cost IE system, or IQ-station, fashioned from commercial off-the-shelf technology (COTS) and coupled with a targeted immersive application, can be a viable laboratory instrument for enhancing scientific workflow for exploration and analysis. The use of an IQ-station in a laboratory setting also has the potential of quickening the adoption of a more sophisticated immersive environment as a critical enabler in modern scientific and engineering workflows. Prior work in immersive environments generally required either a head-mounted display (HMD) system or a large projector-based implementation, both of which have limitations in terms of cost, usability, or space requirements. The solution presented here provides an alternative platform that offers a reasonable immersive experience while addressing those limitations. Our work brings together the needed hardware and software to create a fully integrated immersive display and interface system that can be readily deployed in laboratories and common workspaces. By doing so, it is now feasible for immersive technologies to be included in researchers' day-to-day workflows. The IQ-Station sets the stage for much wider adoption of immersive environments outside the small communities of virtual reality centers.

  20. Exploring Ecosystems from the Inside: How Immersive Multi-user Virtual Environments Can Support Development of Epistemologically Grounded Modeling Practices in Ecosystem Science Instruction

    NASA Astrophysics Data System (ADS)

    Kamarainen, Amy M.; Metcalf, Shari; Grotzer, Tina; Dede, Chris

    2015-04-01

    Recent reform efforts and the next generation science standards emphasize the importance of incorporating authentic scientific practices into science instruction. Modeling can be a particularly challenging practice to address because modeling occurs within a socially structured system of representation that is specific to a domain. Further, in the process of modeling, experts interact deeply with domain-specific content knowledge and integrate modeling with other scientific practices in service of a larger investigation. It can be difficult to create learning experiences enabling students to engage in modeling practices that both honor the position of the novice along a spectrum toward more expert understanding and align well with the practices and reasoning used by experts in the domain. In this paper, we outline the challenges in teaching modeling practices specific to the domain of ecosystem science, and we present a description of a curriculum built around an immersive virtual environment that offers unique affordances for supporting student engagement in modeling practices. Illustrative examples derived from pilot studies suggest that the tools and context provided within the immersive virtual environment helped support student engagement in modeling practices that are epistemologically grounded in the field of ecosystem science.

  1. Analyzing Visitors' Discourse, Attitudes, Perceptions, and Knowledge Acquisition in an Art Museum Tour after Using a 3D Virtual Environment

    ERIC Educational Resources Information Center

    D'Alba, Adriana

    2012-01-01

    The main purpose of this mixed-methods research was to explore and analyze visitors' overall experience while they attended a museum exhibition, and to examine how this experience was affected by previously using a virtual 3-dimensional representation of the museum itself. The research measured knowledge acquisition in a virtual museum, and…

  2. Academic Library Services in Virtual Worlds: An Examination of the Potential for Library Services in Immersive Environments

    ERIC Educational Resources Information Center

    Ryan, Jenna; Porter, Marjorie; Miller, Rebecca

    2010-01-01

    Current literature on libraries abounds with articles about the uses and the potential of new interactive communication technology, including Web 2.0 tools. Recently, the advent and use of virtual worlds have received top billing in these works. Many library institutions are exploring these virtual environments; this exploration and the…

  3. The Use of Immersive Virtual Reality in the Learning Sciences: Digital Transformations of Teachers, Students, and Social Context

    ERIC Educational Resources Information Center

    Bailenson, Jeremy N.; Yee, Nick; Blascovich, Jim; Beall, Andrew C.; Lundblad, Nicole; Jin, Michael

    2008-01-01

    This article illustrates the utility of using virtual environments to transform social interaction via behavior and context, with the goal of improving learning in digital environments. We first describe the technology and theories behind virtual environments and then report data from 4 empirical studies. In Experiment 1, we demonstrated that…

  4. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D…

  5. Exploring the Instruction of Fluid Dynamics Concepts in an Immersive Virtual Environment: A Case Study of Pedagogical Strategies

    ERIC Educational Resources Information Center

    Lio, Cindy; Mazur, Joan

    2004-01-01

    The deployment of immersive, non-restrictive environments for instruction and learning presents a new set of challenges for instructional designers and educators. Adopting the conceptual frameworks of Sherin's (2002) learning while teaching and Vygotsky's (1978) cultural development via the mediation of tools, this paper explores one professor's…

  6. Cue combination for 3D location judgements

    PubMed Central

    Svarverud, Ellen; Gilson, Stuart J.; Glennerster, Andrew

    2010-01-01

    Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only ‘physical’ (stereo and motion parallax) or ‘texture-based’ cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and position of the target relative to other objects was varied, the ratio of ‘physical’ to ‘texture-based’ thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying traditional models of 3D reconstruction. PMID:20143898
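
    The reported link between thresholds and matching biases is consistent with standard reliability-weighted (maximum-likelihood) cue combination, in which each cue is weighted by the inverse of its variance and discrimination thresholds stand in for the standard deviations. A minimal sketch in Python, with purely illustrative threshold and estimate values (not data from the paper):

      # Reliability-weighted combination of two distance cues; thresholds act
      # as proxies for each cue's standard deviation.
      def combine_cues(est_physical, est_texture, thr_physical, thr_texture):
          var_p, var_t = thr_physical ** 2, thr_texture ** 2
          w_p = var_t / (var_p + var_t)   # weight of the 'physical' cue
          w_t = var_p / (var_p + var_t)   # weight of the 'texture-based' cue
          return w_p * est_physical + w_t * est_texture, (w_p, w_t)

      # Illustrative numbers: physical cues signal no change in distance (0 cm),
      # texture-based cues in the rescaled scene signal a 20 cm change.
      combined, weights = combine_cues(0.0, 20.0, thr_physical=2.0, thr_texture=4.0)
      print(combined, weights)   # the predicted match is biased toward the more reliable cue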

  7. Cue combination for 3D location judgements.

    PubMed

    Svarverud, Ellen; Gilson, Stuart J; Glennerster, Andrew

    2010-01-01

    Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only 'physical' (stereo and motion parallax) or 'texture-based' cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and position of the target relative to other objects was varied, the ratio of 'physical' to 'texture-based' thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying traditional models of 3D reconstruction. PMID:20143898

  8. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone, and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training, and assessment of the difficulties of the surgical procedures prior to surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727 and DE018962. PMID:20816308
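
    Bony-segment repositioning of the kind described here ultimately amounts to applying a rigid-body transform to the vertices of a segmented surface model. A minimal sketch in Python/NumPy, using hypothetical vertex data rather than the authors' CAS system:

      import numpy as np

      def reposition_segment(vertices, rotation, translation):
          # Apply a rigid-body transform (3x3 rotation plus translation) to an (N, 3) vertex array.
          return vertices @ rotation.T + translation

      # Hypothetical mini-mesh: three vertices of a bony segment (mm).
      segment = np.array([[0.0, 0.0, 0.0],
                          [10.0, 0.0, 0.0],
                          [0.0, 10.0, 0.0]])

      # Rotate 5 degrees about the z-axis and advance the segment 3 mm along y.
      theta = np.radians(5.0)
      Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
      moved = reposition_segment(segment, Rz, translation=np.array([0.0, 3.0, 0.0]))
      print(moved)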

  9. Second Life in Higher Education: Assessing the Potential for and the Barriers to Deploying Virtual Worlds in Learning and Teaching

    ERIC Educational Resources Information Center

    Warburton, Steven

    2009-01-01

    "Second Life" (SL) is currently the most mature and popular multi-user virtual world platform being used in education. Through an in-depth examination of SL, this article explores its potential and the barriers that multi-user virtual environments present to educators wanting to use immersive 3-D spaces in their teaching. The context is set by…

  10. Age and gestural differences in the ease of rotating a virtual 3D image on a large, multi-touch screen.

    PubMed

    Ku, Chao-Jen; Chen, Li-Chieh

    2013-04-01

    Providing a natural mapping between multi-touch gestures and manipulations of digital content is important for user-friendly interfaces. Although there are some guidelines for 2D digital content available in the literature, a guideline for manipulation of 3D content has yet to be developed. In this research, two sets of gestures were developed for experiments on the ease of manipulating 3D content on a touchscreen. As there are typically large differences between age groups in the ease of learning new interfaces, we compared a group of adults with a group of children. Each person carried out three tasks linked to rotating the digital model of a green turtle to inspect major characteristics of its body. Task completion time, subjective evaluations, and gesture-changing frequency were measured. Results showed that using the conventional gestures for 2D object rotation was not appropriate in the 3D environment. Gestures that required multiple touch points hampered the real-time visibility of rotational effects on a large screen. While the cumulative effects of 3D rotations became complicated after intensive operations, simpler gestures facilitated the mapping between 2D control movements and 3D content displays. For rotation in Cartesian coordinates, moving one fingertip horizontally or vertically on a 2D touchscreen corresponded to the rotation angles of two axes for 3D content, while the relative movement between two fingertips was used to control the rotation angle of the third axis. Based on behavior analysis, adults and children differed in the diversity of gesture types and in the touch points with respect to the object's contours. Offering a robust mechanism for gestural inputs is necessary for universal control of such a system. PMID:24032318
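
    The mapping described above (single-finger drag deltas controlling rotation about two axes, and the two-finger twist controlling the third) can be sketched as follows in Python; the gain value and the assignment of yaw, pitch, and roll are illustrative assumptions, not the authors' parameters:

      import math

      def one_finger_rotation(dx, dy, gain=0.5):
          # Map a single-finger drag (pixels) to rotation angles about two axes (degrees).
          return gain * dx, gain * dy   # horizontal drag -> yaw, vertical drag -> pitch

      def two_finger_rotation(p1_old, p2_old, p1_new, p2_new):
          # Map the change in orientation of the line between two fingertips to roll (degrees).
          angle_old = math.atan2(p2_old[1] - p1_old[1], p2_old[0] - p1_old[0])
          angle_new = math.atan2(p2_new[1] - p1_new[1], p2_new[0] - p1_new[0])
          return math.degrees(angle_new - angle_old)

      # Dragging one finger 40 px right and 10 px up:
      print(one_finger_rotation(40, -10))
      # Twisting two fingers relative to each other:
      print(two_finger_rotation((0, 0), (100, 0), (0, 0), (100, 20)))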

  11. Virtual Prototyping at CERN

    NASA Astrophysics Data System (ADS)

    Gennaro, Silvano De

    The VENUS (Virtual Environment Navigation in the Underground Sites) project is probably the largest virtual reality application to engineering design in the world. VENUS is just over one year old and offers a fully immersive and stereoscopic "flythru" of the LHC pits for the proposed experiments, including the experimental-area equipment and the surface models being prepared for a territorial impact study. VENUS' Virtual Prototypes are an ideal replacement for the wooden models traditionally built for past CERN machines: they are generated directly from the EUCLID CAD files, so they are fully reliable, they can be updated in a matter of minutes, and they allow designers to explore them from the inside at one-to-one scale. Navigation can be performed on the computer screen, on a large stereoscopic projection screen, or in immersive conditions with a helmet and a 3D mouse. Using specialized collision-detection software, the computer can find optimal paths to lower each detector part into the pits and position it at its destination, letting us visualize the whole assembly process. During construction, these paths can be fed to a robot controller, which can operate the bridge cranes and build the LHC almost without human intervention. VENUS is currently developing a multiplatform VR browser that will let the whole HEP community access the LHC's Virtual Prototypes over the web.
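
    At their simplest, the collision checks underlying such path planning reduce to testing whether the bounding volume of a moving part overlaps that of a fixed obstacle. A minimal axis-aligned sketch in Python (illustrative geometry only, not the VENUS software):

      def aabb_overlap(box_a, box_b):
          # Return True if two axis-aligned boxes, each given as (min_xyz, max_xyz), intersect.
          (amin, amax), (bmin, bmax) = box_a, box_b
          return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

      # Illustrative example: a detector part being lowered past a fixed support structure.
      part    = ((0.0, 0.0, 10.0), (2.0, 2.0, 14.0))
      support = ((1.5, 1.5,  0.0), (3.0, 3.0, 12.0))
      print(aabb_overlap(part, support))   # True -> this position is not collision-free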

  12. Generation IV Nuclear Energy Systems Construction Cost Reductions through the Use of Virtual Environments - Task 4 Report: Virtual Mockup Maintenance Task Evaluation

    SciTech Connect

    Timothy Shaw; Anthony Baratta; Vaughn Whisker

    2005-02-28

    Task 4 report of a three-year DOE NERI-sponsored effort evaluating immersive virtual reality (CAVE) technology for design review, construction planning, and maintenance planning and training for next-generation nuclear power plants. The program covers development of full-scale virtual mockups generated from 3D CAD data and presented in a CAVE visualization facility. This report focuses on using full-scale virtual mockups for nuclear power plant training applications.

  13. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning

    PubMed Central

    Gee, Carole T.

    2013-01-01

    • Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495
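
    The segmentation step described above, isolating denser structures such as seeds from the surrounding matrix in a reconstructed image stack, can be approximated by density thresholding followed by connected-component labelling. A minimal sketch in Python with a synthetic volume (the threshold and data are illustrative, not the authors' workflow):

      import numpy as np
      from scipy import ndimage

      # Synthetic 3D image stack (z, y, x) of X-ray attenuation values.
      rng = np.random.default_rng(0)
      volume = rng.normal(100, 10, size=(50, 64, 64))
      volume[20:30, 20:30, 20:30] += 80      # a denser, seed-like inclusion

      # Density thresholding followed by connected-component labelling.
      mask = volume > 150                    # keep only high-attenuation voxels
      labels, n_objects = ndimage.label(mask)
      print(n_objects, "segmented object(s), covering", int(mask.sum()), "voxels")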

  14. Overestimation of heights in virtual reality is influenced more by perceived distal size than by the 2-D versus 3-D dimensionality of the display

    NASA Technical Reports Server (NTRS)

    Dixon, Melissa W.; Proffitt, Dennis R.; Kaiser, M. K. (Principal Investigator)

    2002-01-01

    One important aspect of the pictorial representation of a scene is the depiction of object proportions. Yang, Dixon, and Proffitt (1999 Perception 28 445-467) recently reported that the magnitude of the vertical-horizontal illusion was greater for vertical extents presented in three-dimensional (3-D) environments compared to two-dimensional (2-D) displays. However, because all of the 3-D environments were large and all of the 2-D displays were small, the question remains whether the observed magnitude differences were due solely to the dimensionality of the displays (2-D versus 3-D) or to the perceived distal size of the extents (small versus large). We investigated this question by comparing observers' judgments of vertical relative to horizontal extents on a large but 2-D display compared to the large 3-D and the small 2-D displays used by Yang et al (1999). The results confirmed that the magnitude differences for vertical overestimation between display media are influenced more by the perceived distal object size rather than by the dimensionality of the display.

  15. PC-Based Virtual Reality for CAD Model Viewing

    ERIC Educational Resources Information Center

    Seth, Abhishek; Smith, Shana S.-F.

    2004-01-01

    Virtual reality (VR), as an emerging visualization technology, has introduced an unprecedented communication method for collaborative design. VR refers to an immersive, interactive, multisensory, viewer-centered, 3D computer-generated environment and the combination of technologies required to build such an environment. This article introduces the…

  16. Andragogical Characteristics and Expectations of University of Hawai'i Adult Learners in a 3D Multi-User Virtual Environment

    ERIC Educational Resources Information Center

    Meeder, Rebecca L.

    2012-01-01

    The purpose of this study was to discover which andragogical characteristics and expectations of adult learners manifested themselves in the three-dimensional, multi-user virtual environment known as Second Life. This digital ethnographic study focused specifically on adult students within the University of Hawai'i Second Life group and their…

  17. KinImmerse: Macromolecular VR for NMR ensembles

    PubMed Central

    Block, Jeremy N; Zielinski, David J; Chen, Vincent B; Davis, Ian W; Vinson, E Claire; Brady, Rachael; Richardson, Jane S; Richardson, David C

    2009-01-01

    Background In molecular applications, virtual reality (VR) and immersive virtual environments have generally been used and valued for the visual and interactive experience – to enhance intuition and communicate excitement – rather than as part of the actual research process. In contrast, this work develops a software infrastructure for research use and illustrates such use on a specific case. Methods The Syzygy open-source toolkit for VR software was used to write the KinImmerse program, which translates the molecular capabilities of the kinemage graphics format into software for display and manipulation in the DiVE (Duke immersive Virtual Environment) or other VR system. KinImmerse is supported by the flexible display construction and editing features in the KiNG kinemage viewer and it implements new forms of user interaction in the DiVE. Results In addition to molecular visualizations and navigation, KinImmerse provides a set of research tools for manipulation, identification, co-centering of multiple models, free-form 3D annotation, and output of results. The molecular research test case analyzes the local neighborhood around an individual atom within an ensemble of nuclear magnetic resonance (NMR) models, enabling immersive visual comparison of the local conformation with the local NMR experimental data, including target curves for residual dipolar couplings (RDCs). Conclusion The promise of KinImmerse for production-level molecular research in the DiVE is shown by the locally co-centered RDC visualization developed there, which gave new insights now being pursued in wider data analysis. PMID:19222844
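
    Co-centering an NMR ensemble on a chosen atom, as described here, amounts to translating every model so that the selected atom coincides across models. A minimal sketch in Python/NumPy (hypothetical coordinate arrays, not the KinImmerse or KiNG API):

      import numpy as np

      def co_center(models, atom_index):
          # Translate each (N, 3) coordinate array so the chosen atom sits at the origin.
          return [coords - coords[atom_index] for coords in models]

      # Two hypothetical three-atom models from an NMR ensemble (angstroms).
      model_a = np.array([[1.0, 1.0, 1.0], [2.0, 1.5, 0.5], [3.0, 2.0, 1.0]])
      model_b = np.array([[0.5, 0.8, 1.2], [1.6, 1.4, 0.7], [2.7, 2.1, 1.1]])

      centered = co_center([model_a, model_b], atom_index=1)
      for m in centered:
          print(m[1])   # the selected atom is now at the origin in every model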

  18. The Best of All Worlds: Immersive Interfaces for Art Education in Virtual and Real World Teaching and Learning Environments

    ERIC Educational Resources Information Center

    Grenfell, Janette

    2013-01-01

    Selected ubiquitous technologies encourage collaborative participation between higher education students and educators within a virtual socially networked e-learning landscape. Multiple modes of teaching and learning, ranging from real world experiences, to text and digital images accessed within the Deakin studies online learning management…

  19. User Interface Technology Transfer to NASA's Virtual Wind Tunnel System

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1998-01-01

    Funded by NASA grants for four years, the Brown Computer Graphics Group has developed novel 3D user interfaces for desktop and immersive scientific visualization applications. This past grant period supported the design and development of a software library, the 3D Widget Library, which supports the construction and run-time management of 3D widgets. The 3D Widget Library is a mechanism for transferring user interface technology from the Brown Graphics Group to the Virtual Wind Tunnel system at NASA Ames as well as the public domain.

  20. Immersive Education, an Annotated Webliography

    ERIC Educational Resources Information Center

    Pricer, Wayne F.

    2011-01-01

    In this second installment of a two-part feature on immersive education, a webliography provides resources discussing the use of various types of computer simulations, including (a) augmented reality, (b) virtual reality programs, (c) gaming resources for teaching with technology, (d) virtual reality lab resources, (e) virtual reality standards…